titotal

Computational Physicist
7631 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
622

I think there's an inherent limit to the number of conservatives that EA can appeal to, because the fundamental values of EA are strongly in the liberal tradition. For example, if you believe moral foundations theory (which I think has at least a grain of truth to it), conservatives value tradition, authority, and purity far more than liberals or leftists do: and in EA these values are (correctly, imo) not included as specific end goals. An EA and a conservative might still end up agreeing on preserving certain traditions, but the EA will be doing so as a means to the end of increasing the general happiness of the population, not as a goal in and of itself.

Even if you're skeptical of these models of values, you can just look at a bunch of cultural factors that would be off-putting to the run-of-the-mill conservative: EA is respectful of LGBT people, including respecting transgender individuals and their pronouns; it has a large population of vegans and vegetarians; and it says you should care about far-off Africans just as much as your own neighbours.

As a result, when EA and adjacent groups try to be welcoming to conservatives, they don't end up getting your Trump-voting uncle: they get unusual conservatives, such as Mencius Moldbug and the obsessive race-IQ people (the Manifest conference had a ton of these). These are a small group of people and by no means the majority, but even their presence in the general vicinity of EA is enough to disgust and deter many people from the movement.

This puts EA in the worst of both worlds politically: the group of people comfortable with tolerating both trans people and scientific racists is minuscule, and it seriously hampers the ability to expand beyond the Sam Harris demographic. I think a better plan is to not compromise on progressive values, but to be welcoming to differences on the economic front.


I'd say a big problem with trying to make the forum a community space is that it's just not a lot of fun to post here. The forum has a dry, serious tone that emulates academic papers, which communicates that this is a place for posting Serious and Important articles; attempts at levity or informality often get downvoted, and god forbid you don't write in perfect, grammatically correct English. Sometimes when I'm posting here I feel pressure to act like a robot, which is not exactly conducive to community bonding.

I didn't downvote you (and actually agree with you), but I'm assuming that the people who did would justify it by the combative tone of your writing.

Personally, I think the forum polices overall tone far too heavily. It punishes newcomers for not "learning" the dominant way of speaking (with the side effect of punishing non-native English speakers), and it deters things like humour that make a place actually pleasant to spend time in.

According to this article, CEO shooter Luigi Mangione:

really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism

It doesn't look like he was part of the EA movement proper (which is very clear about nonviolence), but could EA principles have played a part in his motivations, similarly to SBF?

When I answered this question, I did so with the implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be overestimates more often than underestimates.

Though I think it would be a grave mistake to conclude from the fact that ChatGPT mostly complies with developer and user intent that we have any reliable way of controlling an actual machine superintelligence. The top researchers in the field say we don’t

The link you posted does not support your claim. The linked paper's 24 authors include some top AI researchers, like Geoffrey Hinton and Stuart Russell, but they obviously do not include all of them, and they are obviously not a representative sample. The author list also includes people with limited expertise in the subject, including a psychologist and a medieval historian.

As for your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. Remember that OpenAI was founded as an AI safety organisation. The actions of Sam Altman seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all people, or even the majority, but it does seem like it's happened at least once.

The EA space in general has fairly weak defenses against ideas that sound persuasive but don't actually hold up to detailed scrutiny. An initiative like this, if implemented correctly, seems like a step in the right direction.

I find it unusual that this end of year review contains barely any details of things you've actually done this year. Why should donors consider your organization as opposed to other AI risk orgs?

"It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don't think it's uncommon for someone who secretly suspects it's all a load of nonsense to diplomatically offer a statement like the above in "polite EA company".

Like you, I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly.
