Thanks for writing this--even though I've been familiar with AI x-risk for a while, it didn't really hit me on an emotional level that dying from misaligned AI would happen to me too, and not just "humanity" in the abstract. This post changed that.
Might eventually be useful to have one of these that accounts for biorisk too, although biorisk "timelines" aren't as straightforward to estimate as the date that humanity builds the first AGI.
Thanks, great points (and counterpoints)!
If you are a community builder (especially one with a lot of social status), be loudly transparent with what you are building your corner of the movement into and what tradeoffs you are/aren’t willing to make.
I like this suggestion--what do you imagine this transparency looks like? Do you think, e.g., EA groups should have pages outlining their community-building philosophies on their websites? Should university groups write public Forum posts about their plans and reasoning before every semester/quarter or academic year? Would you advocate for more community-building roundtables at EAGs? (These are just a few possible forms of transparency that came to mind; I'm very interested in hearing more.)
Thanks for the comment! I agree with your points--there are definitely elements of EA, whether they're core to EA or just cultural norms within the community, that bear a strong resemblance to cult characteristics.
My main point in this post was to explore why someone who hasn't interacted with EA before (and might not be aware of most of the things you mentioned) might still get a cult impression. I didn't mean to claim that the Google search results for "altruism" are the most common reason why people come away with a cult impression. Rather, I think they might explain a few perplexing cases of cult impressions that occur before people become more familiar with EA. I should have made this distinction clearer; thanks for pointing it out :)
Hey Jordan! Great to see another USC person here. The best writing advice I've gotten (that I have yet to implement) is to identify a theory of change for each potential piece--something to keep in mind!
6 sounds interesting, if you can make a strong case for it. Aligning humans isn't an easy task (as most parents, employers, governments, and activists know very well), so I'm curious to hear if you have tractable proposals.
7 sounds important given that a decent number of EAs are vegan, and I'm quite surprised I haven't heard of this before. 15 IQ points is a whole standard deviation, so I'd love to see the evidence for that.
8 might be interesting. I suspect most people are already aware of groupthink, but it could be good to be aware of other relevant phenomena that might not be as widely known (if there are any).
From what I can tell, 11 proposes a somewhat major reconsideration of how we should approach improving the long-term future. If you have a good argument, I'm always in favor of more people challenging the EA community's current approach. I'm interested in 21 for the same reason.
(In my experience, the answer to 19 is no, probably because there isn't a clear, easy-to-calculate metric to use for longtermist projects in the way that GiveWell uses cost-effectiveness estimates.)
Out of all of these, I think you could whip up a draft post for 7 pretty quickly, and I'd be interested to read it!
Thanks Linch! This list is really helpful. One clarifying question on this point:
Relatedly, what does the learning/exploration value of this project look like?
- To the researcher/entrepreneur?
- To the institution? (if they're working in an EA-institutional context)
- To the EA or longtermist ecosystem as a whole?
For 1) and 2), I assume you're referring to the skills gained by the person/institution completing the project, which they could then apply to future projects.
For 3), are you referring to the possibility of "ruling out intervention X as a feasible way to tackle x-risks"? That's what I'm assuming, but I'm just asking to make sure I understand properly.
Thanks again!
This thinking has come up in a few separate intro fellowship cohorts I’ve facilitated. Usually, somebody tries to flesh it out by asking whether it’s “more effective” to save one doctor (who could then be expected to save five more lives) or two mechanics (who wouldn’t save any other lives) in trolley-problem scenarios. This discussion often gets muddled, and many people come away with the impression that “EAs” would think it’s better to save the doctor, even though I doubt that’s a consensus opinion among EAs. I’ve found this to be a surprisingly common sticking point that isn’t discussed much in community-building circles.
I think it would be worth clarifying the difference between intrinsic and instrumental value in career advice/intro fellowships/other first interactions with the EA community, because some people might otherwise agree with EA ideas but feel that this argument undermines our basic principles (as well as the claim that you don’t need to be a utilitarian to be an EA). Maybe we could extend current messaging about ideological diversity within EA.
That said, I read Objection 4 differently. Many people (especially in cultures that glorify work) tie their sense of self-worth to their jobs. I don’t know how universal this is, but at least in my middle-class American upbringing, there was a strong sense that your career choice and achievement are a large part of your value as a person.
As a result, some people feel personally judged when their intended careers aren’t branded as “effective”. If you equate your career value with your personal value, you won’t feel very good if someone tells you that your career isn’t very valuable, and so you’ll resist that judgment.
I don’t think this feeling precludes people from being EAs. It takes time to separate your sense of self-worth from your current or intended career, and Objection 4 strikes me as a knee-jerk defensive reaction. Students planning to work in shipping logistics won’t immediately like the idea that the job they’ve been working hard to prepare for is “ineffective,” but they might come around to it after some deeper reflection.
I could be misreading Objection 4, though. It could also mean something like “shipping logistics is valuable because the world would grind to a halt if nobody worked in shipping logistics,” but then that’s just a variant of Objection 5.
I’m very curious to know more about the sense in which these students raised Objection 4.
I'm curious about the original source of the funding you're giving out here. According to this, Nonlinear received $250k from the Future Fund and $600k from the Survival and Flourishing Fund. Is the funding being distributed here coming solely from the SFF grant? Does Nonlinear have other funding sources besides the Future Fund and SFF?
(I didn't do any deeper dive than looking at Nonlinear's website, where I couldn't find anything about funding sources.)