Kinda pro-pluralist, kinda anti-Bay EA
(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)
Hi Jeff,
Yes, this roughly comes under the idea of the 4th bullet point at the end on 'further data cleaning'. I think different people could obviously slice the data differently in terms of what belongs in each topic (e.g. is Longtermism its own category, or part of philosophy?), but it is a chart I'll probably get around to creating.
As for sharing the data, absolutely! I'm happy to share it with anyone who asks, really, as long as they understand the caveats I mention, for interpretation's sake. If you'd like to have a look, DM me and we can set something up :)
As for your earlier comment, it's an interesting idea that the EA 'core' represents all parts of the philosophy, and that newer entrants have been drawn in more by the longtermist side. I think a lot of people have the anecdotal experience that people often make their way to EA through the 'neartermist'/global-health stuff, and then experience a 'rug pull' when introduced to xRisk/AI stuff. I'm not sure there's good data on that - maybe we'll learn more when Rethink can share results from the latest EA survey?
Hey Sol, some thoughts on this comment:
I don't think we really disagree that much, and I definitely agree that the HLI discussion should proceed transparently and that EA has a lot to learn from the last year, including FTX. If you re-read my Quick Take, I think you'll see I'm not taking the position you think I am.
That's my interpretation of course, please correct me if I've misunderstood
The HLI discussion on the Forum recently felt off to me: bad vibes all around. It seemed very heated, with not a lot of scout mindset, and reading the various back-and-forth chains I felt like I was 'getting Eulered', as Scott once described.
I'm not an expert on evaluating charities, but I followed a lot of links to previous discussions and found this discussion involving one of the people running an RCT on StrongMinds (whose final results a lot of people are waiting for), who was highly sceptical of SM's efficacy. But the person offering counterarguments in that thread seemed just as credible to me? My current position, for what it's worth,[1] is:
I'm also quite unsettled by a lot of what I call 'drive-by downvoting'. While writing a comment is a lot more effort than clicking to vote on a comment/post, I think the signal is a lot higher, and it would help those involved in debates reach consensus better. Some people with high-karma accounts seem to be casting some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who are, in either direction).
So I'm very unsure how to feel. It's an important issue, but I'm not sure the Forum has shown itself in a good light in this instance.
And I stress this isn't worth much in this area; I generally defer to evaluators
On the table at the top of the link, go to the column 'GiveWell best guess' and the row 'Cost-effectiveness, relative to cash'
Again, I don't think I have the ability to adjudicate here, which is part of why I'm so confused.
To explain my disagree-vote: this kind of explanation isn't a good one in isolation
I could also say it benefits AI developers to downplay[1] risk, as that means their profits and status will be high, and society will have a more positive view of them as people who are developing fantastic technologies rather than raising existential risks
And what makes this a bad explanation is that it is so easy to vary. Like above, you can flip the sign. I can also easily swap out the area for any other existential risk (e.g. Nuclear War or Climate Change), and the argument could run exactly the same.
Of course, I think motivated reasoning is something that exists and may play a role in explaining the gap between superforecasters and experts in this survey. But on the whole I don't find it convincing without further evidence.
consciously or not
I just wanted to say that this was a fantastic post, and one of the best reads (imo) on the Forum this year.
I've never been to the Bay or interacted in person with this culture, so I'd be very interested to hear to what extent other EAs on the Forum think that these perceptions are accurate.[1]
In general it makes me appreciate coming to EA (at least the community side, as opposed to awareness of the philosophy) later in life - it means my professional and personal life isn't entangled with EA to the extent that seems to be causing a lot of the dissonance and distress in these anecdotes.
I do think that some of these 'corrupting influences' are things that happen naturally in any human society and hierarchy (e.g. The Seeker's Game vignette itself - the phrase "It's not what you know, it's who you know" is a common idiom for a reason!), but there do seem to be reasons why these concerns are worse in the Bay than in other EA areas atm.
Only if you're comfortable sharing ofc
Adding a +1 to Nathan's reaction here, this seems to have been some of the harshest discussion on the EA Forum I've seen for a while (especially on an object-level case).
Of course, making sure charitable funds are doing the good that they claim is something that deserves attention, research, and sometimes a critical eye. From my perspective of wanting more pluralism in EA, it seems[1] to me that HLI is a worthwhile endeavour to follow (even if its programme ends up being ~the same as or worse than cash transfers). Of all the charitable spending in the world, is HLI's really worth this much anger?
It just feels like there's inside baseball that I'm missing here.
weakly of course, I claim no expertise or special ability in charity evaluation
I'd be very interested to read a post about your thoughts on this (though I'm not sure what 'ITT' means in this context?), and I'm curious which SSC post you're referring to.
I also want to say I'm not sure how universal the 'EAs have been caught so off guard' claim is. Some have been, sure, but plenty were hoping that the AI risk discussion would stay out of the public sphere for exactly this kind of reason.
Just want to register some disagreement here about the name change, to others in this thread and Will (not just you Gemma!). In rough order of decreasing importance:
I do accept it was just a small draft suggestion though.
While I agree 'no first strikes' is good, my prior is that EA communications currently has a 'no retaliation at all' policy, which I think is a very bad one (even if unofficial - I buy Shakeel's point that there may have been a diffusion of responsibility around this)
So for clarification, do you think that CEA ought to adopt this policy just because it is a good thing to do, or because they/other EAs have broken this rule and it needs to be a clearer norm? If the latter, I'd love to see some examples, because I can't really think of any (at least from 'official' EA orgs, and especially the CEA comms team)
On the other hand, I can think of many examples, some from quite senior figures/academics, of EA being attacked in an incredibly hostile way and basically being met with no pushback from official EA organisations or 'EA leadership', however defined.
There's not going to be a one-size-fits-all answer to this. EA (implicitly and explicitly) criticises the way many other worldviews see the world, and as such we get a lot of criticism back. However, it is a topic I've thought a bit about, so here are some best guesses at the 'visions' of some of our critics, put into four groups. [Note: I wrote this up fairly quickly, so please point out any disagreements or mistakes, or suggest additional groups that I've missed]
1: Right-of-centre Libertarians: Critics from this school may think reasonably well of EA's intentions, but think we are naïve and/or hubristic, and place us in a tradition of thought that relies on central planning rather than market solutions. They'd argue along the lines of the most efficient interventions being the spread of markets and the rule of law rather than charities. They may also, if on the more socially conservative end, believe that social traditions capture cultural knowledge that can't be captured by quantification or first-principles reasoning. Example critic: Tyler Cowen
2: Super Techno-Optimistic Libertarians: This set thinks that EA has been captured by 'wokeness'/'AI doomers'/whatever Libertarian boogeyman you can think of here. They're generally dismissive of EAs and EA institutions, and not really willing to engage in object-level discussions in my experience. Their favoured interventions are probably cutting corporate taxes, removing regulations, and increasing funding for AI capabilities so we can go as fast as possible to reap the huge benefits they expect.
In a way, this group acts as a counterpoint to some other EA critics, who don't see a true distinction between us and this group, perhaps because many of them live in the Bay and are socially similar to/entangled with EAs there. Example critic: Perry Metzger/Marc Andreessen
3: Decentralised Democrats: There are some similarities to group 1 here, in the sense that critics in this group think that EAs are too technocratic. Sources of disagreement here include pragmatic ones (they are likely to believe that social institutions are so poorly adapted to the modern world that fixing them is a higher priority than 'core EA' thinks), normative ones (they likely believe that decisions that will have a large impact over the future deserve the consent of as much of the world as possible, not just the acceptance of whatever EA thinks), and sociological ones (if I had to guess, I'd say they're more centre-left/liberaltarian than other EA critics). Very likely to think that distinguishing between EA-as-belief and EA-as-institutions is a false distinction, and very supportive of reforms to EA, including community democratisation. Example critic: E. Glen Weyl/Zoe Cremer
4: Radical Progressives/Anti-capitalists: This group is probably the one that you're thinking of in terms of 'our biggest critics', and they've been highly critical of EA since the beginning. They generally believe EA to be actively harmful, and usually ascribe this to either deliberate design or EA being blind to its support of oppressive ideologies/social structures. There's probably a lot of variation in what kind of world they do want, but it's likely to be a very radical departure, probably involving mass cultural and social change (perhaps revolutionary change), ending capitalism as it is currently constituted, and more money, power, and support being given to the State to bring about positive changes.
There is a lot of variation in this group, though you can pick up on some common themes (e.g. a more Hickel-esque view of human progress, compared to the more 'Pinkerite' view that EA might have) and common calls-to-action (climate change is probably the largest/most important cause area here). I suggest you don't take my word for it and read them yourself,[1] but I think you won't find much in terms of practical policy suggestions - perhaps because that's seen as "working within a fatally flawed system" - although some in this group are more moderate. Example critic: Alice Crary/Emile Torres/Jason Hickel
Though I must admit, I find reading criticism from this group very demotivating - lots of it seems to me to be in bad faith, shallowly researched, assuming bad intentions from EAs, or deliberately avoiding object-level debates. YMMV though.