(Cross-posted from my website.)
I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don’t claim to have a complete picture of the university group ecosystem.
Disclaimer: I’ve written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.
My EA Experience
During my freshman year, I heard about a club called Columbia Effective Altruism. Word on the street was that it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he had learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship.
After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right—it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I had hoped college would be. I felt like I had found my people. I found people who actually cared about improving the world. I found people who strove to tear down the sellout culture at Columbia.
After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco for a fancy conference and a seemingly exclusive retreat.
EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat.
I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I'd hear these ideas, I didn't really know what to do besides nod my head and occasionally say "that makes sense." After each one-on-one, I knew that I shouldn't update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn't help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety.
It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I got duped. My poor epistemics led me astray, but weirdly enough, my poor epistemics gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don't prioritize AI safety, the burden of proof is on me to explain why I "dissent" from EA. If you're a longtermist AI safety person, there's no need to offer evidence to defend your view.
(I would be really excited if more experienced EAs more often asked EA newbies why they take AI safety seriously. I think what normally happens is that the experienced EA gets super excited and thinks to themselves “how can I accelerate this person on their path to impact?” The naïve answer is to point them only towards upskilling and internship opportunities. Asking the newbie why they prioritize AI safety may not seem immediately useful and may even convince them not to prioritize AI safety, God forbid!)
I became President of Columbia EA shortly after returning home from EAG SF and the retreat, and I'm afraid I did some suboptimal community building. Here are two mistakes I made:
- In the final week of the Arete Fellowship (I was facilitating), I asked the participants what they thought the most pressing problem was. One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong. (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By mainstream, I mean a cause area that someone would have a high prior on.) I think that poor epistemics may often be a central part of the mechanism that leads people to prioritize AIS after completing the Arete Fellowship. Unfortunately, rather than flagging this as epistemically shaky and supporting those members to better develop their epistemics, I instead dedicated my time and resources to pushing them to apply to EAG(x)'s, GCP workshops, and our other advanced fellowships. I did not follow up with the others in the cohort.
- I hosted a retreat with students from Columbia, Cornell, NYU, and UPenn. All participants were new EAs (either still completing Arete or just finished Arete). I think I felt pressure to host a retreat because "that's what all good community builders do." The social dynamics at this retreat were pretty solid (in my opinion), but afterwards I felt discontent. I had not convinced any of the participants to take EA seriously, and I felt like I had failed. Even though I knew that convincing people of EA wasn't necessarily the goal, I still implicitly aimed for that goal.
I served as president for a year and have since stepped down and dissociated myself from EA. I don't know if/when I will rejoin the community, but I was asked to share my concerns about EA, particularly university groups, so here they are!
Epistemic Problems in Undergraduate EA Communities
Every highly engaged EA I know has converged on AI safety as the most pressing problem. Whether or not they have a background in AI, they have converged on AI safety. The notable exceptions are those who were already deeply committed to animal welfare or those who have a strong background in biology. The pre-EA animal welfare folks pursue careers in animal welfare, and the pre-EA biology folks pursue careers in biosecurity. To me, some of these notable exceptions may not have performed rigorous cause prioritization. For students who converge on AI safety, I also think it's unlikely that they have performed rigorous cause prioritization. I don't think this is that bad, because cause prioritization is super hard, especially if your cause prioritization leads you to work on a cause you have no prior experience in. But I am scared of a community that emphasizes the importance of cause prioritization yet in which few people actually do it.
Perhaps people are okay with deferring their cause prioritization to EA organizations like 80,000 Hours, but I don't think many people would have the guts to openly admit that their cause prioritization is a result of deferral. We often think of cause prioritization as key to the EA project, and to admit to deferring on one's cause prioritization is to reject a part of the Effective Altruism project. I understand that everyone has to defer on significant parts of their cause prioritization, but I am very concerned with just how little cause prioritization seems to be happening at my university group. I think it would be great if more university group organizers encouraged their members to focus on cause prioritization. If groups started organizing writing fellowships where people work through their cause prioritization, we could make significant improvements.
My Best Guess on Why AI Safety Grips Undergraduate Students
The college groups that I know best, including Columbia EA, seem to function as factories for churning out people who care about existential risk reduction. Here's how I see each week of the Arete (Intro) Fellowship play out.
- Woah! There's an immense opportunity to do good! You can use your money and your time to change the world!
- Wow! Some charities are way better than others!
- Empathy! That's nice. Let's empathize with animals!
- Doom! The world might end?! You should take this more seriously than everything we've talked about before in this fellowship.
- Longtermism! You should care about future beings. Oh, you think that's a weird thing to say? Well, you should take ideas more seriously!
- AI is going to kill us all! You should be working on this. 80k told me to tell you that you should work on this.
- This week we'll be discussing WHAT ~YOU~ THINK! But if you say anything against EA, I (your facilitator) will lecture for a few minutes defending EA (sometimes rightfully so, other times not so much).
- Time to actually do stuff! Go to EAG! Go to a retreat! Go to the Bay!
I'm obviously exaggerating what the EA fellowship experience is like, but I think this is pretty close to describing the dynamics of EA fellowships, especially when the fellowship is run by an inexperienced, excited, new organizer. Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI) either because their facilitators were passionate about those topics, they were tech bros, or they were inclined to those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it your EA group is now a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers. I am especially fearful that almost every person who becomes highly engaged due to their college group is going to have worldviews and cause prioritizations that are strikingly similar to those of the people who compiled the EA handbook (the intro fellowship syllabus) and AGISF.
It may be that AI safety is in fact the most important problem of our time, but there is an epistemic problem in EA groups that cannot be ignored. I’m not willing to trade off epistemic health for churning out more excellent AI safety researchers (This is an oversimplification. I understand that some of the best AI researchers have excellent epistemics as well). Some acclaimed EA groups might be excellent at churning out competent AI safety prioritizers, but I would rather have a smaller, epistemically healthy group that embarks on the project of effective altruism.
Caveats
I suspect that I overestimate how much facilitators influence fellows' thinking. I think that the people who become highly engaged don't do so because their facilitator was very persuasive (persuasiveness plays a smaller part); rather, people become highly engaged because they already had worldviews that mapped closely to EA.
How Retreats Can Foster an Epistemically Unhealthy Culture
In this section, I will argue that retreats cause people to take ideas seriously when they perhaps shouldn't. Retreats make people more susceptible to buying into weird ideas. Those weird ideas may in fact be true, but the process of buying into them rests on shaky epistemic grounds.
Against Taking Ideas Seriously
According to LessWrong, "Taking Ideas Seriously is the skill/habit of noticing when a new idea should have major ramifications." I think taking ideas seriously can be a useful skill, but I'm hesitant when people encourage new EAs to take ideas seriously.
Scott Alexander warns against taking ideas seriously:
for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy. Or, at the very least, it should be the last skill you learn, after you’ve learned every other skill that allows you to know which ideas are or are not correct. The people I know who are best at taking ideas seriously are those who are smartest and most rational. I think people are working off a model where these co-occur because you need to be very clever to resist your natural and detrimental tendency not to take ideas seriously. But I think they might instead co-occur because you have to be really smart in order for taking ideas seriously not to be immediately disastrous. You have to be really smart not to have been talked into enough terrible arguments.
Why Do People Take Ideas Seriously in Retreats?
Retreats are sometimes believed to be one of the most effective university community building strategies. Retreats heavily increase people's engagement with EA. People cite retreats as being key to their onramp to EA and taking ideas like AI safety, x-risks, and longtermism more seriously. I think retreats make people take ideas more seriously because retreats disable people's epistemic immune system.
- Retreats are a foreign place. You might feel uncomfortable and less likely to "put yourself out there." Disagreeing with the organizers, for example, "puts you out there." Thus, you are unlikely to dissent from the views of the organizers and speakers. You may also paper over your discontents/disagreements so you can be part of the in-group.
- When people make claims confidently about topics you know little about, there's not much to do. For five days, you are bombarded with arguments for AI safety, and what can you do in response? Sit in your room and try to read arguments and counterarguments so you can be better prepared to talk about these issues the next day? Absolutely not. The point of this retreat is to talk to people about big ideas that will change the world. There’s not enough time to do the due diligence of thinking through all the new, foreign ideas you’re hearing. At this retreat, you are encouraged to take advantage of all the networking opportunities. With no opportunity to do your due diligence to read into what people are confidently talking about, you are forced to implicitly trust your fellow retreat participants. Suddenly, you will have unusually high credence in everything that people have been talking about. Even if you decide to do your due diligence after the retreat, you will be fighting an uphill battle against your unusually high prior on those "out there" takes from those really smart people at the retreat.
Other Retreat Issues
- Social dynamics are super weird. It can feel very alienating if you don't know anyone at the retreat while everyone else seems to know each other. More speed friending with people you’ve never met before would be great.
- Lack of psychological safety
- I think it's fine for conversations at retreats to be focused on sharing ideas and generating impact, but it shouldn't feel like the only point of the conversation is impact. Friendships shouldn't feel centered around impact. It’s a bad sign if people feel that they will jeopardize a relationship if they stop appearing to be impactful.
- The pressure to appear to be “in the know” and send the right virtue signals can be overwhelming, especially in group settings.
- Not related to retreats but similar: sending people to the Bay Area is weird. Why do people suddenly start to take longtermist, x-risk, AI safety ideas more seriously when they move to the Bay? I suspect moving to the Bay Area has similar effects as going to retreats.
University Group Organizer Funding
University group organizers should not be paid so much. I was paid an outrageous amount of money to lead my university's EA group. I will not apply for university organizer funding again even if I do community build in the future.
Why I Think Paying Organizers May Be Bad
- Being paid to run a college club is weird. All other college students volunteer to run their clubs. If my campus newspaper found out I was being paid this much, I am sure an EA take-down article would be published shortly after.
- I doubt paying university group organizers this much increases their counterfactual impact much. I don't think organizers are spending much more time because of this payment. Most EA organizers are from wealthy backgrounds, so the money is not clearing many bottlenecks (need-based funding would be great—see the Potential Solutions section).
- Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid. I'd be curious to read the results of the university group organizer funding exit survey to learn more about how impactful the funding was.
Potential Solutions
- Turn the University Group Organizer Fellowship into a need-based fellowship. This is likely to eliminate financial bottlenecks in people's lives and accelerate their path to impact, while not wasting money on those who do not face financial bottlenecks.
- If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people's commitment to quality community building, then reduce funding to $15/hour (I’m just throwing this number out there; bottom line is reduce the hourly rate significantly). If the results indicate that funding had little to no impact, abandon funding (not worth the reputational risks and weirdness). I think it’s unlikely that the results of the survey indicate that the funding was exceptionally impactful.
Final Remarks
I found an awesome community at Columbia EA, and I plan to continue hanging out with the organizers. But I think it’s time I stop organizing, both for my mental health and for the reasons outlined above. I plan to spend the next year focusing on my cause prioritization and building general competencies. If you are a university group organizer and have concerns about your community’s health, please don’t hesitate to reach out.
Hey,
I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries about my own beliefs stem from the concern that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying both to figure out what’s correct and to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”.
One thing I’ll say is that core researchers ...
As someone who is extremely pro investing in big-tent EA, my question is, "what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"
I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their actions and movement building outside the EA umbrella. In addition, EA being ahead of the curve on AIS is, in my opinion, a fact to embrace and treat as evidence of the value of EA principles, individuals, and movement building methodology.
To avoid AIS eating EA, we have to keep reinvesting in EA fundamentals. I am so grateful and impressed that Dave published this post, because it's exactly the kind of effort that I think is necessary to keep EA EA. I think he highlights specific failures in exploiting known methods of inducing epistemic ... untetheredness?
For example, I worked with CFAR where the workshops deliberately em...
"what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"
Creating AI Safety-focused conferences, AI Safety university groups, and AI Safety local meet-up groups? Obviously attendees will initially overlap very heavily with EA conferences and groups, but having them separated out will lead to a bit of divergence over time.
Wouldn't this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between long/neartermism might have led to tensions, but I'm happy that at least there's still conferences, groups and meet-ups where these different people are still talking to each other!
There might be an important trade-off here, and it's not clear to me what direction makes more sense.
Or, the ideal form for the AI safety community might not be a "movement" at all! This would be one of the most straightforward ways to ward off groupthink and related harms, and it has been possible for other cause areas: for instance, global health work mostly doesn't operate as a social movement.
Global health outside of EA may not have the issues associated with being a movement, but it has even bigger issues.
I wonder how this would look different from the current status quo:
- Wytham Abbey cost £15m, and its site advertises it as basically being primarily for AI/x-risk use (as far as I can see it doesn't advertise what it's been used for to date)
- Projects already seem to be highly preferentially supported based on how longtermist/AI-themed they are. I recently had a conversation with someone at OpenPhil in which, if I understood/remembered correctly, they said the proportion of OP funding going to nonlongtermist stuff was about 10%. [ETA sounds like this is wrong]
- The global health and development fund seems to have been discontinued. The infrastructure fund, I've heard on the grapevine, strongly prioritises projects with a longtermist/AI focus. The other major source of money in the EA space is the Survival and Flourishing Fund, which lists its goal as 'to ...
Regarding the funding aspect:
Holden also stated in his recent 80k podcast episode that <50% of OP's grantmaking goes to longtermist areas.
The theory of change for community building is much stronger for long-termist cause areas than for global poverty.
For global poverty, it's much easier to take a bunch of money and just pay people outside of the community to do things like hand out bed nets.
For x-risk, it seems much more valuable to develop a community of people who deeply care about the problem so that you can hire people who will autonomously figure out what needs to be done. This compares favourably to just throwing money at the problem, in which case you’re just likely to get work that sounds good, rather than work advancing your objective.
The flipside argument would be that funding is a greater bottleneck for global poverty than longtermism, and one might convince university students focused on global poverty to go into earning-to-give (including entrepreneurship-to-give). So the goals of community building may well be different between fields, and community building in each cause area should be primarily judged on its contribution to that cause area's bottleneck.
Not really responding to the comment (sorry), just noting that I'd really like to understand why these researchers at GPI and careful-thinking AI alignment people - like Paul Christiano - have such different risk estimates! Can someone facilitate and record a conversation?
David Thorstad, who worked at GPI, blogs about reasons for his AI skepticism (and other EA critiques) here: https://ineffectivealtruismblog.com/
The object-level reasons are probably the most interesting and fruitful, but for a complete understanding of how the differences might arise, it's probably also valuable to consider:
An interesting exercise could be to go through the categories and elucidate 1-3 reasons in each category for why AI alignment people might believe X and cause prio people might believe not X.
This seems like a strange position to me. Do you think people have to have a background in climate science to decide that climate change is the most important problem, or development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority?
I think (apologies if I am mis-understanding you) you try to get around this by suggesting that 'mainstream' causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.
I would like to second the objection to this. I feel as though most intros to AI Safety, such as AGISF, are detached enough from technical AI details that one could do the course without needing a past AI background.
(This isn't an objection to the epistemics of picking up a non-mainstream cause area quickly, but rather about the need to have an AI background to do so.)
I guess I'm unclear about what sort of background is important. ML isn't actually that sophisticated, as it turns out (it could have been), but "climb a hill" or "think about an automaton but with probability distributions and annotated with rewards" just don't rely on more than a few semesters of math.
2/5 doesn’t seem like very strong evidence of groupthink to me.
I also wouldn’t focus on their background, but on things like whether they were able to explain the reasons for their beliefs in their own words or tended to simply fall back on particular phrases they’d heard.
(I lead the CEA uni groups team but don’t intend to respond on behalf of CEA as a whole and others may disagree with some of my points)
Hi Dave,
I just want to say that I appreciate you writing this. The ideas in this post are ones we have been tracking for a while and you are certainly not alone in feeling them.
I think there is a lot of fruitful discussion in the comments here about strategy-level considerations within the entire EA ecosystem and I am personally quite compelled by many of the points in Will’s comment. So, I will focus specifically on some of the considerations we have on the uni group level and what we are trying to do about this. (I will also flag that I could say a lot more on each of these but my response was already getting quite long and we wanted to keep it somewhat concise)
Epistemics
- We are also quite worried about epistemic norms in university groups. We have published some of our advice around this on the forum here (though maybe we should have led with more concrete examples) and I gave a talk at EAG Bay Area on it.
- We also try to screen for whether people actually understand the arguments behind the claims they are making & common arguments ...
Hi Dave,
Thanks for taking the time to write this. I had an almost identical experience at my university. I helped re-start the club with every intention of leading it, but I am no longer associated with it because of the lack of willingness from others to engage with AI safety criticisms or to challenge their own beliefs regarding AI safety/existential risk.
I also felt that those in our group who prioritized AI safety had an advantage in getting recognition from more senior members of the city group, forming connections with other EAs in the club, and getting funding from EA orgs. I was quite certain I could get funding from CEA too, as long as I lied and said I prioritized AI safety/existential risk, but I wasn’t willing to do that. I also felt the money given to other organizers in the club was not necessary and did not have any positive outcomes other than for the individuals themselves.
I am now basically fully estranged from the club (which sucks, because I actually enjoyed the company of everyone) because I do not feel like my values, and the values I originally became interested in EA for (such as epistemic humility), exist in the space I was in.
I did manage to have...
Thanks for writing this. This comment, in connection with Dave's, reminds me that paying people -- especially paying them too much -- can compromise their epistemics. Of course, paying people is often a practical necessity for any number of reasons, so I'm not suggesting that EA transforms into a volunteer-only movement.
I'm not talking about grift but something that has insidious onset in the medical sense: slow, subtle, and without the person's awareness. If one believes that financial incentives matter (and they seemingly must for the theory of change behind paying university organizers to make much sense), it's important to consider the various ways in which those incentives could lead to bad epistemics for the paid organizer.
If student organizers believe they will be well-funded for promoting AI safety/x-risk much more so than broad-tent EA, we would expect that to influence how they approach their organizing work. Moreover, reduction of cognitive dissonance can be a powerful drive -- so the organizer may actually (but subconsciously) start favoring the viewpoint they are emphasizing in order to reduce that dissonance rather than for sound reasons. If a significant number...
It seems like a lot of criticism of EA stems from concern about "groupthink" dynamics. At least, that is my read on the main reason Dave dislikes retreats. This is a major concern of mine as well.
I know groups like CEA and Open Phil have encouraged and funded EA criticism. My difficulty is I don't know where to find that criticism. I suppose the EA Forum frequently posts criticisms, but fighting groupthink by reading the forum seems counterproductive.
I've personally found a lot of benefit in reading Reflective Altruism's blog.
What I'm saying is, I know EA orgs want to encourage criticism, and good criticisms do exist, but I don't think orgs have found a great way to disseminate those criticisms yet. I would want criticism dissemination to be more of a focus.
For example, there is an AI Safety reading list an EA group put out. It's very helpful, but I haven't seen any substantive criticism linked in that list, while arguments in favor of longtermism comprise most of the list.
I've only been to a handful of the conferences, but I've not seen a "Why to be skeptical of longtermism" talk posted.
Has there been an 80k podcast episode that centers longtermism skepticism before ...
If you're an animal welfare EA, I'd highly recommend joining the wholesome refuge that is the newly minted Impactful Animal Advocacy (IAA).
Website and details here. I volunteered for them at the AVA Summit, which I strongly recommend as the premier conference and community-builder for animal welfare-focused EAs. The AVA Summit has some features I have long thought missing from EAGs - namely, people arguing in good faith about deep, deep disagreements (e.g. why don't we ever see a panel with prominent longtermist and shorttermist EAs arguing for over an hour straight at EAGs?). There was an entire panel addressing quantification bias, which turned into talking about some believing that EA has done more harm than good for the animal advocacy movement... but that people are afraid to speak out against EA given it is a movement that has brought in over 100 million dollars to animal advocacy. Personally, I loved there being a space for these kinds of discussions.
Also, one of my favourite things about the IAA community is they don't ignore AI, they take it seriously and try to think about how to get ahead of AI developments to help animals. It is a community where you'll bump into people who can talk about x-risk and take it seriously, but for whatever reason are prioritizing animals.
People have been having similar thoughts to yours for many years, including myself. Navigating through EA epistemic currents is treacherous. To be sure, so is navigating epistemic currents in lots of other environments, including the "default" environment for most people. But EA is sometimes presented as being "neutral" in certain ways, so it feels jarring to see that it is clearly not.
Nearly everyone I know who has been around EA long enough to do things like run a university group eventually confronts the fact that their beliefs have been shaped socially by the community in ways that are hard to understand, including by people paid to shape your beliefs. It's challenging to know what to do in light of that. Some people reject EA. Others, like you, take breaks to figure things out more for themselves. And others press on, while trying to course correct some. Many try to create more emotional distance, regardless of what they do. There's not really an obvious answer, and I don't feel I've figured it fully out myself. All this is to just say: you're not alone. If you or anyone else reading this wants to talk, I'm here.
Finally, I really like this related post, as well as this comment...
I'm really glad you chose to make this post and I'm grateful for your presence and insights during our NYC Community Builders gatherings over the past ~half year. I worry about organizers with criticisms leaving the community and the perpetuation of an echo chamber, so I'm happy you not only shared your takes but also are open to resuming involvement after taking the time to learn, reflect, and reprioritize.
Adding to the solutions outlined above, some ideas I have:
• Normalize asking people, "What is the strongest counterargument to the claim you just made?" I think this is particularly important in a university setting, but also helpful in EA and the world at large. A uni professor recently told me one of the biggest recent shifts in their undergrad students has been a fear of steelmanning, lest people incorrectly believe it's the position they hold. That seems really bad. And it seems like establishing this as a new norm could have helped in many of the situations described in the post, e.g. "What are some reasons someone who knows everything you do might not choose to prioritize AI?"
• Greater support for uni students trialing projects through their club, including projects spanning ...
I remember speaking with a few people who were employed doing AI-type EA work (people who appear to have fully devoted their careers to the mainstream narrative of EA-style longtermism). I was a bit surprised that when I asked them "What are the strongest arguments against longtermism?", none were able to provide much of an answer. I was perplexed that people who had decided to devote their careers (and lives?) to this particular cause area weren't able to clearly articulate the main weaknesses/problems.
Part of me interpreted this as "Yeah, that makes sense. I wouldn't be able to speak about strong arguments against gravity or evolution either, because it seems so clear that this particular framework is correct." But I also feel some concern if the strongest counterargument is something fairly weak, such as "too many white men" or "what if we should discount future people."
Mad props for going off anon. Connecting it to your resignation from Columbia makes me take you way more seriously and is a cheap way to make the post 1000x more valuable than an anon version.
This is odd to me because I have a couple of memories of feeling like sr EAs were not taking me seriously because I was being sloppy in my justification for agreeing with them. Though admittedly, one such anecdote was pre-pandemic, and I have a few longstanding reasons to expect the post-pandemic community builder industrial complex would not have performed as well as the individuals I'm thinking about.
Can confirm that:
"sr EAs [not taking someone seriously if they were] sloppy in their justification for agreeing with them"
sounds right based on my experience being on both sides of the "meeting senior EAs" equation at various times.
(I don't think I've met Quinn, so this isn't a comment on anyone's impression of them or their reasoning)
I think that a very simplified ordering for how to impress/gain status within EA is:
Looking back on my early days interacting with EAs, I generally couldn't present well-justified arguments. I then did feel pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.
I'm not sure about what hurdles to overcome if you want EA communities to push towards 'Agreement sloppily justified' and 'Disagreement sloppily justified' being treated similarly.
Thanks for making this post. Many commenters are disputing your claim that "Being paid to run a college club is weird", and I want to describe why I think it is in fact distorting.
One real reason you don't want to pay the leadership of a college club a notably large amount of money is that you expose yourself to brutal adverse selection: the more you pay above the market rate for a campus job, the more attractive the executive positions are to people who are purely financially motivated rather than motivated by the mission of the club. This is, loosely speaking, a problem faced by all efforts to hire everywhere, but it is usually resolved in a corporate environment through precise and dispassionate performance evaluation and the ability to remove people who aren't acting "aligned", if you will. I think the lack of mechanisms like this at the college-org level basically means this adverse selection problem blows up, and you simply can't bestow excess money or status on executives without corrupting the org. I saw how miserable college-org politics were in other settings, with a lot less money to go around than in EA.
At the core of a philanthropic mission is a principal-agent problem ...
To the best of my knowledge, I don't think Columbia EA gives out salaries to their "executives." University group organizers who meet specific requirements (for instance, time invested per week) can independently apply for funding and have to undergo an application and interview process. So, the dynamics you describe in the beginning would be somewhat different because of self-selection effects; there isn't a bulletin board or a LinkedIn post where these positions are advertised. I say somewhat because I can imagine a situation where a solely money-driven individual gets highly engaged in the club, learns about the Group Organizer Fellowship, applies, and manages to secure funding. However, I don't expect this to be that likely.
For group funding, at least, there are strict requirements for what money can and cannot be spent on. This is true for most university EA clubs unless they have an independent funding source.
All that said, I agree that "notably large amount[s] of money" for university organizers is not ideal.
For what it's worth, I run an EA university group outside of the U.S. (at the University of Waterloo in Canada). I haven't observed any of the points you mentioned in my experience with the EA group:
Which university EA groups specifically did you talk to before proclaiming "University EA Groups Need Fixing"? Based only on what I read in your article, a more accurate title seems to be "Columbia EA Needs Fixing"
I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.
Thanks for writing this, these are important critiques. I think it can be healthy to disengage from EA in order to sort through some of the weird ideas for yourself, without all the social pressures.
A few comments:
I actually don't think it's that weird to pay organizers. I know PETA has a student program that pays organizers, and The Humane League once did this too. I'd imagine you can find similar programs in other movements, though I don't know for sure.
I suspect the amount that EA pays organizers is unusual though, and I strongly agree with you that paying a lot for university organizing introduces weird and epistemically corrosive incentives. The PETA program pays students $60 per event they run, so at most ~$600 per semester. Idk exactly how much EA group leaders are paid, but I think it's a lot more than that.
I definitely share your sense that EA's message of "think critically about how to do the most good" can sometimes feel like code for "figure out that we're right about longtermism so you can work on AI risk." The free money, retreats, etc. can wind up feeling more like bribe...
For UK universities (I see a few have EA clubs), it is really weird that student volunteers receive individual funding. I think this applies to the US as well but can't be 100% sure:
UK student clubs fall under the banner of their respective student union, which is a charitable organisation to support the needs, interests and development of the students at the university. They have oversight of clubs, and a pot of money that clubs can access (i.e. they submit a budget for their running costs/events for the year and the union decides what is/isn't reasonable and what it can/can't fund). They also have a platform to promote all clubs through the union website, Freshers' week, university brochures, etc.
Some external organisations sponsor clubs. This is usually to make up 'gaps' in funding from the union, e.g. if a bank wanted to fund a finance club so it could provide free caviar and wine at all events to encourage students to attend, in return for its logo appearing in club newsletters, this makes sense; the union would not be funding the 'caviar and wine' line item in the budget as this is not considered essential to supporting the running of the finance club as per the union's charitable ...
Is it actually bad if AI, longtermism, or x-risk are dominant in EA? That seems to crucially depend on whether these cause areas are actually the ones in which the most good can be done - and whether we should believe that depends on how strong arguments back up these cause areas. Assume, for example, that we can do by far the most good by focusing on AI x-risks and that there is an excellent case / compelling arguments for this. Then, this cause area should receive significantly more resources and should be much more talked about, and promoted, than other cause areas. Treating it just like other cause areas would be a big mistake: the (assumed) fact that we can do much more good in this cause area is a great reason to treat it differently!
To be clear: my point is not that AI, longtermism, or anything else should be dominant in EA, but that how these cause areas should be represented in EA (including whether they should be dominant) depends on the object-level discourse about their cost-effectiveness. It is therefore unobvious, and depends on difficult object-level questions, whether a given degree of dominance of AI, longtermism, or any other cause area, is justified or not. (I take this to be in tension with some points of the post, and some of the comments, but not as incompatible with most of its points.)
Sorry to hear that you've had this experience.
I think you've raised a really important point - in practice, cause prioritisation by individual EAs is heavily irrational, and is shaped by social dynamics, groupthink and deference to people who don't want people to be deferring to them. Eliminating this irrationality entirely is impossible, but we can still try to minimise it.
I think one problem we have is that it's true that cause prioritisation by orgs like 80000 Hours is more rational than many other communities aiming to make the world a better place. However, the bar here is extremely low, and I think some EAs (especially new EAs) see cause prioritisation by 80000 Hours as 100% rational. I think a better framing is to see their cause prioritisation as less irrational.
As someone who is not very involved with EA socially because of where I live, I'd also like to add that from the outside, there seems to be a fairly strong, widespread consensus that EAs think AI Safety is the most important cause area. But then I've found that when I meet "core EAs", e.g. people working at CEA, 80k, FHI, etc., there is far more divergence in views around AI x-risk than I'd expect, and this consensus ...
This post is now three years old but is roughly what you suggest. For convenience I will copy one of the more relevant graphs into this comment:
What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil's funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.
First, I am sorry to hear about your experience. I am sympathetic to the idea that a high level of deference and lack of rigorous thinking is likely rampant amongst the university EA crowd, and I hope this is remedied. That said, I strongly disagree with your takeaways about funding and have some other reflections as well:
- "Being paid to run a college club is weird. All other college students volunteer to run their clubs."
... (read more)This seems incorrect. I used to feel this way, but I changed my mind because I noticed that every "serious" club (i.e., any club wanting to achieve its goals reliably) on my campus pays students or hires paid interns. For instance, my university has a well-established environmental science ecosystem, and at least two of the associated clubs are supported via some university funding mechanism (this is now so advanced that they also do grantmaking for student projects ranging from a couple thousand to a max of $100,000). I can also think of a few larger Christian groups on campus which do the same. Some computer science/data-related clubs also do this, but I might be wrong.
Most college clubs are indeed run on a volunteer basis. But most are run quite casually. T...
I don't think most people should be doing cause prioritisation with 80,000 Hours's level of rigour, but I think everyone is capable of doing some sort of cause prioritisation - at least working out where their values may differ from those of 80,000 Hours, or identifying where they disagree with some of 80K's claims and working out how that would affect how they rank causes.
One data point to add in support: I once spoke to a relatively new EA who was part of a uni group, who said they "should" believe that longtermism/AI safety is the top cause, but when I asked them what their actual prio was, they said it was mental health.
By "their actual prio", which of these do you think they meant (if any)?
I've sometimes had three different areas in mind for these three categories, and have struggled to talk about my own priorities as a result.
A combination of one and three, but it's hard to say exactly where the boundaries are. E.g. I think they thought it was the best cause area for themselves (and maybe for people in their country) but not for everyone globally, or something.
I think they may not have really thought about two in-depth, because of the feeling that they "should" care about one and prioritize it, and appeared somewhat guilty or hesitant to share their actual views because they thought they would be judged. They mentioned having spoken to a bunch of others and feeling like that was what everyone else was saying.
It's possible they did think two, though (it was a few years ago, so I'm not sure).
Your description of retreats matches my experience almost disconcertingly; it even described things I didn't realize I took away from the retreat I went to. I felt like the only one who had those experiences. Thanks for writing this up. I hope things work out for you!
I've heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It's not surprising to me that most people converge on the most popular position within the broader movement.
Of course. I just think evaluating and deferring can look quite similar (and a mix of the two is usually taking place).
OP seems to believe students are deferring because of other frustrations. As many have quoted: "If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong".
I've attended Arete seminars at Ivy League universities and seen what looked like fairly sophisticated evaluation to me.
Thank you for the post; as a new uni group organizer, I'll take this into account.
I think a major problem may lie in the intro-fellowship curriculum offered by CEA. It says it is an "intro" fellowship, but the program discusses the longtermism/x-risk framework disproportionately for 3 weeks. And for a person who is newly meeting EA ideas, this could bring two problems:
First, as Dave mentioned, some people may want to do as much good as possible but don't buy longtermism. We might lose these people, who could do amazing good.
Second, EA is weird and unintuitive. Even without the AI stuff, it is still weird because of things like impartial altruism, prioritization, and earning to give. And if we give this weird content plus the "most important century" narrative to wanna-be EAs, we might lose people who could have become EAs if they had encountered the ideas with time for digestion.
This was definitely the case for me. I had a vegan advocacy background when I enrolled in my first fellowship. It was only 6 weeks, and only one week was given to longtermism. Now I do believe we are in the most important century, after a lot of time thinking and reading, but if I was given this weir...
First, I’m sorry you’ve had this bad experience. I’m wary of creating environments that put a lot of pressure on young people to come to particular conclusions, and I’m bothered when AI Safety recruitment takes place in more isolated environments that minimize inferential distance because it means new people are not figuring it out for themselves.
I relate a lot to the feeling that AI Safety invaded as a cause without having to prove itself in a lot of the ways the other causes had to rigorously prove impact. No doubt it’s the highest prestige cause and attractive to think about (math, computer science, speculating about ginormous longterm impact) in many ways that global health or animal welfare stuff is often not. (You can even basically work on AI capabilities at a big fancy company while getting credit from EAs for doing the most important altruism in the world! There’s nothing like that for the other causes.)
Although I have my own ideas about some bad epistemics going on with prioritizing AI Safety, I want to hear your thoughts about it spelled out more. Is it mainly the deference you’re talking about?
Thanks so much for sharing your thoughts and reasons for disillusionment. I found this section the most concerning. If this has even a moderate amount of truth to it (especially the bit about discouraging new potential neartermist EAs), then these kinds of fellowships might need serious rethinking.
"Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI) either because their facilitators were passionate about those topics, they were tech bros, or they were inclined to those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it your EA group is now a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers."
Thanks so much for writing this post Dave; I find this really helpful for pinning down some of the perceived and real issues with the EA community.
I think some people have two stable equilibria: one being ~“do normal things” and the other being “take ideas seriously” (obviously an oversimplification). I think getting from the former to the latter often requires some pressure, but the latter can be inhabited without sacrificing good epistemics and can be much more impactful. Plus, people who make this transition often end up grateful that they made it, and wish they’d made it earlier. I think other people basically don’t have these two stable equilibria, but some of those have an unstable equilibrium for taking ideas seriously which is epistemically unsound, and it becomes stable through social dynamics rather than by thinking through the ideas carefully, which is bad… but also potentially good for the world if they can do good work despite the unsound epistemic foundation… This is messy and I don’t straightforwardly endorse it, but I also can’t honestly say that it’s obvious to me we should always prioritize pure epistemic health if it trades off against impact here. Reducing “the ...
I'm not really an EA, but EA-adjacent. I am quite concerned about AI safety, and think it's probably the most important problem we're dealing with right now.
It sounds like your post is trying to point out some general issues in EA university groups, and you do point out specific dynamics that one can reasonably be concerned about. It does seem, however, like you do have an issue with the predominance of concerns around AI that is separate from this issue and that strongly shines through in the post. I find this dilutes your message and it might be better separated from the rest of your post.
To counter this, I'm also worried about AI safety despite having mostly withdrawn from EA, but I think the EA focus and discussion on AI safety is weird and bad, and people in EA get sold on specific ideas way too easily. Some examples of ideas that are common but that I believe to be very shoddy: "most important century", "automatic doom from AGI", "AGI is likely to be developed in the next decade", "AGI would create superintelligence".
Hello,
I am sorry that this was your experience in your university group. I would also like to thank you for being bold and sharing your concerns, because it will help bring about necessary changes in various groups that are having the same experience. This kind of effort is important because it will keep the priorities, actions, and overall efforts of EA groups in check.
There are some actions that my university facilitator took to help people "think better" about issues they are particularly interested in and that fall under the EA umbrella (or would make the world a bet...
As someone who organizes and is in touch with various EA/AI safety groups, I can definitely see where you're coming from! I think many of the concerns here boil down to group culture and social dynamics that could arise irrespective of what cause areas people in the group end up focusing on.
You could imagine two communities whose members in practice work on very similar things, but whose culture couldn't be further apart:
- Intellectually isolated community where longtermism/AI safety being of utmost importance is seen as self-evident. There are social dynamics ...
Sorry to hear that you had such a rough experience.
I agree that there can be downsides to being too trapped within an EA bubble, and it seems worthwhile suggesting to people that after spending some extended time in the Bay, they may benefit from getting away from it for a bit.
Regarding retreats, I think it can be beneficial for facilitators to try to act similarly to philosophy lecturers who are there to ensure you understand the arguments for and against more than trying to get you to agree with them.
I also think that it would be possible to create an alt...
Kudos to you for having the courage to write this post. One of the things I like most about it is the uncanny understanding and acknowledgement of how people feel when they are trying to enter a new social group. EAs tend to focus on logic and rationality, but humans are still emotional beings. I think perhaps we may underrate how these feelings drive our behavior. I didn't know that university organizers were paid - that, to me, seems kind of insane and counter to the spirit of altruism. I really like the idea of making it need-based. One other thing your ...
Thank you for taking the time to write this. In 2020, I had the opportunity to start a city group or a university group in Cyprus, given the resources and connections at my disposal. After thinking long and hard about the homogenization of the group towards a certain cause area, I opted not to, but rather focused on being a facilitator for the virtual program, where I believe I will have more impact by introducing EA to newcomers from a more nuanced perspective. Facilitators of the virtual program have the ability to maintain a perfect balance between caus...
@Lizka Apologies if this was raised and answered elsewhere, but I just noticed in relation to this article that your reading estimate says 12 minutes, but when I press "listen to this article" it says 19 minutes at normal speed. Is there a reason for the discrepancy? How is the reading time calculated?
Also, when I tried to look for who else from the Forum team to tag, I didn't find any obvious page/link that lists the current team members. How can I find this in the future?
Most people can read faster than they can talk, right? So 60% longer for the audio version than the predicted reading time seems reasonable to me?
"The moderation team
The current moderators (as of July 2023) are Lorenzo Buonanno, Victoria Brook, Will Aldred, Francis Burke, JP Addison, and Lizka Vaintrob (we will likely grow the team in the near future). Julia Wise, Ollie Base, Edo Arad, Ben West, and Aaron Gertler are on the moderation team as active advisors. The moderation team uses the email address forum-moderation@effectivealtruism.org. Please feel free to contact us with questions or feedback."
https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum#The_moderation_team
And there is also the online team:
https://www.centreforeffectivealtruism.org/team/
For questions like this I would use the intercom; read here how the team wants to be contacted:
https://forum.effectivealtruism.org/contact
I don't know the formula, but I think the reading time looks at the number of words and estimates how long someone would need to read this much text.
"The general adult population read 150 – 250 words per minute, while adults with college education read 200 – 300 words per minute. However, on average, adults read around 250 words per minute."
https://www.linkedin.com/pulse/how-fast-considered...
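To illustrate, here is a minimal sketch of how such an estimate might be computed, assuming a simple word-count heuristic of roughly 250 words per minute for reading and 150 for audio; the Forum's actual formula isn't documented in this thread, so treat the numbers and the function below as illustrative assumptions only.

```python
# Sketch of a word-count-based time estimate (assumed heuristic, not the Forum's actual formula).

def estimated_minutes(text: str, words_per_minute: int = 250) -> int:
    """Estimate minutes needed for a text, rounded to whole minutes (at least 1)."""
    word_count = len(text.split())
    return max(1, round(word_count / words_per_minute))

post = "word " * 3000                  # stand-in for a ~3,000-word post
print(estimated_minutes(post))         # reading at ~250 wpm -> 12 minutes
print(estimated_minutes(post, 150))    # audio at ~150 wpm   -> 20 minutes
```

At those assumed rates, an audio version naturally comes out roughly 60-70% longer than the reading estimate, which is consistent with the 12-vs-19-minute gap discussed above.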