Hmm true, I gave it the whole Greater Wrong page of comments, maybe it just didn't quote from those for some reason.
fyi for anyone like me who doesn't have lots of the backstory here and doesn't want to read through Habryka's extensive corpus of EAF writings, here is Claude 3.7 Sonnet's summary based on the first page of comments Habryka links to.
Based on Habryka's posts, I can provide a summary of his key disagreements with EA leadership and forum administrators that ultimately led to his decision to leave the community.
Key Disagreements
- Leadership and Accountability: Habryka repeatedly expresses concern about what he sees as a "leaderless" EA community. He believes the community has shifted from being driven by independent intellectual contributors to being determined by "a closed-off set of leaders with little history of intellectual contributions." He argues that almost everyone who was historically in leadership positions has stepped back and abdicated their roles.
- Institutional Integrity: He criticizes EA organizations, particularly CEA (Centre for Effective Altruism), for prioritizing growth, prestige, and public image over intellectual integrity. In his posts, he describes personal experiences at CEA where they "deployed highly adversarial strategies" to maintain control over EA's public image and meaning.
- FTX Situation: Habryka was particularly critical of how EA leadership handled Sam Bankman-Fried (SBF) and FTX. He claims to have warned people about SBF's reputation for dishonesty, but these warnings were not heeded. He criticizes Will MacAskill and others for their continued endorsement of SBF despite red flags, and was frustrated by the lack of transparency and open discussion after FTX's collapse.
- Risk-Aversion and PR Focus: He repeatedly criticizes what he perceives as excessive risk-aversion and PR-mindedness among EA organizations. He argues this approach prevents honest discussion of important issues and contributes to a culture of conformity.
- Funding Centralization: Habryka expresses concern about EA funding being increasingly centralized through a single large foundation (likely referring to Open Philanthropy), arguing this concentration of resources creates unhealthy power dynamics.
- Community Culture: He criticizes the shift in EA culture away from what he describes as "a thriving and independent intellectual community, open to ideas and leadership from any internet weirdo" toward something more institutional and conformist.
- Failure to Create Change: Habryka states that he no longer sees "a way for arguments, or data, or perspectives explained on this forum to affect change in what actually happens with the extended EA community," particularly in domains like AI safety research and community governance.
His departure post suggests a deep disillusionment with the direction of the EA community, expressing that while many of the principles of EA remain important, he believes "EA at large is causing large harm for the world" with "no leadership or accountability in-place to fix it." He recommends others avoid posting on the EA Forum as well, directing them to alternatives like LessWrong.
I think I would have found this more interesting/informative if the scenarios (or other key parts of the analysis) came with quantitative forecasts. I realise of course this is hard, but without this I feel like we are left with many things being 'plausible'. And then do seven "plausible"s sum to make a "likely"? Hard to say! That said, I think this could be a useful intro to arguments for short timelines to people without much familiarity with this discourse.
Good points; I agree with this, and trends 1 and 3 seem especially important to me. As you note, though, the competitive (and safety) reasons for secrecy and research automation probably dominate.
Another thing current trends in AI progress imply, though, is that it seems (far) less likely that the first AGIs will be brain emulations. This in turn makes it less likely that AIs will be moral patients (I think), which I am inclined to think is good, at least until we are wise and careful enough to create flourishing digital minds.
Two quibbles:
My sense is that, of the many EAs who have taken earning-to-give (EtG) jobs, quite a few have remained fairly value-aligned? I don't have any data on this and am just going on vibes, but I would guess significantly more than 10%. That is some reason to think the same would hold for people joining AI companies, though plausibly the finance company's values are merely orthogonal to EA, while the AI company's values (or at least plans) might be more directly opposed.
The comment that Ajeya is replying to is this one from Ryan, who says his timelines are roughly the geometric mean of Ajeya's and Daniel's original views in the post. That is sqrt(4*13) = 7.2 years from the time of the post, so roughly 6 years from now.
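For concreteness, here is a minimal sketch of that calculation. The ~4 and ~13 year medians come from the original post as described above; the ~1.2-year age of the post at the time of Ryan's comment is my assumption, used only to back out the "roughly 6 years from now" figure:

```python
from math import sqrt

# Medians from the original post (Daniel ~4 years, Ajeya ~13 years to the milestone).
daniel_years = 4
ajeya_years = 13
post_age_years = 1.2  # assumption: rough age of the post when the comment was written

geo_mean = sqrt(daniel_years * ajeya_years)  # sqrt(52) ≈ 7.2 years from the post
years_from_now = geo_mean - post_age_years   # ≈ 6 years from the comment date

print(f"{geo_mean:.1f} years from the post, ~{years_from_now:.0f} years from now")
```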
As Josh says, the timelines in the original post were answering the question "Median Estimate for when 99% of currently fully remote jobs will be automatable".
So I think it was a fair summary of Ajeya's comment.
Good point; I agree that ideally that would be the case, but my impression (from the outside) is that OP is somewhat capacity-constrained, especially for technical AI grantmaking? If so, then if non-OP people feel they can make useful grants now, those could still be more valuable, given the likelihood that OP scales up and does more AI grantmaking in coming years. But all that is speculation; I haven't thought carefully about the value of donations over time, beyond personally deciding not to save all my donations for later.
I suppose it depends on whether the counterfactual is that the two parties to the bet donate the 10k to their preferred causes now, donate the 10k inflation-adjusted in 2029, or don't donate it at all. Insofar as we think donations now are better (especially for someone with short AI timelines), there might be a big difference between the value of money now and the value of money after (hypothetically) winning the bet.
Pablo and I were trying to summarise the top page of Habryka's comments that he linked to (~13k words), not the departure post itself.