Bio

Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.

Sequences
3

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments
349

Like Ian Turner, I ended up disagreeing and not downvoting (I appreciate the work Vasco puts into his posts).

The shortest answer is that I find the "Meat Eater Problem" repugnant and indicative of defective moral reasoning that, if applied at scale, would lead to great moral harm.[1]

I don't want to write a super long comment, but my overall feelings on the matter have not changed since this topic last came up on the Forum. In fact, I'd say that one of the leading reasons I consider myself drastically less 'EA' over the last ~6 months is the seeming embrace of the "Meat-Eater Problem" built into both the EA community and its core ideas, or at least into the more 'naïve utilitarian' end of things. To me, Vasco's bottom-line result isn't an argument that we shouldn't prevent children from dying of malnutrition or suffering from malaria because of these second-order effects.

Instead, naïve hedonistic utilitarians should be asking themselves: If the rule you followed brought you to this, of what use was the rule?

  1. ^

    I also agree factory farming is terrible. I just want to find Pareto solutions that reduce needless animal suffering and increase human flourishing.

Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
 

Best Forum Post I read this year:

Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal 

It was a tough choice this year, but I think this deep, deep dive into the different cost-effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full Google Doc they worked through is here.

This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏

Honourable Mentions:

  • Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time, as it came at a point when I was also highly concerned that the AI Safety field was having a "legitimacy problem".[1] I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the 'vibe shift' in Silicon Valley as a chaser.
  • On Owning Our EA Affiliation by @Alix Pham: One of the most wholesome EA posts on the Forum this year? The post is a bit bittersweet to me now, as I was moved by it at the time, but I now affiliate and identify with EA less than I have for a long time. The vibes around EA have not been great this year, and while many people are explicitly or implicitly abandoning the movement, Alix took the radical approach of doing the opposite. She's careful to draw a distinction between affiliation and identity, and she really engages in the comments, leading to very good discussion.
  • Policy advocacy for eradicating screwworm looks remarkably cost-effective by @MathiasKB🔸: EA Megaprojects are BACK baby! More seriously, this post had the most 'blow my mind' effect on me this year. Who knew that the US Gov already engages in a campaign of strategic sterile-fly bombing, dropping millions of them on Central America every week? I feel like Mathias did great work finding a signal here, and I'm sure other organisations (maybe an AIM-incubated one) are well placed to pick up the baton.

Forum Posters of the Year:

  • @Vasco Grilo🔸 - I presume that the Forum has a bat-signal of sorts, which lights up whenever a long discussion unfolds without anyone attempting an EV calculation. In such dire times, Vasco appears, always with amazing sincerity and thoroughness. He's probably the Forum's current poster child of 'calculate all the things' EA. I think he's been an awesome presence on the Forum this year, and long may it continue.
  • @Matthew_Barnett - Matthew is somewhat of an enigma to me ideologically; there have been many cases where I've read a position of his and gone "no, that can't be right". Nevertheless, I think the consistently high-quality nature of his contributions on the Forum, often presenting an unorthodox view compared to the rest of EA, is worth celebrating regardless of whether I personally agree. Furthermore, one of my major updates this year has been towards viewing the Alignment Problem as one of political participation and incentives, and this can probably be traced back in significant part to his posts this year.

Non-Forum Poasters of the Year:

  • Matt Reardon (mjreard on X) - X is not a nice place to be an Effective Altruist at the moment. EA seems to be attacked from all directions there, which makes it no fun at all to push back on people and defend the EA point of view. Yet Matt has consistently pushed back on some of the most egregious cases of this,[2] and has had good discussions within EA Twitter too.
  • Jacques Thibodeau (JacquesThibs on X) - I think Jacques is great. He does interesting, cool work on Alignment, and you should consider working with him if you're also in that space. One of the most positive things Jacques does on X is build bridges across the wider 'AGI Twitter', including with many who are sceptical of or even hostile to AI Safety work, like teortaxesTex or jd_pressman. I think this is to his great credit, and I've never (or rarely) seen him get that angry on the platform, which might even deserve another award!

Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.

My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me which posts/comments/contributors you thought were the best!

  1. ^

    I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy

  2. ^

    e.g. here and here

Yeah, I could have worded this better. What I mean to say is that I expect the tags 'Criticism of EA' and 'Community' co-occur on posts a lot more often than two randomly drawn tags, and probably rank quite high among all tag pairs. I don't mean to say that it's a necessary connection or should always be the case, but it does mean that downweighting Community posts will disproportionately downweight Criticism posts.

If I'm right, that is! I can probably scrape the data from 23-24 on the Forum to actually answer this question.
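
For what it's worth, here's a minimal sketch of the kind of check I have in mind, using pairwise co-occurrence counts plus a simple lift ratio. The post/tag data structure is hypothetical, just a stand-in for whatever a scrape would return, not the Forum's actual API format.

```python
from itertools import combinations
from collections import Counter

# Hypothetical input: one record per Forum post with its list of tag names,
# e.g. from a 2023-24 scrape. The structure is assumed for illustration only.
posts = [
    {"tags": ["Community", "Criticism of EA"]},
    {"tags": ["AI Safety", "Community"]},
    {"tags": ["Global health and development", "Animal welfare"]},
]

pair_counts = Counter()  # how often each tag pair appears on the same post
tag_counts = Counter()   # how often each tag appears overall

for post in posts:
    tags = sorted(set(post["tags"]))
    tag_counts.update(tags)
    pair_counts.update(combinations(tags, 2))

n_posts = len(posts)

def lift(pair):
    """How much more often a pair co-occurs than if the two tags were independent."""
    a, b = pair
    expected = (tag_counts[a] / n_posts) * (tag_counts[b] / n_posts)
    return (pair_counts[pair] / n_posts) / expected

# Rank tag pairs by raw co-occurrence and show the lift for each.
for pair, count in pair_counts.most_common():
    print(pair, count, round(lift(pair), 2))
```

If the ('Community', 'Criticism of EA') pair sits near the top of that ranking, or has a lift well above 1, that would support the claim; if not, I'd happily retract it.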

Just flagging this for context of readers, I think Habryka's position/reading makes more sense if you view it in the context of an ongoing Cold War between Good Ventures and Lightcone.[1]

Some evidence on the GV side:

To Habryka's credit, it's much easier to see what the 'Lightcone Ecosystem' thinks of OpenPhil!

  • He thinks that the actions of GV/OP were and currently are overall bad for the world.
  • I think the reason why is mostly given here by MichaelDickens on LW, Habryka adds some more concerns in the comments. My sense is that the LW commentariat is turning increasingly against OP but that's just a vibe I have when skim-reading.
  • Some of it also appears to be for reasons to do with the Lightcone aversion to "deception", broadly defined, which one can see from Habryka's reasoning in this post or in his reply here to Luke Muehlhauser. This philosophy doesn't seem to be explained in one place; I've only gleaned what I can from various posts/comments, so if someone has a clearer example, feel free to point me in that direction.
  • I think this great comment during the Nonlinear saga helps make a lot of the Lightcone v OP discourse make sense.

I was nervous about writing this because I don't want to start a massive flame war, but I think it's helpful for the EA Community to be aware that two powerful forces in it/adjacent to it[2] are essentially in a period of conflict. When you see comments from either side that seem to be more aggressive/hostile than you otherwise might think warranted, this may make the behaviour make more sense.

  1. ^

    Note: I don't personally know any of the people involved, and I live half a world away, so expect this framing to be very inaccurate. Still, it has helped me try to grasp behaviours and attitudes which otherwise seem hard to explain to me, as an outsider to the 'EA/LW in the Bay' scene.

  2. ^

    To my understanding, the Lightcone position on EA is that it 'should be disavowed and dismantled', but there's no denying that Lightcone is closer to EA than almost all other organisations in some sense.

First, I want to say thanks for this explanation. It was both timely and insightful (I had no idea about the LLM screening, for instance). So I wanted to give that a big 👍

I think something Jan is pointing to (and correct me if I'm wrong, @Jan_Kulveit) is that, because the default Community tag downweights a post's visibility and coverage, it could implicitly be used to deter engagement with certain posts. Indeed, my understanding was that this was pretty much exactly the case, and that it was driven by a desire to reduce Forum engagement on 'Community' issues in the wake of FTX. See for example:

Now, I also think the Forum was broadly supportive of this at the time. People were exhausted by FTX, it seemed like there was a new devastating EA scandal every week, and being able to downweight these discussions and focus on 'real' EA causes was understandably very popular.[1] So it wasn't necessarily a nefarious change; it was responding to user demand.

Nevertheless, I think that, especially since criticisms of EA also tend to come with the 'Community' tag attached,[2] it has also had the effect of somewhat reducing criticism and community sense-making. In retrospect, I still feel that the damage wrought by FTX hasn't had a full accounting, and that the change to downweight Community posts was treating the 'symptoms' rather than the underlying issues.

  1. ^

    I think reading the most popular comments on the linked posts supports this.

  2. ^

    Willing to change my mind on this if there's much less of an overlap between the two than between other major categories, for instance.

Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:

  • I plan to add another post in my "EA EDA" sequence analysing Forum trends in 2024. My pre-registered prediction is that we'll see 2023 patterns continue: declining engagement (though possibly plateauing) and AI Safety further cementing its dominance across posts, karma, and comments.
  • I'll also try to do another end-of-year Forum awards post (see here for last year's) though with slightly different categories.
  • I'm working on an analysis of EA's post-FTX reputation using both quantitative metrics (Forum engagement, Wikipedia traffic) and qualitative evidence (public statements from influential figures inside and outside EA). The preliminary data suggests more serious reputational damage than the recent Pulse survey found. If that discrepancy is meaningful (as opposed to methodological, or just a mistake on my part), I suspect it might highlight the difference between public and elite perception.
  • I recently finished reading former US General Stanley McChrystal's book Team of Teams. Ostensibly it's a book about his command of JSOC in the Iraq War, but it's really about the concept of Auftragstaktik as a method of command, and there was more than one passage which I thought was relevant to Effective Altruism (especially for what "Third Wave" EA might mean). This one is a stretch though: I'm not sure how interested the Forum would be in this, or whether it would be the right place to post it.

My focus for 2025 will be to work towards developing my position on AI Safety, and to share it through a series of posts in my AI Safety sequence.[1] The concept of AGI went mainstream in 2024, and it does look like we will see significant technological and social disruption in the coming decades due to AI development. Nevertheless, I find myself increasingly skeptical of traditional narratives and arguments about what Alignment is, the likelihood of risk, and what ought to be done about it. Instead, I've come to view "Alignment" primarily as a problem of political philosophy rather than of technical computer science. That said, I could very well be wrong on most or all of these ideas, and getting critical discussion from the community will, I think, be good both for me and (I hope) for the Forum readership.[2]

As such, I'm considering doing a deep-dive on the Apollo o1 report given the controversial reception it's had.[3] I think this is the most unlikely one though, as I'd want to research it as thoroughly as I could, and time is at a premium since Christmas is around the corner, so this is definitely a "stretch goal".

Finally, I don't expect to devote much more time[4] to the "Criticism of EA Criticism" sequence. I often finish the posts well after the initial discourse has died down, and I'm not sure what effect they really have.[5] Furthermore, I've started to notice my own views on a variety of topics diverging from "EA Orthodoxy", so I'm not really sure I'd make a good defender. This change may itself warrant a future post, though again I'm not committing to that yet.

  1. ^

    Which I will rename

  2. ^

    It may possibly be more helpful for those without technical backgrounds who are concerned about AI, but I'm not sure. I also think having a somewhat AGI-sceptical perspective represented on the Forum might be useful for intellectual-diversity purposes, but I don't want to claim that too strongly. I'm very uncertain about the future of AI and could easily see myself being convinced to change my mind.

  3. ^

    I'm slightly leaning towards the skeptical interpretation myself, as you might have guessed

  4. ^

    if any at all, unless an absolutely egregious but widely-shared example comes up

  5. ^

    Does Martin Sandbu read the EA Forum, for instance?

I think this is, to a significant extent, definitionally impossible with longtermist interventions, because the 'long-term' part excludes having an empirical feedback loop quick enough to update our models of the world.

For example, if I'm curious about whether malaria net distribution or vitamin A supplementation is the more 'cost-effective' of the two, I can fund interventions and run RCTs, and then model the resulting impact according to some metric like the DALY. This isn't cast-iron evidence, but it is at least causally connected to the result I care about.

For interventions that target the long-run future of humanity, this is impossible. We can't run counterfactuals of the future or the past, and I at least can't wait 1,000 years to see the long-term impact of certain decisions on the civilizational trajectory of the world. Thus, longtermist interventions cannot really get empirical feedback on the parameters of action, and must mostly rely on subjective human judgement about them.

To their credit, the EA Long-Term Future Fund says as much on their own web page:

Unfortunately, there is no robust way of knowing whether succeeding on these proxy measures will cause an improvement to the long-term future.

For similar thoughts, see Laura Duffy's thread on empirical vs reason-driven EA.

One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.

I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact, and b) there is often an inverse relationship between tractability/neglectedness and the importance/impact of a cause area or charity. Not saying you're wrong, but it's not necessarily a problem.

Furthermore, my anecdotal take from the voting patterns, as well as from the comments on the discussion thread, is that neglectedness is often high on voters' minds, though I admit that commenters on that thread are a biased sample of all those voting in the election.

It can be a bit underwhelming if an experiment to try to get the crowd's takes on charities winds up determining to, "just let the current few experts figure it out." 

Is it underwhelming? I guess if you want the donation election to be about spurring lots of donations to small, spunky EA startups working in weirder cause areas, it might be, but that isn't what I understand the intention of the experiment to be (though I could be wrong).

My take is that the election is an experiment in EA democratisation, where we get to see what the community values under a roughly one-person-one-ballot system instead of the those-with-the-money-decide system, which is how things work right now. The takeaways seem to be:

  • The broad EA community values Animal Welfare a lot more than the current major funders do.
  • The broad EA community sees value in all three of the 'big cause areas', with high-scoring charities in Animal Welfare, AI Safety, and Global Health & Development.

But you haven't provided any data 🤷

Like you could explain why you think so without de-anonymising yourself, e.g. sammy shouldn't put EA on his CV in US policy because:

  • Republicans are in control of most positions and they see EA as heavily democrat-coded and aren't willing to consider hiring people with it
  • The intelligentsia who hire for most US policy positions see EA as cult-like and/or disgraced after FTX
  • People won't understand what EA is on a CV and will discount sammy's chances compared to putting down "ran a discussion group at university" or something like that
  • You think EA is doomed/likely to collapse and sammy should pre-emptively disassociate their career from it

Like, I feel that would be interesting and useful to hear your perspective on, to the extent you can share information about it. Otherwise, jumping in with strong (and controversial?) opinions from anonymous accounts on the Forum just serves to pollute the epistemic commons, in my opinion.

Right but I don't know who you are, or what your position in the US Policy Sphere is, if you have one at all. I have no way to verify your potential background or the veracity of the information you share, which is one of the major problems with anonymous accounts.

You may be correct (though again, that lack of explanation doesn't give much detail or a mechanism for why, or help sammy much; as you said, it depends on the section), but that isn't really the point. The only data point you provide is "intentionally anonymous person on the EA Forum states opinion without supporting explanation", which is honestly pretty weak sauce.
