JWS 🔸

3758 karma · Joined

Bio

Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.

Sequences
3

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments
311

No, really, I appreciated your perspective, both on SMA and what we mean when we talk about 'EA'. It's definitely given me some good food for thought :)

Feels like you've slightly misunderstood my point of view here Lorenzo? Maybe that's on me for not communicating it clearly enough though.

For what it's worth, Rutger has been donating 10% to effective charities for a while and has advocated for the GWWC pledge many times...So I don't think he's against that, and lots of people have taken the 10% pledge specifically because of his advocacy

That's great! Sounds very 'EA' to me 🤷

I think this mixes effective altruism ideals/goals (which everyone agrees with) with EA's specific implementation, movement, culture and community.

I'm not sure everyone really does agree; some people have foundational moral differences. But that aside, I think effective altruism is best understood as a set of ideas/ideals/goals. I've been arguing that on the Forum for a while and will continue to do so. So I don't think I'm mixing them, I think the critics are.

This doesn't mean that they're not pointing out very real problems with the movement/community. I still strongly think that the movement has a lot of growing pains/reforms/reckonings to go through before we can heal the damage of FTX and onwards.

The 'win by ippon' was just a jokey reference to Michael Nielsen's 'EA judo' phrase, not me advocating for soldier over scout mindset.

If we want millions of people to e.g. give effectively, I think we need to have multiple "movements", "flavours" or "interpretations" of EA projects.

I completely agree! Like 100000% agree! But that's still 'EA'? I just don't understand trying to draw such a big distinction between SMA and EA in the case where they reference a lot of the same underlying ideas.

So I don't know, it feels like we're violently agreeing here or something? I didn't mean to suggest anything otherwise in my original comment, and I even edited it to make clearer that I was more frustrated at the interviewer than anything Rutger said or did (it's possible that a lot of the non-quoted phrasing was put in his mouth).

Just a general note, I think adding some framing of the piece, maybe key quotes, and perhaps your own thoughts as well would improve this from a bare link-post? As for the post itself:

It seems Bregman views EA as:

a misguided movement that sought to weaponize the country’s capitalist engines to protect the planet and the human race

Not really sure how donating ~10% of my income to Global Health and Animal Welfare charities matches that framework, tbqh. But yeah, 'weaponize' is highly aggressive language here; if you take it out, there's not much wrong with the description. Maybe Rutger or the interviewer think Capitalism is inherently bad or something?

effective altruism encourages talented, ambitious young people to embrace their inner capitalist, maximize profits, and then donate those profits to accomplish the maximum amount of good.

Are we really doing the earn-to-give thing again here? But snark aside, there isn't really an argument here, apart from again implicitly associating capitalism with badness. EA people have also warned about the dangers of maximisation before, so this isn't unknown to the movement.

Bregman saw EA’s demise long before the downfall of the movement’s poster child, Sam Bankman-Fried

Is this implying that EA is dead (news to me) or that it is in terminal decline (arguable, but knowledge of the future is difficult etc etc)?

he [Rutger] says the movement [EA] ultimately “always felt like moral blackmailing to me: you’re immoral if you don’t save the proverbial child. We’re trying to build a movement that’s grounded not in guilt but enthusiasm, compassion, and problem-solving.”

I mean, this doesn't sound like an argument against EA or EA ideas? It's perhaps why Rutger felt put off by the movement, but if you want a movement based on 'enthusiasm, compassion, and problem-solving' (which are still very EA traits to me, btw), then presumably that's because it would do more good than a movement wracked by guilt. This just falls victim to classic EA Judo; we win by ippon.

I don't know, maybe Rutger has written up his criticism more thoroughly somewhere. I feel like this article is such a weak summary of it, though, and it just leaves me feeling frustrated. And in a bunch of places, it's really EA! See:

  • Using Rob Mather founding AMF as a case study (and who has a better EA story than AMF?)
  • Pointing towards reducing consumption of animals via less meat-eating
  • Even explicitly admires EA's support for "non-profit charity entrepreneurship"

So where's the EA hate coming from? I think 'EA hate' is too strong and is mostly/actually coming from the interviewer, maybe more than Rutger. Seems Rutger is very disillusioned with the state of EA, but many EAs feel that way too! Pinging @Rutger Bregman or anyone else from the EA Netherlands scene for thoughts, comments, and responses.

With existential risk from unaligned AI, I don't think anyone has ever told a very clear story about how AI will actually get misaligned, get loose, and kill everyone. 

This should be evidence against AI x-risk![1] Even in the atmospheric ignition case at Trinity, they had more concrete models to work with. If we can't build a concrete model here, then that implies we don't have a concrete/convincing case for why it should be prioritised at all, imo. It's similar to the point in my footnotes that you need to argue for both p and p->q, not just the latter. This is what I would expect to see if the case for p was unconvincing/incorrect.

I don't think this is a problem: we shouldn't expect to know all the details of how things go wrong in advance

Yeah, I agree with this. But the uncertainty and cluelessness about the future should decrease one's confidence that one is working on the most important thing in the history of humanity, one would think.

and it is worthwhile to do a lot of preparatory research that might be helpful so that we're not fumbling through basic things during a critical period. I think the same applies to digital minds.

I'm all in favour of research, but how much should that research get funded? Can it be justified above other potential uses of money and general resources? Should it be an EA priority as defined by the AWDW framing? These were (almost) entirely unargued for.

  1. ^

    Not dispositive evidence perhaps, but a consideration

It also seems like you're mostly critiquing the tractability of the claim and not the underlying scale nor neglectedness?

Yep, everyone agrees it's neglected. My strongest critique is the tractability, which may be so low as to discount astronomical value. I do take a lot of issue with the scale as well though. I think that needs to be argued for rather than assumed. I also think trade-offs from other causes need to be taken into account at some point too.

And again, I don't think there's no arguments that can make traction on the scale/tractability that can make AI Welfare look like a valuable cause, but these arguments clearly weren't made (imho) in AWDW

I don't quite know what to respond here.[1] If the aim was to discuss something different, then I guess there should have been a different debate prompt? Or maybe it shouldn't have been framed as a debate at all? Maybe it should have just prioritised AI Welfare as a topic and left it at that. I'd certainly have less of an issue with the posts that were written, and certainly wouldn't have been confused by the voting, if there hadn't been a voting slider.[2]

  1. ^

So I probably won't - we seem to have strong differing intuitions and interpretations of fact, which probably makes communication difficult

  2. ^

    But I liked the voting slider, it was a cool feature!

Thanks for the extensive reply, Derek :)

Even if you think that AI welfare is important (which I do!), the field doesn't have the existing talent pipelines or clear strategy to absorb $50 million in new funding each year.

Yep, completely agree here, and as Siebe pointed out I did go to the extreme end of 'make the changes right now'. It could be structured in a more gradual way, with potential for more external funding.

The fact that something might have a huge scale and we might be able to do something about it is enough for it to be taken seriously and provides prima facie evidence that it should be a priority. 

I agree in principle on the huge scale point, but much less so on the 'might be able to do something'. I think we need a lot more than that; we need something tractable to get going, especially for something to be considered a priority. The general form of argument I've seen this week is that AI Welfare could have a huge scale, therefore it should be an EA priority, without much to flesh out the 'do something' part.

AI persons (or things that look like AI persons) could easily be here in the next decade...AI people (of some form or other) are not exactly a purely hypothetical technology, 

I think I disagree empirically here. Counterfeit "people" might be here soon, but I am not moved much by arguments that digital 'life' with full agency, self-awareness, autopoiesis, moral values, moral patienthood etc will be here in the next decade. Especially not easily here. I definitely think that case hasn't been made, and I think (contra Chris in the other thread) that claims of this sort should have been argued for much more strongly during AWDW.

We might have that opportunity now with AI welfare. Perhaps this means that we only need a small core group, but I do think some people should make it a priority.

A small group of people should, I agree. Funding Jeff Sebo and Rob Long? Sounds great. Giving them 438 research assistants and $49M in funding taken from other EA causes? Hell to the naw. We weren't discussing whether AI Welfare should be a priority for some EAs, we were discussing the specific terms set out in the week's statement, and I feel like I'm the only person this week who paid any attention to them.

Secondly, the 'we might have that opportunity' is very unconvincing to me. It's about as convincing as saying in 2008: "If CERN is turned on, it may create a black hole that destroys the world. Nobody else is listening. We might only have the opportunity to act now!" It's just not enough to be action-guiding, in my opinion.

I'm well aware the above is unfair to strong advocates of AI Safety and AI Welfare, but at the moment that's roughly where the quality of this week's arguments has stood from my viewpoint.

I think it’s very valuable for you to state what the proposition would mean in concrete terms.

It's not just concrete terms, it's the terms we've all agreed to vote on for the past week!

On the other hand, I think it’s quite reasonable for posts not spend time engaging with the question of whether “there will be vast numbers of AIs that are smarter than us”.

I think I just strongly disagree on this point. Not every post has to re-argue everything from the ground up, but I think every post does need at least a link or some backing for why it believes that. Are people anchoring on Shulman/Cotra? Metaculus? Cold Takes? General feelings about AI progress? Drawing lines on graphs? Specific claims about the future that reference only scaled-up transformer models? These are all very different grounds for the proposition, and they differ in terms of types of AI, timelines, etc.

AI safety is already one of the main cause areas here and there’s been plenty of discussion about these kinds of points already.

If someone has something new to say on that topic, then it’d be great for them to share it, otherwise it makes sense for people to focus on discussing the parts of the topic that have not already been covered as part of the discussions on AI safety.

I again disagree, for two slightly different reasons:

  1. I'm not sure how good the discussion has been about AI Safety. How much have these questions and cruxes actually been internalised? Titotal's excellent series on AI risk scepticism has been under-discussed in my opinion. There are many anecdotal cases of EAs (especially younger, newer ones) simply accepting the importance of AI causes through deference alone.[1] At the latest EAG London, when I talked about AI risk skepticism I found surprising amounts of agreement with my positions even amongst well-known people working in the field of AI risk. There was certainly an interpretation that the Bay/AI-focused wing of EA weren't interested in discussing this at all.
  2. Even if something is consensus, it should still be allowed (even encouraged) to be questioned. If EA wants to spend lots of money on AI Welfare (or even AI Safety), it should be very sure that it is one of the best ways we can impact the world. I'd like to see more explicit red-teaming of this in the community, beyond just Garfinkel on the 80k podcast.

 

  1. ^

I also met a young uni organiser who was torn about AI risk, since they didn't really seem to be convinced of it but felt somewhat trapped by the pressure they felt to 'toe the EA line' on this issue

Seems needlessly provocative as a title, and almost purposefully designed to generate more heat than light in the resulting discussion.

I think I'd rather talk about the important topic even if it's harder? My concern is, for example, that the debate happens, people agree, and they start pushing to move $ from GHD to AW. But this ignores a third option: move $ from 'longtermist' work to fund both.

Feels like a 'looking under the streetlight because it's easier' kind of phenomenon.

If Longtermist/AI Safety work can't even begin to cash out measurable outcomes, that should be a strong case against it. This is EA; we want the things we're funding to be effective.
