[This post was written quickly and presents the idea in broad strokes. I hope it prompts more nuanced and detailed discussions in the future.]
In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future.
In response to this significant shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought to how to support non-AI causes. This has often led to discussions being framed as "AI Safety vs. everything else", and it feels like the community is somewhat divided along the following lines:
- Those working on AI Safety, because they believe that transformative AI is coming.
- Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]
Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that?
If we accept that AI is likely to reshape the world over the next 10–15 years, this realisation has major implications for all cause areas. But as a starting point, we should seriously ask ourselves: "Are current GHW & animal welfare projects robust to a future in which AI transforms economies, governance, and global systems?" If they aren't, they are unlikely to be the best use of resources.
Importantly, this isn't an argument that everyone should work on AI Safety. It's an argument that every cause area needs to integrate the implications of transformative AI into its theory of change and strategic framework. To ignore these changes is to risk misallocating resources and pursuing projects that won't stand the test of time.
[1] Important to note: many people believe that AI will be transformative, but choose not to work on it due to factors such as (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
Point 1: Broad agreement with a version of the original post's argument
Thanks for this. I think I agree with you that people in the global health and animal spaces should, at the margin, think more about the possibility of Transformative AI (TAI), and short-timeline TAI.
For animal-focussed people, maybe there’s an argument that, because the default path of a non-TAI future is likely so bad for animals (e.g. persuading people to stop eating animals is really hard, persuading people to intervene to help wild animals is really hard, etc.), we might actually want to heavily “bet” on futures *with* TAI, because only those futures hold out the prospect of a big reduction in animal suffering. So we should optimise our actions for worlds where TAI happens, and try to maximise the chances that those futures go very well for non-human animals.
I think this is likely less true for global health and wellbeing, where the global trends plausibly look a lot better.
Point 2: Some reasons to be sceptical about claims of short-timeline Transformative AI
Having said that, there’s something about the apparent certainty that “TAI is nigh” in the original post which prompted me to scribble down some push-back-y thoughts. Below are some plausible-sounding-to-me reasons to be sceptical about high-certainty claims that TAI is close. I don’t pretend that these lines of thought in and of themselves demolish the case for short-timeline TAI, but I do think they are worthy of consideration and discussion, and I’d be curious to hear what others make of them:
I’m not sure that the observable trends in current AI capabilities definitely point to an almost-certainty of TAI. I love using the latest LLMs, I find them amazing, and I do find it plausible that next-gen models, plus making them more agent-like, might be amazing (and scary). And I find it very, very plausible to imagine big productivity boosts in knowledge work. But the claim that this will almost certainly lead to a rapid and complete economic/scientific transformation still feels at least a bit speculative to me.
To restate: I don’t think any of these points torpedo the case for thinking that TAI is inevitable and/or imminent. I just think they are valid considerations when thinking about this topic, and are worthy of consideration and discussion, as we try to decide how to act in the world.
Cheers, and thanks for the thoughtful post! :)