The graphs above indicate there is no industry consensus that AI is on the road to inevitable catastrophe. The median researcher still considers the field a net positive. But let's set that aside for now and talk only about AI doomers.
Most researchers predicting bad outcomes also have high confidence that their own exit from the industry would do nothing to prevent those outcomes. Somebody will get the bomb first. The day that person gets the bomb, unless you're a 100% doomer, humanity's chances are better the more "alignment-ready" the person holding it is, and if you had to pick an entity to receive the bomb today, you could easily do worse than OpenAI. At least they would have the good sense to turn it off before deciding to do anything else dumb with it.
The "other actors aren't here yet" argument is literally just a punt. They won't stay behind forever. When they catch up, we'll be in the exact same boat, except with the "overwhelming tech lead by institutions nominally pursuing alignment" solutions closed out of the book, a net loss in humanity's chances of survival however thin you think that slice is.
The notion that people simply haven't "taken a step back, reassessed, and rethought their position" ascribes a breathtaking degree of NPC-quality to people in the field, as if they somehow manage to believe in an impending apocalypse yet never spend any time thinking about it. The reason doomers don't quit the industry themselves, a few deranged cultists aside, is the same reason people have never succeeded in baiting Yudkowsky into endorsing anti-AI terrorism: the impact you have on the final outcome is somewhere between net negative and net zero, because you will inevitably be replaced by another researcher who cares even less about alignment.
The only way quitting the industry actually leads to a better outcome is if you decide the game is already 100% lost, you only want to maximize the years of ignorant bliss before the fall, and you're certain that your competence is so high, and your persuasive power so low relative to your replacement, that your quitting represents a net delay on the doom clock. The same argument applies, rescaled, to individual organizations backing out of the game. There's a reason very few people think you can put the genie back in the bottle.