You seem to be lumping people like Richard Ngo, who is fairly epistemically humble, in with people who are absolutely sure that the default path leads to us all dying. It is only the latter that I'm criticizing.
I agree that AI poses an existential risk, in the sense that it is hard to rule out that the default path carries a serious chance of ending civilization. That's why I work on this problem full-time.
I do not agree that it is absolutely clear that the default instrumental goals of an AGI entail its killing literally everyone, as the OP asserts.
(I provide some links to views dissenting from this extreme confidence here.)
To be clear, mostly I'm not asking for "more work", I'm asking people to use much better epistemic hygiene. I did use the phrase "work much harder on its epistemic standards", but by this I mean please don't make sweeping, confident claims as if they are settled fact when there's informed disagreement on those subjects.
Nevertheless, some examples of the sort of informed disagreement I'm referring to:
Notably, the extreme doomer contingent has largely failed even to understand, never mind engage with, some of these arguments, frequently and lazily pattern-matching them to more basic misconceptions and misrepresenting them accordingly. A typical example is thinking that Matthew Barnett and I have been saying that GPT's understanding of human values is evidence against the MIRI/doomer worldview (after all, "the AI knows what you want but does not care, as we've said all along"), when in fact we're saying there's evidence we have actually succeeded in pointing GPT at those values.
It's fine if you have a different viewpoint. Just don't express that viewpoint as if it's self-evidently right when there's serious disagreement on the matter among informed, thoughtful people. An article like the OP, which claims that the labs should shut down, should at least try to engage with the views of someone who thinks the labs should not shut down, rather than pretending such people are fools unworthy of mention.
These essays are well known and I'm aware of basically all of them. I deny that there's a consensus on the topic, that the essays you link are representative of the range of careful thought on the matter, or that the arguments in these essays are anywhere near rigorous enough to meet my criterion: justifying the degree of confidence expressed in the OP (and some of the posts you link).
I'll go further and say that I think those two claims are believed by many in the AI safety world (in which I count myself) with a degree of confidence that goes way beyond what can be justified by any argument that has been provided by anyone, anywhere. I think this is a huge epistemic failure on the part of that portion of the AI safety community.
I strongly downvoted the OP for two reasons: it makes these broad, sweeping, controversial claims as if they were established fact and obviously correct, rather than one possible way the world could be that requires good arguments to establish; and it makes no serious attempt to understand or engage with the viewpoints of people who disagree that these organizations shutting down would be the best thing for the world.
I would like the AI safety community to work much harder on its epistemic standards.
Another easy thing you can do, which I did several years ago, is download Kiwix onto your phone, which lets you save offline versions of references such as Wikipedia, WikiHow, and way, way more. Then also buy a solar-powered or hand-crank USB charger (often built into disaster radios such as this one, which I purchased).
For extra credit, store this data on an old phone you no longer use, and keep that and the disaster radio in a Faraday bag.
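If you'd rather script the data side of this instead of downloading archives through the Kiwix app itself, here is a minimal Python sketch of fetching a ZIM archive (the offline-content format Kiwix reads) so it can be copied onto a phone or spare device. The base URL points at Kiwix's public download server; the specific archive path is a placeholder assumption, so check the server's directory listing for current filenames and sizes.

```python
# Minimal sketch: stream a ZIM archive (the offline-content format Kiwix reads)
# to disk so it can be copied onto a phone or spare device alongside the Kiwix app.
# The archive path below is a placeholder; browse https://download.kiwix.org/zim/
# for current filenames and sizes before running.
import shutil
import urllib.request

BASE_URL = "https://download.kiwix.org/zim/"              # Kiwix's public ZIM mirror
ARCHIVE = "wikipedia/wikipedia_en_simple_all_nopic.zim"   # placeholder archive path

def fetch_zim(base_url: str, archive: str, dest: str) -> None:
    """Download a ZIM archive in a streaming fashion (no full in-memory copy)."""
    url = base_url + archive
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        shutil.copyfileobj(response, out)
    print(f"Saved {archive} to {dest}")

if __name__ == "__main__":
    fetch_zim(BASE_URL, ARCHIVE, "offline_reference.zim")
```

Once the .zim file is on the device, the Kiwix app (or kiwix-serve on a computer) can open it entirely offline.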
Oh lol, thanks for explaining! Sorry for misunderstanding you. (It's a pretty amusing misunderstanding though, I think you'd agree.)