Is your last point meant to be AGI specific or not? I feel like it would be relatively easy to get non-zero evidence that there was a risk of everyone dying from a full nuclear exchange: you'd just need some really good modelling of the atmospheric effects that suggested a sufficiently bad nuclear winter, where the assumptions of the model themselves were ultimately traceable to good empirical evidence. Similarly for climate change being an X-risk. Sure, even good modelling can be wrong, but unless you reject climate modelling entirely, and are totally agnostic about what will happen to world temperature by 2100, I don't see how there could be an in-principle barrier here. I'm not saying we in fact have evidence that there is a significant X-risk from nuclear war or climate change, just that we could; nothing about "the future is hard to predict" precludes it.
.
It's a genuine problem for (that is, evidence against) the sort of utilitarian and consequentialist views common in EA that they in principle justify killing arbitrary numbers of innocents, provided they are replaced by people with better lives (or, for that matter, by enough people with worse-but-still-net-positive lives; the latter point is one reason total utilitarianism isn't *really* that similar to fascist ideas about master races, in my view). It's not surprising this reminds people of historical genocides carried out under racist justifications, even though in itself it implies nothing about one human ethnic group being better than another. The problem is particularly serious in my view, because:
A) If your theory gives any weight to the idea that creating happy people is good, and holds that a large enough gain in goodness can outweigh any deontic constraint against murder and other bad actions, then there will be some (hypothetical, unrealistic) cases where killing to replace is morally right. It's hard to think of any EA moral philosophy that doesn't allow that deontic constraints can sometimes be overridden if the stakes are high enough. Though of course there are EAs who reject the view that creating happy people is good rather than neutral.
B) Whilst I struggle to imagine a realistic situation where actual murder looks like the best course of action from a utilitarian perspective, there is a fairly closely related problem involving AI safety and total utilitarianism. Pushing ahead with AI (at least if you believe the arguments for AI X-risk*) carries some chance that everyone will be suddenly murdered. But delaying highly advanced AI carries some risk that we never reach highly advanced AI at all, with the concomitant loss of a large future population of humans and digital minds. (If nothing else, there could be a thermonuclear war that collapses civilization, and then we could fail to re-industrialize because we've used up the most easily accessible fossil fuels.) Thinking through how much risk of being suddenly murdered it is okay to impose on the public (again, assuming you buy the arguments for AI X-risk) involves deciding how to weigh the interests of current people against the interests of potential future people. It's a problem if our best theories for doing that look like they'd justify atrocity in imagined cases, and justify imposing counter-intuitive and publicly unacceptable levels of risk in the actual case. (Of course, advanced AI might also bring big benefits to currently existing people, so it's more complicated than just "bad for us, good for the future".)
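To make the trade-off concrete, here's a toy sketch of how the total-utilitarian calculation might go (my illustration, with made-up symbols; not something the theories or anyone in this thread is committed to). Write $V_c$ for the total welfare of currently existing people, $V_f$ for the total welfare of the potential future population of humans and digital minds, $p$ for the probability that pushing ahead gets everyone suddenly killed, and $\Delta$ for the net increase, from pushing ahead rather than delaying, in the probability that the large future is ever realized. Then, very roughly, pushing ahead comes out better whenever
$$p \cdot V_c \;<\; \Delta \cdot V_f,$$
and since $V_f$ is supposed to be astronomically larger than $V_c$, even a tiny $\Delta$ can outweigh a substantial $p$, i.e. a non-trivial risk of sudden death imposed on everyone alive today. That's the counter-intuitive implication at issue.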
It's probably more worthwhile for people to (continue) think(ing)** through the problems for the moral theories here than to get wound up about the fact that some American left-wingers sometimes characterize us unfairly in the media. (And I do agree that people like Torres and Gebru sometimes say stuff about us that is false or misleading.)
*I am skeptical, but enough people in AI are worried that I don't think we can be confident enough that there's no X-risk to just ignore it.
**I'm well aware a lot of hard thought has gone into this already.
.
'...the thought of a superpowerful AI that shares the value system of e.g. LessWrong is slightly terrifying to me.'
Old post, but I've been meaning to say this for several months: whilst I am not a fan of Yudkowsky, I do think his stuff about this showed a fair amount of sensitivity to the idea that it would be unfair for a particular group of people to just program their values into the AI, taking no heed of the fact that humans disagree. (Not that that means there is no reason to worry about the proposal to build a "good" AI that runs everything.)
His original proposal (since abandoned, I think) was that we would give the AI a goal like 'maximize things all or pretty much all fully informed humans would agree are good, minimize things all or almost all fully informed humans would agree are bad, and where humans would disagree about whether something is good or bad even after being fully informed of all relevant facts, try to minimize your impact on that thing and leave it up to humans to sort out amongst themselves.' (Not an exact rendition, but close enough for present purposes.) Of course, there's a sense in which that still embodies liberal democratic values about what is fair, but I'm guessing that if you're a contemporary person with a humanities degree, you probably share those very broad and abstract values.
This sounds kind of plausible to me, but couldn't you equally say that you'd expect autism to be a mitigating factor against cultishness, because cults are about conforming to group shibboleths for social reasons, which autistic people are better at avoiding? (At least, I'd have thought we are. Maybe only I have that perception?) That kind of makes me think it is just easy to generate plausible-sounding hypotheses about the effects of a fairly broad and nebulous thing like "autistic traits", and maybe none of them should be taken that seriously without statistical evidence to back them up.
There are probably good proxies for climate effects, though: i.e. reductions in more measurable stuff, so I think the situation is not that analogous to AI. And some global health and development work does involve outcomes we actually care about that are hard to measure: e.g. deworming and its possible positive effects on later earnings, and presumably well-being. We know deworming gets rid of worms, but the literature on the further benefits of this is famously contentious.