Thank you for reading and for your insightful reply!
I think you've correctly pointed out one of the cruxes of the argument: that humans have average "quality of sentience," as you put it. In your analogous examples (except for the last one), we have a lot of evidence to compare things to. We can say with relative confidence where our genetic line or academic research stands in relation to what might replace it, because we can measure what average genes or research are like.
So far, we don't have this ability for alien life. If we start updating our estimate of the number of alien life forms in our galaxy, their "moral characteristics," whatever that might mean, will be very important for the reasons you point out.
Thank you for reading and for your detailed comment. In general I would agree that my post is not a neutral survey of the VWH but a critical response, and I think I made that clear in the introduction even if I did not call it red-teaming explicitly.
I'd like to respond to some of the points you make.
Bostrom may have made this point elsewhere, since I've heard other people say it, but he doesn't make it in the paper. He only mentions AI briefly, as a tool the panopticon government could use to analyze the video and audio coming in from its surveillance. He also says:
"Being even further removed from individuals and culturally cohesive ‘peoples’ than are typical state governments, such an institution might by some be perceived as less legitimate, and it may be more susceptible to agency problems such as bureaucratic sclerosis or political drift away from the public interest."
He also considers what might be required for a global state to bring other world governments to heel. So I don't think he is assuming that the state can completely ignore all dissent or resistance because it FOOMs into an all-powerful AI.
Either way, I think that is a really bad argument. It's basically just saying "if we had aligned superintelligence running the world, everything would be fine," which is almost tautologically true. But what are we supposed to conclude from that? I don't think it tells us anything about increasing state power on the margin. Also, aligning the interests of powerful AI with a powerful global state is not sufficient for aligning AI with humanity more generally, because powerful global states are not very well aligned with the interests of their constituents.
My reading is that Bostrom is making arguments about how human governance would need to change to address risks from some types of technology. The arguments aren't explicitly contingent on any AI technology that isn't available today.
Bostrom says in the policy recommendations:
"Some areas, such as synthetic biology, could produce a discovery that suddenly democratizes mass destruction, e.g. by empowering individuals to kill hundreds of millions of people using readily available materials. In order for civilization to have a general capacity to deal with “black ball” inventions of this type, it would need a system of ubiquitous real-time worldwide surveillance. In some scenarios, such a system would need to be in place before the technology is invented."
So if we assume that some black balls like this are in the urn, which I do in the essay, this is a position that Bostrom explicitly advocates, not just one that he analyzes. But even assuming that the VWH is true and a technology like this does exist, I don't think this policy recommendation is helpful.
State-enforced "ubiquitous real-time worldwide surveillance" is neither necessary nor sufficient to address a type-1 vulnerability like this, unless the definition of a type-1 vulnerability trivially assumes that it is. Advanced technology that democratizes protection, like vaccines, PPE, or drugs, can alleviate a risk like this, so a panopticon is not necessary. And a state with ubiquitous surveillance need not stop pandemics to stay rich and powerful; indeed, it may create them to keep its position.
Even if we knew a black ball was coming, setting up a panopticon would probably do more harm than good, and it certainly would if we didn't come up with any new ways of aligning and constraining state power. I don't think Bostrom would agree with that statement but that is what I defend in the essay. Do you think Bostrom would agree with that on your reading of the VWH?
This might be the best strategy if we're all eventually doomed. Although it might turn out that the tech required to colonize planets only comes after a bunch of black balls; nuclear rockets and some bio-tech, at least, seem likely to be needed.
Even Bostrom doesn't think we're inevitably doomed, though. He just thinks that global government is the only escape hatch.
Thank you for reading! I definitely agree that liberalism has tons of other important qualities. I wanted to make the argument solely in the language of existential risk, though, for two reasons:
This is fair. I got a little sloppy with my language there, but elsewhere I note that Bostrom is arguing for this state "pro tanto," not "all things considered." My reading is that the panopticon proposal is mostly a rhetorical strategy to give a concrete image of what the massive expected values of existential risk might justify.
I still think that his narrow claim is wrong, though: that mass surveillance would be necessary given "a biotechnological black ball that is powerful enough that a single malicious use could cause a pandemic that would kill billions of people."
Even in the conditional world where the deadly pandemic exists, mass surveillance is only good if it is used to actually stop the pandemic and does not cause more harm afterwards. I don't think either of those things is very likely if the surveillance is attached to any form of government we're familiar with. Mass surveillance isn't even necessary; it's just one possible technological solution to a bio-tech black ball. Really good vaccines, PPE, or genetic improvements to the immune system would also suffice.
Thank you for reading and for the kind words :)