Not really? Given my personal impression of the difficulty of the alignment problem, I would consider humanity very lucky if AGI managed to follow any set of human-defined values at all.
Also, it seems that most downsides of totalitarian regimes ultimately boil down to a lower quality of life among citizens. (For instance, a government that suppresses dissent is bad. But dissent is only valuable in that it may lead to reforms of the government, which may lead to improved lives for citizens.) Strong AI, if truly aligned with a government's aims, would probably increase the average person's quality of life to the point where this wouldn't be an issue. (Even totalitarian governments presumably prefer a better quality of life for their citizens, all else equal.)