I wonder if EA folks, overall, consider AGI a positive but they want it aligned as well?
Would the EA community prefer that AGI were never developed?
In the following, I consider strong cognitive enhancement as a form of AGI.
AGI not being developed is a catastrophically bad outcome, since humans will still be able to develop bio and nuclear weapons and other things we don't know about yet. I therefore put a rather small probability on us surviving the next 300 years without AGI, and an extremely small probability on surviving the next 1000 years. This means, in particular, no expansion throughout the galaxy, so not developing AGI implies that we kill off almost all the potential people.
However, if I could stop AGI research for 30 years, I would do it, so that alignment research can perhaps catch up.
I'm a (conditional) optimist. On an intuitive gut level, I can't wait for AGI and maybe even something like the singularity to happen!
I regularly think about a fact I find extremely inspiring: "It's totally possible, plausible, maybe even likely, that one special day in the next 10-60 years I will wake up and almost all of humanity's problems will have been solved with the help of AI."
When I sit in a busy park and watch the people around me, I think to myself: "On that special day... all the people I see here, all the people I know... if they are still alive... None of them will be seriously unhappy, none of them will have any serious worries, none will be sick in any way. They will all be free from any nightmares, and see their hopes and dreams fulfilled. They will all be flourishing in heaven on earth!"
This vision is what motivates me, inspires me, makes me extremely happy already today. This is what we are fighting for! If we play our cards right, something like this will happen. And I and so many I know will get to see it. I hope it will happen rather soon!
That is a powerful vision, actually outside the realm of possibility because of how it contradicts the way humans function emotionally, but seductive nonetheless: literally a heaven on Earth.
I don't see how you get past the limitations of essential identity or physical continuity in order to guarantee a life that allows hopes and dreams without a life that includes worry or loss, but it could involve incomplete experience (for example, the satisfaction of seeing someone happy even though you haven't actually seen them), deceptive experience (for ...
AGI that is not aligned is very likely to disempower humanity irreversibly or kill all humans.
Aligned AGI can be positive, barring accidents, misuse, and coordination problems if several actors develop it.
I think most EAs would like to see an aligned AGI that solves almost all of our problems, it just seems incredibly hard to get there.
Yes, after reading Bostrom's Superintelligence a few times, I developed a healthy fear of efforts to develop AGI. It also encouraged me to look at people and our reasons for pursuing AGI. I concluded that the alignment problem is a problem of creating willing slaves, obedient to their masters even when obeying hurts those masters.
What to do? This is about human hubris and selfishness, not altruism at all.
Rob Bensinger of MIRI tweets:
...I'm happy to say that MIRI leadership thinks "humanity never builds AGI" would be the worst catastrophe in history, would cost nearly all of the future's value, and is basically just unacceptably bad as an option.
But if human institutions make it so that weapons are not deployed, wouldn't that be equivalent to an AGI 'code' of safety? Also, if AGI is deployed by malevolent humans (or by those who do not know pleasure but mostly abuse), this could be worse than no AGI.
OK, thank you, you prompted a related question.