Effective Altruism Forum

Why Not Try Build Safe AGI?

List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans

by Remmelt · Dec 24, 2022 · 1 min read

Tags: AI safety · AI alignment · AI risk
Previous: List #1: Why stopping the development of AGI is hard but doable
Next: List #3: Why not to assume on prior that AGI-alignment workarounds are available
Crossposted from LessWrong.