Effective Altruism Forum

Intent alignment should not be the goal for AGI x-risk reduction

by johnjnay
Oct 26, 2022 · 1 min read

Tags: AI safety · Existential risk · AI risk · AI alignment · Aligned AI · AI governance · Artificial intelligence

Comments (1)
johnjnay · Oct 26, 2022

Related post: AGI misalignment x-risk may be lower due to an overlooked goal specification technology.

Crossposted from LessWrong.