AGI misalignment x-risk may be lower due to an overlooked goal specification technology

by johnjnay
Oct 21, 2022 · 1 min read

Tags: AI safety, Policy, AI alignment, AI governance, AI risk, Aligned AI, Artificial intelligence, Law, Ethics of artificial intelligence

Mentioned in: Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment

Comments (1)

johnjnay · Oct 26, 2022

Related post: Intent alignment should not be the goal for AGI x-risk reduction 

Crossposted from LessWrong.