Effective Altruism Forum
AGI misalignment x-risk may be lower due to an overlooked goal specification technology
by johnjnay · Oct 21 2022 · 1 min read
Tags: AI safety, Policy, AI alignment, AI governance, AI risk, Aligned AI, Artificial intelligence, Law, Ethics of artificial intelligence, Frontpage
Mentioned in: Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment
Comments (1)
johnjnay · Oct 26 2022
Related post: Intent alignment should not be the goal for AGI x-risk reduction