I feel like I'm taking crazy pills.
It appears that many EAs don't believe we should pause AI capabilities development until it can be proven to carry less than ~0.1% chance of existential risk.
Put less confusingly, it appears many EAs believe we should allow capabilities development to continue despite the current X-risks.
This seems like an obviously terrible position to me.
What are the best reasons EA shouldn't be pushing for an indefinite pause on AI capabilities development?
Thanks for the comment, Zach.
1. Can you elaborate on what you mean by "Tractability"?
2. I'm less worried about multipolarity, both because the leading labs are so far ahead and because I have short timelines (~10 years). My guess is that if you also had short timelines, you might agree?
3. If we had moderate short-term success, my intuition is that we'd have found an effective strategy that could then be scaled. I worry that your reasoning amounts to "it needs to be an immediately perfect strategy or don't bother!"