Currently doing local AI safety movement building in Australia and NZ.
It would be useful to have a term along the lines of "outcome lock-in" to describe situations where the future is out of human hands.
That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.
Nonetheless, this seems like a useful concept for thinking about what the future might look like.
> And because microaggression and internalised racism (MIR) may come across as “culture war” loaded terms (despite them also being academic terms)
You seem to be assuming that just because something is an academic term, it isn't culture-war loaded, despite the fact that some of these fields don't actually see objectivity as having any value.
(I actually upvoted this post because it is very well written and I appreciate you taking all of this time to define a key term).
I think it’s very valuable for you to state what the proposition would mean in concrete terms.
On the other hand, I think it’s quite reasonable for posts not to spend time engaging with the question of whether “there will be vast numbers of AIs that are smarter than us”.
AI safety is already one of the main cause areas here and there’s been plenty of discussion about these kinds of points already.
If someone has something new to say on that topic, it’d be great for them to share it; otherwise, it makes sense for people to focus on the parts of the topic that haven’t already been covered in the existing discussions of AI safety.
I’m pretty bullish on having these kinds of debates. While EA is doing well at having an impact in the world, the forum has started to feel intellectually stagnant in some ways. These debates provide a way to move the community forward intellectually, something I’ve felt has been missing for a while.
You wrote that governance is more important than technical research. Have you considered technical work that supports governance? The AI Safety Fundamentals course has a week on this.
In any case, working in AI or AI safety would increase your credibility for any activism that you decide to engage in.
One suggestion: if you're planning to make a submission, I recommend taking a little extra time to comment on this thread with your suggestions for submissions. I'll also remind readers that there's an option to be notified of new comments if you click on the three-dots menu.