I feel like this does not really address the question?
A possible answer to Rockwell's question might be "If we have 15000 scientists working full-time on AIS, then I consider AIS to no longer be neglected" (this is hypothetical and I do not endorse it; it's also not as contextualized as Rockwell would want).
But maybe I am interpreting the question too literally and you are making a reasonable guess about what Rockwell wants to hear.
I think most probabilistic estimates of this kind are subjective probability estimates; there are usually no complicated mathematical models behind them.
Some people do build models, but the inputs to those models are still subjective probability estimates. The math in these models is typically not complicated either, often just multiplying several probabilities together (which is, in my opinion, not a good class of models for this kind of problem).
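To illustrate, a conjunctive model of this kind typically looks something like the following (the step names here are made up for illustration, not taken from any particular model):

$$P(\text{extinction}) = P(\text{AGI is built}) \cdot P(\text{AGI is misaligned} \mid \text{built}) \cdot P(\text{extinction} \mid \text{misaligned})$$

Since every factor is at most 1, adding more steps can only push the product down, which is arguably one reason conjunctive decompositions tend to produce low headline numbers even when the actual risk runs through many different paths.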
My guess would be that even some of the people who build models report a different probability estimate for human extinction than the one their model spits out, because they realize that their models have flaws and try to correct for that.
I'd like to challenge the assumption that EA is the “largest and smartest” expert group on “Might AI lead to extinction?”. I don't think this is true?
You seem to imply that there is another expert group which discusses the question of extinction from AI deeply (and you consider the possibility that the other group is in some sense "better" at answering the question).
Who are these people?
Can you give more details on what "distributing resources over a wider group of people" means to you? Are you arguing that mentors should spend much less time per person and instead mentor three times as many people? Are you arguing that researchers should get half as much money so that twice as many researchers can be funded?
A plausible hypothesis is that ordinary methods of distributing resources over a wider group of people don't unlock that many additional researchers. If the available infrastructure can only support a limited number of people, then it is not very surprising to me that there is a focus on so-called 'top talent': all else being equal, you would rather have the more competent people. And there is probably no central EA decision that favors a small number of researchers over a large number of researchers.
Some side remark:
For example in AI safety there are a few very well funded but extremely competitive programmes for graduates who want to do research in the field. Naturally their output is then limited to relatively small groups of people.
Naming the specific programmes might get you better answers here: people who want to answer would have to speculate less, and if you are lucky the organizers of those programmes might be inclined to respond.
An opposing trend seems to have gained traction in AI capability research, as e.g. the leaked "We have no moat" memo argued, where a lot of the output comes from the sheer mass of people working on the problem with a breadth-first approach.
They have far more resources than we do.
If we assume that no AGI system develops its own goal(s), which I assign a probability of 0.2, then it is also necessary to consider whether any AGI system’s programmed goal(s) still leads to an EC. I assign this a probability of 0.04 because the human(s) who trained the AGI might not have thought out in enough detail what the consequences of programming the AGI with a specific goal or set of goals would be. The paperclip maximizer scenario is a classic example of this. Another scenario is if a nefarious human (or multiple nefarious humans) purposely creates and releases an AGI system with a destructive goal (or goals) that no human can control (including the person or people who released the AGI system) after it is discharged into the world.
I only see arguments for the 0.04 case, but not for the 0.96 case. Do you have any concrete goals in mind that would not result in an EC?
If I understand correctly, you claim to be 0.96 confident not only that outer alignment will be solved, but also that every AGI will use some kind of outer alignment solution and that no one will build an AGI with inadequate alignment. What makes you so confident?
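To spell out how I am reading your numbers (please correct me if this is not what you meant):

$$P(\text{no EC from programmed goals} \mid \text{no AGI develops its own goals}) = 1 - 0.04 = 0.96,$$

and the contribution of this branch to the overall risk would be $0.2 \times 0.04 = 0.008$.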
There is the Carlsmith model: https://arxiv.org/abs/2206.13353
It is not very complicated though, and it is conjunctive (which does not feel like a good fit for AI X-risk). I doubt that Tyler Cowen would like it.
I think protesting/blocking fossil fuel companies is different and less of a unilateralist's curse situation. For example, there is wide elite/expert agreement that more CO2 in the atmosphere is bad; we do not have that for the extinction of humanity due to AI. There have also been many protests against fossil fuels already, so additional protest is less likely to cause serious downsides or to set the tone for future attempts to solve the problem. The nature of the problem is also different: incompetent political solutions to global warming often still help reduce CO2 somewhat, but the same might not be true for AI Notkilleveryoneism.
I am not sure whether "direct action" (imo a terrible name btw if the theory of change is indirect) against AI would be a good idea, but I currently lean against it.
A reason that is missing from the "contra" list: You could stay at a higher salary and donate the difference to a more cost-effective org than the one you work for.
I would expect that most people who work in EA do not work for the org that they consider to have the highest marginal impact for an additional dollar (although certainly some do).
Accepting a lower salary can be more tax-efficient than donating if the donation is not tax-deductible. But if you think that cost-effectiveness follows a power law, then it's quite possible that there is an org that is more than twice as cost-effective as your current employer.
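To make the trade-off concrete, here is a rough sketch with assumed numbers: a marginal tax rate $t$, a non-deductible donation, and another org that is $k$ times as cost-effective as your employer. A salary cut of $\Delta$ gives your employer $\Delta$ at a personal after-tax cost of $(1-t)\Delta$, while keeping the salary and donating that same $(1-t)\Delta$ elsewhere is worth $k(1-t)\Delta$ in employer-equivalent terms. So donating wins when

$$k(1-t) > 1,$$

e.g. when $k > 2$ at $t = 0.5$, which roughly matches the "more than twice as cost-effective" threshold above.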