I think this is likely different from AIS being "solved," and it must be contextualized within the full breadth of the world's most pressing problems, including other x-risks and s-risks, and their relative neglectedness.
One thing to keep in mind is that nothing is static. Just as attention and resources towards AIS may ebb and flow in the coming years, so will attention to other highly pressing problems, like other x-risks and s-risks.
But let's ignore that for now and go through a few sketches that might turn into BOTECs (back-of-the-envelope calculations).
80,000 Hours currently estimates that tens of millions of quality-adjusted dollars per year are spent on AIS. Commenters estimate that about $300M/year was spent on AIS in 2022, while roughly $1 billion/year (quality-adjusted) is spent on reducing bio x-risks.
So to first order, if you think bio x-risk is 10x less important ∩ tractable than AIS, then at the point where roughly $1 billion in quality-adjusted dollars/year is spent on AIS, bio x-risk is sufficiently relatively neglected that a moderate comparative advantage should push a generally talented person to work on bio x-risk over AIS. Similarly, if AIS is 10x more important ∩ tractable than bio x-risk, you should consider the two equally neglected relative to other factors at the $10B/year mark. To be clear, this only says that AIS is no longer "most neglected relative to its importance," which is a very high bar; even at $10B/year, AIS would arguably still be extremely neglected in absolute terms.
Likewise, if you think bio x-risk is 100x less important ∩ tractable than AIS, the above numbers should be $10B/year and $100B/year, respectively.
(Some people think the difference is much more than 100x, but I don't personally find those arguments convincing, having looked into this non-trivially. That said, I don't have much access to private information, and I have no original insights.)
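To make the scaling explicit, here's a minimal sketch of this BOTEC in Python. The $1B/year figure for bio x-risk and the 10x/100x importance ∩ tractability ratios come from the estimates above; everything else is my framing. In particular, it assumes importance-adjusted neglectedness scales linearly with quality-adjusted dollars spent, so the parity point is just bio spending times the importance ratio.

```python
def ais_parity_spending(bio_spending_usd_per_year: float,
                        ais_importance_ratio: float) -> float:
    """Spending level at which AIS stops being more neglected (relative to
    importance * tractability) than bio x-risk, under the simple assumption
    that neglectedness scales linearly with quality-adjusted dollars."""
    return bio_spending_usd_per_year * ais_importance_ratio

# ~$1B/year quality-adjusted spent on reducing bio x-risks (estimate above)
BIO_SPENDING = 1e9

for ratio in (10, 100):
    parity = ais_parity_spending(BIO_SPENDING, ratio)
    print(f"At {ratio}x importance/tractability, parity at "
          f"${parity / 1e9:.0f}B/year spent on AIS")
```

This reproduces the $10B/year and $100B/year figures above; plugging in your own ratio gives your own parity point.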
However, as mentioned at the beginning, this assumes, likely incorrectly, that resources devoted to bio x-risk are relatively static. To the extent this assumption is false, you'd need to dynamically adjust this estimate over time.
I mention bio x-risk because it's probably the most directly comparable problem: important, neglected, and also relatively scalable. If we're instead thinking about decisions at the level of the individual, rather than, say, the movement or large funders, so there's no scalability constraint, there are plausibly at least a few other options that are already both extremely important and more neglected than AI safety, such that it makes sense for a nonzero number of people who are unusually suited for such work to pursue them; e.g., here's a recent Forum argument for digital consciousness.