
Recently, I've heard a number of people I consider informed and credible posit that AIS may either no longer be neglected or may be nearing a point at which it is no longer neglected. Predictably, this was met with pushback of varying intensity, including from comparably informed and credible people.

I think it would be helpful to have a shared and public vision of what it would look like for AIS to no longer be neglected by general EA standards, or a metric of some sort for the degree of neglectedness or the specific components that are neglected. I think this is likely different from AIS being "solved" and is necessarily contextualized in the full breadth of the world's most pressing problems, including other x-risks and s-risks, and their relative neglectedness. This seems like an important bar to establish and hold community progress against. Maybe this already exists and I've missed it.

7 Answers

I think at the very least, I'd expect non-neglected AI safety to look like the global campaigns against climate change, or the US military-industrial complex:

  • something that governments spend hundreds of billions of dollars on
  • a vast ecosystem of nonprofits, think tanks, etc, with potentially tens of thousands of people just thinking about strategy, crafting laws, etc
  • legions more people working on specific technologies that will help with various niche aspects of the problem, like more efficient solar panels
  • lots of people who don't think of themselves as doing anything altruistic at all, but they are helping because they are employed (directly or indirectly) by the vast system that is dedicated to solving the problem
  • a very wide variety of approaches, including backup options (like geoengineering in the case of climate)
  • there is more work to do, but it seems like we are at least plausibly on-track to deal with the problem in an adequate way

Just think about the incredible amount of effort the United States, say, puts forward to try to prevent Chinese military dominance and deter things like an invasion of Taiwan (and worse things like nuclear war), and how those efforts get filtered into thousands and thousands of individual R&D projects, institutions, agreements, purchases, and so on. Certainly some of that effort is misdirected, and some projects/strategies are more impactful than others. Overall, I feel like this (or, similarly, the global effort to transition away from fossil fuels) is the benchmark for a pressing global problem being truly non-neglected in an objective sense.

People you hear in conversation might be using "non-neglected" to refer to a much higher bar, like "AI safety is no longer SO INCREDIBLY neglected that working on it is AUTOMATICALLY the most effective thing you can do, overruling other causes even if you have a big comparative advantage in some other promising area." This might be true, depending on your personal situation and aptitudes! I certainly hope that AI safety becomes less neglected over time, and I think that has slowly been happening. But in a societal / objective sense I think we still need a ton more work on AI safety.

The thing to watch is whether the media attention translates into action: more than a few hundred people working on the problem as such rather than getting distracted, and governments prioritizing it even when it conflicts with competing goals (like racing to the precipice). One might have thought Covid-19 meant that GCBR pandemics would stop being neglected, but that doesn't seem right. The Biden administration has asked for Congressional approval of a pretty good pandemic prevention bill (very similar to what EAs have suggested), but it has been rejected because it's still seen as a low priority. And engineered pandemics remain off the radar, with not much improvement as a result of a recent massive pandemic.

AIS has always had outsized media coverage relative to people actually doing something about it, and that may continue.

I feel like this does not really address the question?

A possible answer to Rockwell's question might be "If we have 15,000 scientists working full-time on AIS, then I consider AIS to no longer be neglected" (this is hypothetical, I do not endorse it, and it's also not as contextualized as Rockwell would want).

But maybe I am interpreting the question too literally, and you are making a reasonable guess about what Rockwell wants to hear.

Hi Carl,

Do you have any thoughts on how the expected impact of the few hundred people working most directly on AGI safety compares with that of the rest of the world (on mitigating the risks from advanced AI)? I suppose a random person from those few hundred will have a much greater (positive/negative) impact than a random person elsewhere, but it is not obvious to me that we can round the (positive/negative) impact of the rest of the world (on mitigating the risks from advanced AI) to zero, and increased awareness among the rest of the world will tend to increase i...

CarlShulman
I think there are whole categories of activity that are not being tried by the broader world, but that people focused on the problem attend to, with big impacts in both bio and AI. It has its own diminishing returns curve.

I think this is likely different from AIS being "solved" and is necessarily contextualized in the full breadth of the world's most pressing problems, including other x-risks and s-risks, and their relative neglectedness.

One thing to keep in mind is that nothing is static. Just as attention and resources towards AIS may ebb and flow in the coming years, so will attention to other highly pressing problems, like other x-risks and s-risks.

But let's ignore that for now and go through a few sketches that might turn into BOTECs. 

80,000 Hours currently estimates that tens of millions of quality-adjusted dollars are spent on AIS per year. Commenters estimate that about $300M was spent on AIS in 2022, while roughly $1 billion a year (quality-adjusted) is spent on reducing bio x-risks.

So to first order, if you think bio x-risk is 10x less important ∩ tractable than AIS, then at the point where roughly $1 billion in quality-adjusted dollars/year is spent on AIS, bio-risk is sufficiently relatively neglected that a moderate comparative advantage should push a generally talented person to work against bio-risks over AIS. Similarly, at 10x AIS importance ∩ tractability compared to biorisk, you should consider AIS and bio x-risk equally neglected relative to other factors at the $10B/year mark. To be clear, this is just saying that AIS is no longer "most neglected relative to its importance," which is a very high bar; even at $10B/year it'd arguably still be extremely neglected in absolute terms.[1]

Likewise, if you think bio x-risk is 100x less important ∩ tractable than AIS, the above numbers should be $10B/year and $100B/year, respectively.

(Some people think the difference is much more than 100x, but I don't personally find the arguments convincing after having looked into it non-trivially. That said, I don't have much access to private information, and no original insights.)
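As a minimal sketch of that parity arithmetic (my toy model, not from the figures' sources: it assumes marginal returns to a cause scale like importance divided by current spend, and uses the round numbers already quoted above):

```python
# Parity sketch: at what AIS spend level does bio x-risk look equally
# (or more) neglected relative to its importance?
# Crude model assumption: marginal returns ∝ importance / current_spend.

BIO_SPEND = 1e9  # ~$1B/year quality-adjusted on bio x-risk, per the figure above


def ais_parity_spend(importance_ratio: float, bio_spend: float = BIO_SPEND) -> float:
    """AIS spend/year at which AIS and bio x-risk look equally neglected
    relative to their importance, under the crude 1/spend model."""
    return importance_ratio * bio_spend


for ratio in (10, 100):
    parity = ais_parity_spend(ratio)
    print(f"AIS {ratio}x more important ∩ tractable: parity at ~${parity / 1e9:.0f}B/year")
    # The lower "comparative advantage" figures quoted above ($1B and $10B/year)
    # are a tenth of these parity points.
```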

However, as mentioned at the beginning, this assumes, likely incorrectly, that resources on bio x-risk are relatively static. To the extent this assumption is false, you'd need to adjust this estimate dynamically over time.

I mention bio x-risk because it's probably the most directly comparable problem that's important, neglected, and also relatively scalable. If we're thinking about decisions at the level of the individual rather than, say, the movement or large funders, so there's no scalability constraint, there are plausibly at least a few other options that are already both extremely important and more neglected than AI safety, such that it makes sense for nonzero people who are unusually suited for such work to work on them; e.g., here's a recent Forum argument for digital consciousness.

  1. ^

    Note that the world probably spends tens of billions of dollars a year on reducing climate risk, and similar or greater amounts on ice cream.

(Rushed comment, but I still thought it was worth posting.)

I'm not sure what "quality-adjusted" dollars means here, but in terms of raw dollars, I think net spend on AI safety is more like 200M/year rather than tens of millions.

Very rough estimates for 2022: 

From OP's website, it looks like

  • 15M to a bunch of academics 
  • 13M to something at MIT 
  • 10M to Redwood 
  • 10M to Constellation
  • 5M to CAIS
  • ~25M of other grants (e.g. CNAS, SERI MATS)

Adds up to like 65M 

EA Funds spends maybe 5M / year on AI Safety? I'd be very surprised if it was <1M / year.

FTX gave maybe another 100M of AI-safety-related grants, not including Anthropic (I estimate).

That gives 150M. 

I also think lab spending, such as Anthropic's, OpenAI's, and DeepMind's safety teams, should be counted here. I'd put this at like 50M/year, which gives a lower-bound total of 200M in 2022 (a lower bound because other people might be spending money as well).
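As a rough cross-check, here is a minimal tally of the components above (my sketch, using the loose figures from this comment, in millions of dollars per year):

```python
# Rough tally of the 2022 AI safety spend estimate sketched above.
# All figures are the loose estimates from this comment, in $M/year.
estimates_2022 = {
    "Open Philanthropy grants": 65,
    "EA Funds": 5,
    "FTX-related grants (excl. Anthropic)": 100,
    "Lab safety teams (Anthropic, OpenAI, DeepMind)": 50,
}

total = sum(estimates_2022.values())
print(f"Rough 2022 total: ~{total}M/year")  # ~220M; rounded down above to a ~200M lower bound
```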

I imagine that net spend in 2023 will be significantly lower than this, though; 2022 was unusually high, likely due to FTX things.

Of course, spending money does not equate to impact; it's pretty plausible that much of this money was spent very ineffectively.


 

Eli Rose
(+1 to this approach for estimating neglectedness; I think dollars spent is a pretty reasonable place to start, even though quality adjustments might change the picture a lot. I also think it's reasonable to look at number of people.)

Looks like the estimate in the 80k article is from 2020, though the callout in the biorisk article doesn't mention it. And yeah, AIS spending has really taken off since then.

I think the OP amount should be higher, because I think one should count X% of the spending on longtermist community-building as being AIS spending, for some X. [NB: I work on this team.] I downloaded the public OP grant database data for 2022 and put it here. For 2022, the sum of all grants tagged AIS and LTist community-building is ~$155m. I think a reasonable choice of X is between 50% and 100%, so taking 75% at a whim, that gives ~$115m for 2022.
Linch
Makes sense, so on the order of $300m total?
Linch
Thanks, this is helpful!

Epistemic status: I feel fairly confident about this but recognize I’m not putting in much effort to defend it and it can be easily misinterpreted.

I would probably just recommend not using the concept of neglectedness in this case, to be honest. The ITN framework is a nice heuristic (e.g., usually more neglected things benefit more from additional marginal contributions) but it is ultimately not very rigorous/logical except when contorted into a definitional equation (as many previous posts have explained). Importantly, in this case I think that focusing on neglectedness is likely to lead people astray, given that a change in neglectedness could equate to an increase in tractability.
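For reference, a sketch of that definitional version, roughly following the standard 80,000 Hours-style factorization in which the three ratios cancel to give marginal cost-effectiveness:

$$
\underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{marginal cost-effectiveness}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
$$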

This was asked here, though I'm not super sure I found the answer convincing.

We should expect it to pivot to a sort of climate change situation, such that the ITN argument looks like a request to adjust for the competence of the attention. 

This doesn't matter much if you're seasoned enough to have already invested capital into your AI theory of change, but it could matter a lot if you're at the top of the funnel / doing movement building, especially for open/uncommitted people who seem down with EA in a broad sense.

I'm confused about whether to expect higher salience / more competition to lead to a rat race like becoming an elected official or a rat race like becoming a surgeon. A rat race could mean massive Goodhart taxes or a "race to the bottom" (elections), or it could mean quality assurance where "meritocracy is actually working as intended" (surgeons).

In my extreme case, I think it looks something like: we've got really promising solutions in the works for alignment, and they will arrive in time to actually be implemented.

Or perhaps, in the case where solutions aren't forthcoming, we have some really robust structures (international governments and big AI labs coordinating, or something) to avoid developing AGI.

I think we should keep "neglectedness" referring to the amount of resources invested in the problem, not P(success). The latter seems a better fit for the "tractability" bucket.
