Thanks for your reply. I agree with much of what you say, but I still think work to reduce the number of emission rights should at least be on the list of high-impact things to do, and, as far as I'm concerned, significantly higher than a few of the other paths mentioned here.
If you still wanted to do technology-specific work, I think offshore solar might also be impactful and neglected.
I worked in sustainable energy technology for ten years (wind energy, modeling, smart charging, activism) before moving into AI xrisk, and my favorite neglected topic in that space is carbon emission trading schemes (ETS).
ETSs such as those implemented by the EU, China, and others have a waterbed effect. The total amount of emissions is capped, and trading sets the price of those emissions for all sectors under the scheme (in the EU: electricity and heavy industry, expanding to other sectors). That means that:

- any emission reduction achieved within a covered sector simply frees up allowances, which other covered emitters will buy and use, leaving total emissions at the cap;
- the only way to reduce total emissions from covered sectors is to reduce the cap itself, i.e. the number of emission rights.
It's just crazy to think about all the good-hearted campaigning, awareness creation, hard engineering work, money, etc. that is being directed at decreasing emissions in sectors covered by an ETS. To the best of my understanding, as long as the ETS is working correctly, this effort is completely meaningless. At the same time, I knew of exactly one person in my country, the Netherlands, trying to reduce the number of ETS emission rights. That was the only person potentially achieving something actually useful for the climate.
If I wanted to do something neglected in the climate space, I would try to inform all those people currently wasting their energy that what they should really do is try to reduce the number of ETS emission rights and let the market figure out the rest. (Note that several of the paths recommended above, such as working on nuclear power, reducing industry emissions, and deep geothermal energy (depending on the use case), all fall under the ETS (at least in the EU), so improvements there would not benefit the climate.)
If a country or region has an ETS, successful emission reduction should really start (and basically stop) there. It's also quite a neglected area, so there's plenty of low-hanging fruit!
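To make the waterbed effect concrete, here is a minimal toy sketch (the numbers and the three-sector setup are made up for illustration; this shows the mechanism, not the actual EU ETS):

```python
# Toy cap-and-trade model illustrating the waterbed effect.
# Numbers are invented for illustration; this is not a model of the real EU ETS.

CAP = 1_000  # total emission allowances issued (e.g. MtCO2)

def total_emissions(demands, cap=CAP):
    """Covered actors emit up to their demand, but never more than the cap in total."""
    return min(sum(demands), cap)

# Baseline: three covered sectors would like to emit 500, 400 and 300 MtCO2.
baseline = [500, 400, 300]

# A campaign cuts the first sector's demand by 100 MtCO2.
after_campaign = [400, 400, 300]

print(total_emissions(baseline))        # 1000: demand exceeds the cap, so emissions = cap
print(total_emissions(after_campaign))  # still 1000: the freed allowances are bought and
                                        # used by the other covered sectors

# Only lowering the cap itself reduces total emissions:
print(total_emissions(baseline, cap=900))  # 900
```

As long as total demand for allowances exceeds the cap, reducing demand in one covered sector just moves emissions around; only reducing the cap changes the total.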
I don't know if everyone should drop everything else right now, but I do agree that raising awareness about AI xrisk should be a major cause area. That's why I quit my work on the energy transition about two years ago to found the Existential Risk Observatory, and this is what we've been doing since (resulting in about ten articles in leading Dutch newspapers, this one in TIME, perhaps the first comms research, a sold-out debate, and a passed parliamentary motion in the Netherlands).
Two significant things are missing from the list of what people can do to help:
1) Please, technical people, work on AI Pause regulation proposals! There is basically one paper now, possibly because everyone else thought a pause was too far outside the Overton window. Now we're discussing a pause anyway and I personally think it might be implemented at some point, but we don't have proper AI Pause regulation proposals, which is a really bad situation. Researchers (both policy and technical), please fix that, fix it publicly, and fix it soon!
2) You can start institutes or projects that aim to inform the societal debate about AI existential risk. We've done that and I would say it worked pretty well so far. Others could do the same thing. Funders should be able to choose from a range of AI xrisk communication projects to spend their money most effectively. This is currently really not the case.
Hi Vasco, thank you for taking the time to read our paper!
Although we did not specify this in the methodology section, we did address the "mean variation in likelihood" between countries and surveys throughout the research, for example in section 4.2.2. I hope this clarifies your question; this aspect should indeed have been specified better in the methodology section.
If you have any more questions, do not hesitate to ask.
I hope this article sends the signal that pausing the development of the largest AI models is good, that informing society about AGI xrisk is good, and that we should find a coordination method (regulation) to make sure we can effectively stop training models that are too capable.
What I think we should do now is:
1) Write good hardware regulation policy proposals that could reliably pause the development towards AGI.
2) Campaign publicly to get the best proposal implemented, first in the US and then internationally.
This could be a path to victory.
Hi Peter, thanks for your comment. We do think the conclusions we draw are robust given our sample size. Of course it depends on the signal: if there's a change in, e.g., awareness from 5% to 50%, a small sample should be plenty to show that. However, if you're trying to measure a difference of only one percentage point, your sample needs to be much larger. While we stand by our conclusions, we do think there would be significant value in others doing similar research, if possible with larger sample sizes.
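To illustrate that point with a rough two-proportion power calculation (a minimal sketch using statsmodels, with the conventional alpha = 0.05 and power = 0.8; the specific proportions are purely illustrative, not our study's design):

```python
# Rough power calculation: respondents per group needed to detect a large
# vs. a small shift in a proportion (e.g. awareness of a topic).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

analysis = NormalIndPower()
for p_before, p_after in [(0.05, 0.50),   # large shift: 5% -> 50%
                          (0.50, 0.51)]:  # small shift: one percentage point
    h = proportion_effectsize(p_before, p_after)  # Cohen's h
    n = analysis.solve_power(effect_size=h, alpha=0.05, power=0.8,
                             alternative='two-sided')
    print(f"{p_before:.0%} -> {p_after:.0%}: about {n:.0f} respondents per group")
```

Under those assumptions, the 5%-to-50% shift needs only a handful of respondents per group, while the one-percentage-point difference needs on the order of twenty thousand per group.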
Again, thanks for your comments; we will take your input into account.