tl;dr An indefinite AI pause is a somewhat plausible outcome and could be made more likely if EAs actively push for a generic pause. I think an indefinite pause proposal is substantially worse than a brief pause proposal, and would probably be net negative. I recommend instead considering alternative policies with greater effectiveness and fewer downsides.
Broadly speaking, there seem to be two types of moratoriums on technologies: (1) moratoriums that are quickly lifted, and (2) moratoriums that are later codified into law as indefinite bans.
In the first category, we find the voluntary 1974 moratorium on recombinant DNA research, the 2014 moratorium on gain of function research, and the FDA’s partial 2013 moratorium on genetic screening.
In the second category, we find the 1958 moratorium on conducting nuclear tests above the...
I'm sure this is a very unpopular take but I feel obliged to share it: I find the "pausing AI development is impossible" arguments extremely parallel to the "economic degrowth in rich countries is impossible" arguments, and the worst-case consequences for humanity of not doing them (and their probabilities) not too dissimilar. I find it baffling (and epistemically bad) how differently these debates are treated within EA.
Although parallel arguments can be given for and against both issues, EAs have disregarded the possibility of degrowing the economy in rich countries without engaging with the arguments. Note that degrowthers have good reasons to believe that continued economic growth would lead to ecological collapse --which could be considered an existential risk as, although it would clearly not lead to the...
This is a copy of the English version of a statement released yesterday by a group of academics, which can be seen at https://www.existentialriskstudies.org/statement/. The Spanish translation, by Mónica A. Ulloa Ruiz, will be posted on the forum soon.
This statement was drawn up by a group of researchers from a variety of institutions who attended the FHI and CSER Workshop on Pluralisms in Existential Risk Studies from 11th-14th May 2023. It conveys our conviction that the community concerned with existential risk needs to be pluralistic, containing a diversity of methods, approaches and perspectives that can foster difference and disagreement in a constructive manner. We recognise that the field has not yet achieved this necessary pluralism, and commit to bringing it about. A list of researchers...
Just a thought here. I am not sure you can literally read this as EA being overwhelmingly left, as it depends a lot on your viewpoint and on what you define as "left". EA exists both in the US and in Europe. Policy positions that are seen as left, and especially center-left, in the US would often be closer to the center or center-right in Europe.
Join us each Sunday to talk with Christians about Effective Altruism! (2 PM NYC time, 7 PM London time)
https://us02web.zoom.us/j/4161143480
All are welcome.
This week we discuss: Economic Growth as an EA Cause
This week we discuss a common theme in the development literature and also a common critique of EA: the importance of economic growth, which arguably dwarfs the significance of one-off interventions. Do we know what causes growth? How can we find out?
We meet first over Zoom: https://us02web.zoom.us/j/4161143480.
The first 15 minutes are introductions and announcements, followed by a 10-15 minute intro to the topic, followed by 30 minutes of breakout room discussions.
After about an hour, we may move over to Gather for friendlier discussion / hangout.
Note: This post contains personal opinions that don’t necessarily match the views of others at CAIP.
Advanced AI has the potential to cause an existential catastrophe. In this essay, I outline some policy ideas which could help mitigate this risk. Importantly, even though I focus on catastrophic risk here, there are many other reasons to ensure responsible AI development.
I am not advocating for a pause right now. If we had a pause, I think it would only be useful insofar as we used it to implement governance structures that mitigate risk after the pause ends.
This essay outlines the important elements I think a good governance structure would include: visibility into AI development, and brakes that the government could use to stop dangerous AIs from being built.
First, I’ll summarize some claims...
I'll be looking forward to hearing more about your work on whistleblowing! I've heard some promising takes about this direction. Strikes me as broadly good and currently neglected.
This post was originally intended as a follow-up to Josh's Are short timelines actually bad?, but given the AI pause debate, I've adapted it slightly and forced myself to get it into a readable form; it's still a draft.
In terms of the debate, this post is relevant because people often believe it is better if AGI development happens later rather than sooner (i.e., they hope AI timelines are long). I used to believe this, and now I think it's incredibly unclear and we should be very uncertain. Josh's post covers some arguments for why acceleration may be good: avoiding/delaying a race with China, smoothing out takeoff (reducing overhangs), and keeping the good guys in the lead. In this post I discuss two other arguments that might point toward acceleration: AGI development centralization...
Meta’s frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models – which will have more dangerous capabilities – we call on Meta to take responsible release seriously and stop irreversible proliferation. Join us for a peaceful protest at Meta’s office in San Francisco at 250 Howard St at 4pm PT.
RSVP on Facebook[1] or through this form.
Let’s send a message to Meta:
I agree; it seems like there is a pretty big knowledge gap here on protests, more than I had thought. I'll bump starting a doc like this up in priority.
in general I think it's much easier for people to do great research and actually figure stuff out when they're viscerally interested in the problems they're tackling, and excited about the process of doing that work.
Totally. But OP kinda made it sound like the fact that you found 2 depressing was evidence it was the wrong direction. I think advocacy could be fun and full of its own fascinating logistical and intellectual questions as well as lots of satisfying hands-on work.
This is a collection of resources that I recommend for how, and why, to pursue a career in animal advocacy - particularly if you think animal advocacy might not be for you.
At EAGx Australia 2023, I'm giving a lightning talk on why and how to pursue a career in animal advocacy. I thought I'd make an EA Forum post with links to everything I talk about, as it may help to have all of these resources in one place. The slides from my lightning talk are available here.
This is a great list of resources! One thing I'd add is that the effective animal advocacy space is pretty seriously funding constrained right now, and I don't see any signs that the situation is likely to change in the next few years. For that reason, I think it's worth calling out earning to give as a potentially uniquely promising path to impact. Animal Advocacy Careers had a good post on ETG for animals a few months ago.
Maybe so. But I can't really see mechanistic interpretability being solved to a sufficient degree to detect an AI playing the training game, in time to avert doom. Not without a long pause first at least!