Research Engineering Intern at the Center for AI Safety. Helping to write the AI Safety Newsletter. Studying CS and Economics at the University of Southern California, and running an AI safety club there.
I’d be curious about what happens after 10. How long do biological humans survive? How long can they be said to be “in control” of AI systems, such that some group of humans could change the direction of civilization if they wanted to? How likely is deliberate misuse of AI to cause an existential catastrophe, relative to a slow loss of control over society? What are the most positive visions of the future, and which are the most negative?
Yep, I think those kinds of interventions make a lot of sense. The natural selection paper discusses several of them in sections 4.2 and 4.3. The Turing Trap also makes an interesting observation about US tax law: replacing a worker with AI would typically reduce a company's tax burden. Bill de Blasio, Mark Cuban, and Bill Gates have all spoken in favor of a robot tax to fix that imbalance.
That’s a good point! Joe Carlsmith makes a similar step-by-step argument, but he includes a specific step about whether the existence of rogue AI would lead to catastrophic harm. It would have been nice to see that step in Bengio’s argument.
Carlsmith: https://arxiv.org/abs/2206.13353
Hey, tough choice! Personally I’d lean towards PPE. Primarily that’s driven by the high opportunity cost of another year in school. Which major you choose seems less important than finding something you love and doing good work in it a year sooner.
Two other factors: First, you can learn AI outside of the classroom fairly well, especially since you can already program. I’m an economics major who’s taken a few classes in CS and done a lot of self-study, and that’s been enough to work on some AI research projects. Second, policy is plausibly more important for AI safety than technical research. There’s been a lot of government focus on slowing down AI progress lately, while technical safety research seems like it will need more time to prepare for advanced AI. The fact that you won’t graduate for a few years mitigates this a bit — maybe priorities will have changed by the time you graduate.
What would you do during a year off? Is it studying PPE for one year? I think a lot of the value of education comes from signaling, so without a diploma to show for it this year of PPE might not be worth much. If there’s a job or scholarship or something, that might be more compelling. Some people would suggest self-study, but I’ve spent time working on my own projects at home, and personally I found it much less motivating and educational than being in school or working.
Those are just my quick impressions; don’t lean too much on anyone’s advice (including 80K’s!). You have to understand the motivations behind a plan for yourself in order to execute it well. Good luck, and I’m always happy to chat.
Very interesting article. Some forecasts of AI timelines (like BioAnchors) are premised on compute efficiency continuing to progress as it has for the last several decades. Perhaps your arguments are less forceful against 5-10 year timelines to AGI, but they're still worth exploring.
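To illustrate why the "trends continue" premise carries so much weight, here's a toy compounding calculation in Python. The doubling times are illustrative assumptions of mine, not figures from the article or from BioAnchors:

```python
# Toy compounding: how "effective compute" grows if hardware price-performance
# and algorithmic efficiency both stay on trend. Both doubling times below are
# illustrative assumptions, not figures from the article or from BioAnchors.
HARDWARE_DOUBLING_YEARS = 2.0   # Moore's-law-style hardware trend (assumed)
ALGORITHM_DOUBLING_YEARS = 1.5  # algorithmic-efficiency trend (assumed)

def effective_compute_multiplier(years: float) -> float:
    hardware = 2 ** (years / HARDWARE_DOUBLING_YEARS)
    algorithms = 2 ** (years / ALGORITHM_DOUBLING_YEARS)
    return hardware * algorithms

for years in (5, 10):
    print(f"{years} years -> ~{effective_compute_multiplier(years):,.0f}x effective compute")
```

Even modest-sounding doubling times compound into orders of magnitude over a decade, which is why headwinds to these trends matter so much for timelines.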
I'm skeptical of some of the headwinds you've identified. Let me go through my understanding of the various drivers of performance, and I'd be curious to hear how you think of each of these.
Parallelization has driven much of the recent progress in effective compute budgets. Three factors enable parallelization:
Overall, I don't see strong evidence that any of these factors are hitting hard barriers. Instead, the most relevant trend I see over the next 5 years is the rise of AI programming assistants, which could significantly accelerate progress in kernel optimization and algorithms.
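For concreteness, here's a minimal sketch of the data-parallel pattern behind most of these effective-compute gains, using JAX's pmap. The model, names, and shapes are mine, purely for illustration:

```python
import functools

import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    # Simple least-squares loss on one shard of the batch.
    return jnp.mean((x @ w - y) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def parallel_grad(w, x, y):
    # Each device differentiates the loss on its own shard of data...
    g = jax.grad(loss_fn)(w, x, y)
    # ...then gradients are averaged across all devices.
    return jax.lax.pmean(g, axis_name="devices")

n = jax.local_device_count()  # 1 on a plain CPU machine, >1 on a GPU/TPU pod
w = jnp.zeros((n, 4))                                     # replicated parameters
x = jax.random.normal(jax.random.PRNGKey(0), (n, 8, 4))   # 8 examples per device
y = jnp.ones((n, 8))
grads = parallel_grad(w, x, y)  # identical averaged gradients on every device
print(grads.shape)              # (n, 4)
```

Scaling this pattern across more devices is what turns bigger clusters into bigger effective batch sizes, so long as the averaged gradients remain useful at that scale.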
I'd highlight two other factors affecting effective compute budgets:
Overall, I used to argue that AI progress would soon slow. But I've lost a lot of Bayes points to folks like Elon Musk, Sam Altman, and Daniel Kokotajlo. A slowdown is entirely possible, perhaps even likely. But it's a live possibility that the world could be transformed by human-level AI in a span of only a few years. Safety efforts should address the full range of possible outcomes, but short-timeline scenarios are the most dangerous and most neglected, so that's where I'm focusing most of my attention right now.
Hey, great opportunity! It looks like a lot of these roles are in-person. Do you know whether any substantial number of them are remote?