Heya, after having finished my bachelor’s in Biomedical Engineering at TU/e I’m now working on various EA-related projects in the Netherlands. The main one for this academic year (2022-2023) will be EA Eindhoven, for which my co-organizer and I are part of CEA's University Group Accelerator Program. I’m a generalist at heart who’s fundamentally driven by a desire to understand how this endlessly fascinating and complex world of ours works, and how we can use this understanding to make the world we pass on a better place.
Advice and support on how we can increase the number of people in continental Europe working on the most pressing problems. If counterfactual impact is what we care about, a lot of potential presumably lies in building up this capacity, rather than redirecting everyone to existing opportunities in the US and UK.
Setting up new university groups and brainstorming about what kinds of ambitious projects to embark on once you have a stable group going.
Somewhat sceptical of this, mainly because of the first 2 counterarguments mentioned:
- In my view, a surprisingly large fraction of the people now doing valuable x-risk work originally came in through EA (though a lot of people have also come in via the rationality community), more than I would have expected even given the historically strong emphasis on EA recruiting.
- We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.
Focusing on the underlying search for what is most impactful seems a lot more robust than focusing on the main opportunity this search currently nets. An EA/longtermist is likely to take x-risk seriously as long as it is indeed a top priority, but you can't flip this. Without an impact-driven meta framework, the ability of the people working on the world's most pressing problems to update on what is most impactful to work on (arguably the core of what makes EA 'work') would decline.
An "x-risk first" frame could quickly become more culty/dogmatic and less epistemically rigorous, especially if it's paired with a lower resolution understanding of the arguments and assumptions for taking x-risk reduction (especially) seriously, less comparison with and dialogue between different cause areas, and less of a drive for keeping your eyes and ears open for impactful opportunities outside of the thing you're currently working on, all of which seems hard to avoid.
It definitely makes sense to give x-risk reduction a prominent place in EA/longtermist outreach, and I think it's important to emphasize that you don't need to "buy into EA" to take a cause area seriously and contribute to it. We should probably also build more bridges to communities that form natural allies. But I think this can (and should) be done while maintaining strong reasoning transparency about what we actually care about and how x-risk reduction fits in our chain of reasoning. A fundamental shift in framing seems quite rash.
EDIT:
> More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.
Agreed that more experimentation would be welcome though!
I really want to create an environment in my EA groups that's high in what is labelled "psychological safety" here, but it's hard to convey this to others, especially in larger groups. The best I've got is to just explicitly state the kind of environment I would like to create, but I feel like there's more I could do. Any suggestions?
What do the recent developments mean for AI safety career paths? I'm in the process of shifting my career plans toward 'trying to robustly set myself up for meaningfully contributing to making transformative AI go well' (whatever that means), but everything is developing so rapidly that I'm not sure in what direction to update my plans, let alone develop a solid inside view on what the AI(S) ecosystem will look like and what kind of skillset and experience will be most needed several years down the line.
I'm mainly looking into governance and field building (which I'm already involved in) over technical alignment research, though I want to ask this question in a more general sense since I'm guessing it would be helpful for others as well.
The Existential Risk Observatory aims to inform the public about existential risks and recently published this, so maybe consider getting in touch with them.
If that's your goal, I think you should try harder to understand why core org EAs currently don't agree with your suggestions, and try to address their cruxes. For this ToC, "upvotes on the EA Forum" is a useless metric--all you should care about is persuading a few people who have already thought about this all a lot. I don't think that your post here is very well optimized for this ToC.
... I think the arguments it makes are weak (and I've been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more.)
If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn't there more reasoning transparency on this? Besides being a good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes.
As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.
Does anyone know of good resources for getting better at forecasting, rather than just practicing randomly? I'm really looking forward to this course getting released, but it's still in the works.
Here's the EAG London talk that Toby gave on this topic (maybe link it in the post?).