You discuss three types of AI safety ventures:
- Infrastructure: Tooling, mentorship, training, or legal support for researchers.
- New AI Safety Organizations: New labs or fellowship programs.
- Advocacy Organizations: Raising awareness about the field.
Where would, for example, insurance for AI products fit into this? This is a for-profit idea that creates a natural business incentive to understand & research risks from AI products at a very granular level, and if it succeeds, it puts you in a position to influence the entire industry (e.g. "we will lower your premiums if you implement safety measure X").
I agree that if you restrict yourself to either supporting AIS researchers, launching field-building projects or research labs, or doing advocacy, then you will in fact not find good startup ideas, for the structural reasons you do a good job of listing in your post, as well as the fact that these are all things people are already doing.
METR is a very good AIS org. In addition to just being really solid and competent, a lot of why they succeeded was that they started doing something that few people were thinking about at the time. Everyone and their dog is launching an evals startup today, but the real value is finding ideas like METR before they are widespread. If the startup ideas you consider are all about doing the same thing that existing orgs do, you will miss out on the most important ones.
I do agree that the intersection of impact & profit & bootstrappability is small and hard to hit, and there's no law of nature that says something should definitely exist there. But if something does exist in that corner, it will be a novel type of thing.
(reposted from a Slack thread)
I'd like to add an asterisk. It is true that you can and should support things that seem good while they seem good and then retract support, or express support on the margin but not absolutely. But sometimes supporting things for a period has effects you can't easily take back. This is especially the case if (1) added marginal support summons some bigger version of the thing that, once in place, cannot be re-bottled, or (2) increased clout for that thing changes the culture significantly (I think cultural changes are very hard to reverse; culture generally doesn't go back, only moves on).
I think there are many cases where, before throwing their lot in with a political cause for instrumental reasons, people should've first paused to think more about whether this is the type of thing they'd like to see more of in general. Political movements also tend to have an enormous amount of inertia, and often end up very influenced by path-dependence and memetic fitness gradients.
I think it's worth trying hard to stick to strict epistemic norms. The main argument you bring against this is that it's more effective to be more permissive about bad epistemics. I doubt this. It seems to me that people overstate the track record of populist activism at solving complicated problems. If you're considering populist activism, I would think hard about where, how, and on what it has actually worked.
Consider environmentalism. It seems quite uncertain whether the environmentalist movement has been net positive (!). This is an insane admission to have to make, given that the science is fairly straightforward, environmentalism is clearly necessary, and the movement has had huge wins (e.g. massive shift in public opinion, pushing governments to make commitments, & many mundane environmental improvements in developed country cities over the past few decades). However, the environmentalist movement has repeatedly spent enormous efforts on directly harming their stated goals through things like opposing nuclear power and GMOs. These failures seem very directly related to bad epistemics.
In contrast, consider EA. It's hard to imagine a movement scoring much worse on the activist/populist metrics than EA. But EA seems quite likely to be net positive, and the loosely-construed EA community has gained a striking amount of power despite its structural disadvantages.
Or consider nuclear strategy. It seems a lot of influence was had by e.g. the staff of RAND and other sober-minded, highly-selected, epistemically-strong actors. Do you want more insiders at think-tanks and governments and companies, and more people writing thoughtful pieces that swing elite opinion, all working in a field widely seen as credible and serious? Or do you want more loud activists protesting on the streets?
I'm definitely not an expert here, but by thinking through what I understand about the few cases I can think of, the impression I get is that activism and protest have worked best to fix the wrongs of simple and widespread political oppression, but that on complex technical issues higher-bandwidth methods are usually how actual progress is made.
I think there are also some powerful but abstract points:
(A) Call this "Request For Researchers" (RFR). OpenPhil has tried a more general version of this in the form of the Century Fellowship, but they discontinued this. That in turn is a Thiel Fellowship clone, like several other programs (e.g. Magnificent Grants). The early years of the Thiel Fellowship show that this can work, but I think it's hard to do well, and it does not seem like OpenPhil wants to keep trying.
(B) I think it would be great for some people to get support for multiple years. PhDs work like this, and good research can be hard to do over a series of short few-month grants. But the long durations do make them pretty high-stakes bets, and you need to select hard not just on research skill but also on the character traits that mean people don't need external incentives.
(C) I think "agenda-agnostic" and "high quality" might be hard to combine. It seems like there are three main ways to select good people: rely on competence signals (e.g. lots of cited papers, works at a selective organisation), rely on more-or-less standardised tests (e.g. a typical programming interview, SATs), or rely on inside-view judgements of what's good in some domain. New researchers are hard to assess by the first, I don't think there's a cheap programming-interview-but-for-research-in-general that spots research talent at high rates, and therefore it seems you have to rely a bunch on the third. And this is very correlated with agendas; a researcher in domain X will be good at judging ideas in that domain, but less so in others.
The style of this that I'd find most promising is:
I think this would be better than a grab-bag of people selected according to credentials and generic competence, because I think an important part of the research talent selection process is the part where someone with good research taste endorses someone else's agenda takes on agenda-specific, inside-view grounds.
Yes, letting them explicitly set a distribution, especially as this was implicitly done anyway in the data analysis, would have been better. We'd want to normalise this somehow, either by trusting and/or checking that it's a plausible distribution (i.e. sums to 1), or by just letting them rate things on a scale of 1-10 and then getting an implied "distribution" from that.
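To make that second option concrete, here's a minimal sketch in Python; the function names and example ratings are purely illustrative (nothing from the actual survey), just to show what "implied distribution" and "plausible distribution" would mean in practice:

```python
def implied_distribution(ratings):
    """Turn 1-10 ratings over a set of options into an implied 'distribution'
    by normalising the ratings so they sum to 1."""
    if any(r < 1 or r > 10 for r in ratings):
        raise ValueError("ratings should be on a 1-10 scale")
    total = sum(ratings)
    return [r / total for r in ratings]

def is_plausible_distribution(weights, tol=1e-6):
    """Check that a respondent-supplied distribution is non-negative and sums to ~1."""
    return all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < tol

# Example: one respondent rates four options on a 1-10 scale.
print(implied_distribution([7, 3, 9, 1]))          # [0.35, 0.15, 0.45, 0.05]
print(is_plausible_distribution([0.5, 0.3, 0.2]))  # True
```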
I agree that this is confusing. Also note:
Interestingly, the increase in perceived comfort with entrepreneurial projects is larger for every org than that for research. Perhaps the (mostly young) fellows generally just get slightly more comfortable with every type of thing as they gain experience.
However, this is additional evidence that ERI programs are not increasing fellows' self-perceived comfort with research any more than they increase fellows' comfort with anything else. It would be interesting to see if mentors of fellows think the fellows have improved overall; it may be that changes in self-perception and actual skill don't correlate very much.
And also note that fellows consistently ranked the programs as providing on average slightly higher research skill gain than standard academic internships (average 5.7 on a 1-10 scale where 5 = standard academic internship skill gain; see the "perceived skills and skill changes" section).
I can think of many possible theories, including:
The main way to answer this seems to be getting a non-self-rated measure of research skill change.
For "virtual/intellectual hub", the central example in my mind was the EA Forum, and more generally the way in which there's a web of links (both literal hyperlinks and vaguer things) between the Forum, EA-relevant blogs, work put out by EA orgs, etc. Specifically in the sense that if you stumble across and properly engage with one bit of it, e.g. an EA blog post on wild animal suffering, then there's a high (I'd guess?) chance you'll soon see a lot of other stuff too, like being aware of centralised infrastructure like the Forum and 80k advising, and becoming aware of the central ideas like cause prio and x-risk. Therefore maybe the virtual/physical distinction was a bit misleading, and the real distinction is more like "Schelling point for intellectual output / ideas" vs "Schelling point for meeting people".
That being said, a point that comes to mind is that geographic dispersion is one of the most annoying things for real-world Schelling points and totally absent* if you do it virtually, so maybe there's some perspective like "don't think about EAGx Virtual as recreating an EAG but virtually, but rather as a chance to create a meeting-people-Schelling-point without the traditional constraints, and maybe this ends up looking more ambitious"?
(*minus timezones, but you can mail people melatonin beforehand :) )
I mentioned the danger of bringing in people mostly driven by personal gain (though very briefly). I think your point about niche weirdo groups finding some types of coordination and trust very easy is underrated. As another post points out, the transition to positive personal incentives to do EA stuff is a new thing that will cause some problems, and it's unclear what to do about it (though as that post also says, "EA purity" tests are probably a bad idea).
I think the maximally-ambitious view of the EA Schelling point is one that attracts anyone who fits into the intersection of altruistic, ambitious / quantitative (in the sense of caring about the quantity of good done and wanting to make that big), and talented/competent in relevant ways. I think hardcore STEM weirdness becoming a defining EA feature (rather than just a hard-to-avoid incidental feature of a lot of it) would prevent achieving this.
In general, the wider the net you want to cast, the harder it is to become a clear Schelling point, both for cultural reasons (subgroup cultures tend to be more specific than their purpose strictly implies, and broad cultures tend to split), and for capacity reasons (it's harder to get many people than few to hear about something, and there are simple practical constraints like big conferences costing more money and effort).
There is definitely an entire different post (or more) that could be written about how much of EA, and which parts, should be a Schelling-point or platform-type thing, comparing the pros and cons. In this post I don't even attempt to weigh that kind of choice.
I've now posted my entries on LessWrong:
I'd also like to really thank the judges for their feedback. It's a great luxury to be able to read many pages of thoughtful, probing questions about your work. I made several revisions & additions (and also split the entire thing into parts) in response to the feedback, which I think improved the finished sequence a lot, and I wish I had had the time to engage with it even more.