I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that's like: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that working at 80k could be even better for AIS than taking the direct job?
In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:
* Working in AIS directly also promotes the growth of AIS. It would be a mistake to consider a job's second-order effects only when the lack of first-order effects forces you to.
* OK, but focusing on org growth full-time surely does more for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people and move them into roles at the org, as part of cultivating a whole talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be the better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least two people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"). It's only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that it's necessarily the case that most people in the scheme will recruit exactly zero people ever (see the short arithmetic sketch after this list).
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
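To spell out the population-level arithmetic behind that intuition, here's a minimal sketch: in any recruitment tree with $N$ members, every member except the founder was recruited by exactly one person, so

$$\text{total recruits} = N - 1 \quad\Rightarrow\quad \text{average recruits per member} = \frac{N-1}{N} < 1.$$

Every member who recruits two or more must therefore be offset by members who recruit none, and once the scheme stops growing, the bottom layer (typically the majority) has recruited nobody at all.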
I'm the co-founder and one of the main organizers of EA Purdue. Last fall, we got four signups for our intro seminar; this fall, we got around fifty. Here's what's changed over the last year:
* We got officially registered with our university. Last year, we were an unregistered student organization, and as a result lacked access to opportunities like the club fair and were not listed on the official Purdue extracurriculars website. After going through the registration process, we were able to take advantage of these opportunities.
* We tabled at club fairs. Last year, we did not attend club fairs, since we weren't yet eligible for them. This year, we were eligible and attended, and we added around 100 people to our mailing list and GroupMe. This is probably the most directly impactful change we made.
* We had a seminar sign-up QR code at the club fairs. This item actually changed between the club fairs, since we were a bit slow to get the seminar sign-up form created. A majority of our sign-ups came from the one club fair where we had the QR code, despite the other club fair being ~10-50x larger.
* We held our callout meeting earlier. Last year, I delayed the first intro talk meeting until the middle of the third week of school, long after most clubs had finished their callouts. This led to around 10 people showing up, which was still more than I expected, but not as many as I had hoped. This year, we held the callout early in the second week of school and ended up getting around 30-35 attendees. We also gave those attendees time to fill out the seminar sign-up form at the callout, which accounted for most of the rest of our sign-ups.
* We brought food to the callout. People are more likely to attend meetings at universities if there is food, especially if they're busy and can skip a long dining court line by listening to your intro talk. I highly recommend bringing food to your regular meetings too - attendance at our general meetings doubled last year after I s
How tractable is improving (moral) philosophy education in high schools?
tldr: Do high schools still neglect ethics / moral philosophy in their curricula? Mine did (year 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched / tried before?
The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: moral philosophy. 90 min/week of "Christian Religion" was the default for everyone, in which we spent most of the time interpreting stories from the Bible, most of which felt pretty irrelevant to the present to me. This was in 2012 in Germany, a country with more atheists than Christians as of 2023, and even in 2012 my best guess is that <20% of my classmates were practicing a religion.
Only in grade 10 did we get the option to switch to secular Ethics classes instead, which fewer than 10% of students did (Religion was considered less work).
Ethics class quickly became one of my favorite classes. For the first time in my life I had a regular group of people equally interested in discussing vegetarianism and other such questions (almost everyone in my school ate meat, and vegetarians were sometimes made fun of). Still, the curriculum wasn't great: we spent too much time on ancient Greek philosophers and very little time discussing moral philosophy topics relevant to the present.
How have your experiences been in high school? I'm especially curious about more recent experiences.
Are there tractable ways to improve the situation? Has anyone researched this?
1) Could we get ethics classes in the mandatory/default curriculum in more schools? Which countries or states seem best for that? In Germany, education is state-regulated - which German state might be most open to this? Hamburg? Berlin?
2) Is there a shortage in ethics teachers (compared to religion teachers)? Can we
David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he's been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn't put much thought into what to do with his fortune.
Are there concerted efforts in the EA community to get these people on board? Like, is there a google doc with a six degrees of separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn't worth pursuing or is likely to fail? Have people tried? I'm a bit of an outsider here, so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take!
https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe. That version is the targeting of really young people (for this quicktake, I will say anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I think I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we would not have to do, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and there is lots of time for them to lose interest or get lost along the way.
Interestingly, this stands in contrast to my personal experience—I found EA when I was in my early 20s and would have benefited significantly from hearing about it in my teenage years.
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing.
It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.
I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on what choices repel which kinds of people, and whether that's worth it.
EDIT: This is not a solemn vow forswearing EA forever. If things change I would be more than happy to join again.
EDIT 2: For those wondering what this quick-take is reacting to, here's a good summary by David Thorstad.
I had written up what I learned as a Manifund micrograntor a few months ago, but have never gotten around to polishing that for publication. Still, I think those reactions could be useful for people in the EA Community Choice program now. You've got the same basic pattern of a bunch of inexperienced grantmakers with a few hundred bucks to spend each and ~40-50 projects to look at quickly. I'm going to post those without much editing, since the program is fairly short. A few points are specific to the types of proposals that were in the microgranting experiment (which came from the ACX program).
General Feedback for Grant Applicants [from ACX Microgrants Experience]
Caution: This feedback is based on a single micrograntor's experience. It may be much less applicable to other contexts -- e.g., those involving larger grantors, grantors who do not need to evaluate a large number of proposals in a limited amount of time, or grantors who are in a position to fund a significant percentage of grants reviewed. I had pre-committed to myself that I would look at every single proposal unless the title convinced me that it was way too technical for me to understand. This probably affected my experience, and was done more for educational / information value reasons than anything else.
* If you have a longer proposal, please start with an executive summary, limited to ~ 300 words. You may get only 2-3 minutes on an initial screen, maybe even less.
* After getting a sense of the basic contours of the proposal, I found myself with a decent sense of where the weaker points probably were and wanted to see if these were clear dealbreakers in an efficient manner. Please be sure to red-team your proposal and address the weak points!
* Use shorter paragraphs with titles or other clear, skimmable signals. As per above, I need to be able to quickly find your discussion on specific points.
* One recurrent weakness involved an unclear theory of impact that I had to infer from the propo
GET AMBITIOUS SLOWLY
Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems, but you lack the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
* try anyway and fail badly, probably too badly for it to even be an educational failure.
* fake it, probably without knowing they're doing so
* learned helplessness, possible systemic depression
* be heading towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* discover more skills than they knew they had; feel great, accomplish great things, learn a lot.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is ambition snowball or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you learn to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures.
I claim EA's emphasis on doing The Most Important Thing pushed people into premature ambition, and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.
What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:
* stick to problems where failure will at least be informative. If you can't track reality well eno