I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since this summer.
EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:
I agree with so much here.
Here are my responses to the question you raised: "So why do I feel inclined to double down on effective altruism rather than move on to other endeavours?"
Hi, Pepijn here, co-founder of the Tien Procent Club (Dutch org that promotes effective giving) and a highly irregular reader of this forum.
Here's my quick take:
I think there's an opportunity to make much better EA videos simply by interviewing founders of the most effective non-profits. The medium is the message, and video lends itself perfectly to conveying emotion. It feels like there's a lot of room left to produce entertaining, exciting, high-information-density videos on effective non-profits.
Explanation:
I know there are some explainers ab...
I just sent out the Forum digest and I thought there was a higher number of underrated (and slightly unusual) posts this week, so I'm re-sharing some of them here:
Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."
I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint. "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue."
I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe. That version is the targeting of really young people (for this quicktake, I will say anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I think I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we would not have to do, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and ...
I think the possibility that outreach to younger age groups[1] might be net negative is relatively neglected. That said, the two possible reasons suggested here didn't strike me as particularly conclusive.
The main reasons why I'm somewhat wary of outreach to younger ages (though there are certainly many considerations on both sides):
What is your AI Capabilities Red Line Personal Statement? It should read something like "when AI can do X in Y way, then I think we should be extremely worried / advocate for a Pause*".
I think it would be valuable if people started doing this; we can't feel when we're on an exponential, so it's likely that powerful AI will creep up on us.
@Greg_Colbourn just posted this and I have an intuition that people are going to read it and say "while it can do Y it still can't do X"
*in the case that you think a Pause is ever optimal.
Some of my thoughts on funding.
It's giving season and I want to finally get around to publishing some of my thoughts and experiences around funding. I haven't written anything yet because I feel like I am mostly just revisiting painful experiences and will end up writing some angry rant. I have ideas for how things could be better, so hopefully this can lead to positive change, not just more complaining. All my experiences are in AI Safety.
On Timing: Certainty is more important than speed. The total decision time is less important than the overdue time. Expe...
Equal Hands is an experiment in democratizing effective giving. Donors simulate pooling their resources together and vote on how to distribute them across cause areas. All votes count equally, independent of someone's ability to give.
You can learn more about it here, and sign up to learn more or join here. If you sign up before December 16th, you can participate in our current round. As of December 7th, 2024 at 11:00pm Eastern time, 12 donors have pledged $2,915, meaning the marginal $25 donor will move ~$226 in expect...
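(As a rough sanity check on that figure, assuming the ~$226 simply reflects an equal split of the enlarged pool among the 13 voters; this is my inference, not a description of the official mechanism:)

```latex
\frac{\$2{,}915 + \$25}{12 + 1} \approx \$226
```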
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job directly?
In that form, the argument is naive and implausible...
Matthew Yglesias wrote a Giving Tuesday piece about GiveDirectly that makes a compelling case for effective giving to a general audience. The article addresses why one should consider directing charity to the Global South, what makes cash transfers an appealing intervention, and how this approach can be reconciled with the desire to volunteer locally.
https://www.slowboring.com/p/you-can-help-the-poorest-people-in
Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:
The very well-written Notes on Effective Altruism pulls together some thoughts I've had over the years, and makes me think we should potentially drop the "how to do good in the best way possible" framing when introducing EA in favour of the "be more effective when trying to help others" framing. This honestly seems straightforwardly good to me from a number of different angles, and I think we should seriously consider changing our overall branding to this as a tagline instead.
But am I missing something here? Is there a reason the latter is worse than I think? Or some hidden benefits to the former that I'm not weighing?
Yeah, I think the tension here is between phrasing the motivation in a way that can appeal to everyone, which waters it down a bit, and stating it at full strength and accepting that you're only ever going to be speaking to a small portion of people.
Taking only the "most effective" path towards doing good, when that looks like working on top causes or donating a significant amount, just isn't open to 90% or more of the population. Is it really wise to focus a movement so narrowly that you rule out most people in the world being able to find a place in it?
Perhaps a compromise is something like the below, where "do more good" is the motto, but with an emphasis on how big that difference can be.
After following the Ukraine war closely for almost three years, I naturally also watch China's potential for military expansionism. Whereas past leaders of China talked about "forceful if necessary" reunification with Taiwan, Xi Jinping seems like a much more aggressive person, one who would actually do it―especially since the U.S. is frankly showing so much weakness in Ukraine. I know this isn't how EAs are used to thinking, but you have to start from the way dictators think. Xi, much like Putin, seems to idolize the excesses of his country's communist pa...
Imperfect Parfit (written by Daniel Kodsi and John Maier) is a fairly long review (by 2024 internet standards) of Parfit: A Philosopher and His Mission to Save Morality. It draws attention to some of his oddities and eccentricities (such as brushing his teeth for hours, or eating the same dinner every day (not unheard of among famous philosophers)). Considering Parfit's influence on the ideas that many of us involved in EA have, it seemed worth sharing here.
Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."
Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Obviously I can't speak for all of...
There are quite a few posts and some discussion on:
- The value of language learning for career capital
- The dominance of English in EA and the advantages it confers
I expect these issues to become less important very soon as new AI-powered technology gets better. To an extent, the Babel fish is already here and nearly use...
I think that the phrase "unaligned AI" is too vague for a lot of safety research work.
I prefer keywords like:
- scheming
- naive
- deceptive
- overconfident
- uncooperative
I'm happy that the phrase "scheming" seems to have become popular recently; that's an issue that seems fairly specific to me. I have a much easier time imagining preventing an AI from successfully (intentionally) scheming than I do preventing it from being "unaligned."
Around EA Priorities:
Personally, I'm fairly strongly convinced that we should favor interventions that could help the future more than 20 years out. (A much lighter version of "Longtermism").
If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on the margin.
I feel incredibly unsatisfied with the public EA dialogue around AI safety strategy now. From what I ...
I was curious how the "popularity" of the ITN factors has changed in EA recently. In short: mentions of "importance" have become slightly more common, while mentions of "neglectedness" and "tractability" have become slightly less common, by ~2-6 percentage points.
I don't think this method is strong enough to support firm conclusions, but it does track my perception of a vibe-shift towards considering importance more than the other two factors.
Searching the EA forum for the words importance/neglectedness/tractability (in quotation marks for exact matches) in the last year...
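For anyone who wants to replicate the comparison, here's a minimal sketch of the underlying arithmetic in Python; the counts are hypothetical placeholders, not my actual search results:

```python
# Compare each ITN keyword's share of total mentions across two periods,
# reporting the change in percentage points.

counts_previous = {"importance": 520, "neglectedness": 310, "tractability": 290}   # hypothetical
counts_last_year = {"importance": 560, "neglectedness": 270, "tractability": 260}  # hypothetical

def shares(counts):
    """Convert raw hit counts into each term's percentage share of all mentions."""
    total = sum(counts.values())
    return {term: 100 * count / total for term, count in counts.items()}

before, after = shares(counts_previous), shares(counts_last_year)
for term in counts_previous:
    print(f"{term}: {after[term] - before[term]:+.1f} percentage points")
```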
I like Austin Vernon's idea for scaling CO2 direct air capture to 40 billion tons per year, i.e. matching our current annual CO2 emissions, using (extreme versions of) well-understood industrial processes.
...