Quick takes

I like Austin Vernon's idea for scaling CO2 direct air capture to 40 billion tons per year, i.e. matching our current annual CO2 emissions, using (extreme versions of) well-understood industrial processes.  

The proposed solution may not be the cheapest out there. Other ideas like ocean seeding or olivine weathering might be less expensive. But most of the science is understood, and it can scale quickly. I'd guess 100,000 workers could build enough sites to meet our 40-billion-ton goal within a decade. The capital expenditure rate would be between $1 t

... (read more)

I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since this summer.

EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:

... (read more)

I agree with so much here. 

Here are my responses to the question you raised: "So why do I feel inclined to double down on effective altruism rather than move on to other endeavours?"

  • I have doubled down a lot over the last ~1.5 years. I am not at all shy about being an EA; it is even on my LinkedIn!
    • This is partly for reasons of integrity and honesty. Yes, I care about animals and AI and like math and rationality and whatnot. All this is a part of who I am.
    • Funnily enough, a non-negligible reason why I have doubled down (and am more pro-EA than before)
... (read more)

Hi, Pepijn here, co-founder of the Tien Procent Club (Dutch org that promotes effective giving) and a highly irregular reader of this forum.

Here's my quick take: 

I think there's an opportunity to make much better EA videos simply by interviewing founders of the most effective non-profits. The medium is the message, and video lends itself perfectly to conveying emotions. It feels like there's a lot of room left to produce entertaining, exciting, and information-dense videos on effective non-profits.

Explanation:
I know there are some explainers ab... (read more)

14
Eli Rose
Hey! I lead the GCRCB team at Open Philanthropy, which as part of our portfolio funds "meta EA" stuff (e.g. CEA). I like the high-level idea here (haven't thought through the details). We're happy to receive proposals like this for media communicating EA ideas and practices. Feel free to apply here, or if you have an earlier-stage idea, DM me on here with a short description — no need for polish — and I'll get back to you with a quick take about whether it's something we might be interested in. : )

Related Q: is there a list of EA media projects that you would like to see more of but that don't currently exist?

I just sent out the Forum digest, and I thought there were more underrated (and slightly unusual) posts than usual this week, so I'm re-sharing some of them here:

Thank you Will. You're not the only person to point out the ineffectual title, so I have updated it to something a bit more clickbaity. I considered "chlorfenapyr DESTROYS malaria", but decided to tone it back a little.

I can't seem to find much EA discussion about [genetic modification of chickens to lessen suffering]. This naively seems like a promising area to me. I imagine others have investigated and decided against further work; I'm curious why.

Showing 3 of 8 replies
14
emre kaplan🔸
Lewis Bollard: "I agree with Ellen that legislation / corporate standards are more promising. I've asked if the breeders would accept $ to select on welfare, & the answer was no b/c it's inversely correlated w/ productivity & they can only select on ~2 traits/generation."

Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."

I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint: "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue."

3
Charlie_Guthmann
Adding on that Whole Foods (https://www.wholefoodsmarket.com/quality-standards/statement-on-broiler-chicken-welfare) has made some commitments to switching breeds. We discussed this briefly at a Chicago EA meeting; I didn't get much info, but they said that going and protesting/spreading the word to Whole Foods managers to get them to switch breeds showed some success.

I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe. That version is the targeting of really young people (for this quicktake, I will say anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I think I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we would not have to do, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and ... (read more)

Showing 3 of 10 replies

I think the possibility that outreach to younger age groups[1] might be net negative is relatively neglected. That said, the two possible reasons suggested here didn't strike me as particularly conclusive.

The main reasons why I'm somewhat wary of outreach to younger ages (though there are certainly many considerations on both sides):

  • It seems quite plausible that people are less apt to adopt EA at younger ages because their thinking is 'less developed' in some relevant way that seems associated with interest in EA.
    • I think something related to but disti
... (read more)
3
Jamie_Harris
You highlight a couple of downsides. Far from all of the downsides of course, but none of the advantages either. I feel a bit sad to read this since I've worked on something related[1] to what you post about for years myself. And a bit confused why you posted this; do you think EAs are underrating these two downsides? (If not, it just feels a bit unnecessarily disparaging to people trying their best to do good in the world.) Appreciate you highlighting your personal experience though; that's a useful anecdote.

1. ^ "Targeting of really young people" is certainly not the framing I would use; there's genuine demand for the services that we offer, as demonstrated by the tens of thousands of applications received across Leaf, Non-Trivial, Atlas, Pivotal, and SPARC/ESPR. But it's of course accurate in the sense that our target audience consists of (subsets of) young people.
2
Joey🔸
Hey Jamie, sorry my post made you feel bad. Indeed there are more nuances, and it would be interesting to compile a more advanced pros and cons list on the topic of targeting younger folks. When AIM and I have thought about the pros and cons in more depth, we have tended to come out negative on it: specifically, I do think both value drift and flow-through ecosystem effects on other parts of the movement are on average under-valued by EAs. I wanted to call some attention to these two cons.

What is your AI Capabilities Red Line Personal Statement? It should read something like "when AI can do X in Y way, then I think we should be extremely worried / advocate for a Pause*". 

I think it would be valuable if people started doing this; we can't feel when we're on an exponential, so it's likely we will have powerful AI creep up on us.

@Greg_Colbourn just posted this, and I have an intuition that people are going to read it and say "while it can do Y it still can't do X"

*in the case that you think a Pause is ever optimal.

Some of my thoughts on funding.

It's giving season, and I want to finally get around to publishing some of my thoughts and experiences around funding. I haven't written anything yet because I feel like I'm mostly just revisiting painful experiences and will end up writing some angry rant. I have ideas for how things could be better, so hopefully this can lead to positive change, not just more complaining. All my experiences are in AI Safety.

On Timing: Certainty is more important than speed. The total decision time is less important than the overdue time. Expe... (read more)

Showing 3 of 16 replies
4
Habryka
Lightspeed Grants and the S-Process paid $20k honorariums to 5 evaluators. In addition, running the round probably cost 8-ish months of Lightcone staff time, with a substantial chunk of that being my own time, which is generally at a premium as the CEO (I would value it organizationally at ~$700k/yr on the margin, with increasing marginal costs, though to be clear, my actual salary is currently $0). It also had some large diffuse effects on organizational attention. This makes me think it would be unsustainable for us to pick up running Lightspeed Grants rounds without something like ~$500k/yr of funding for it. We distributed around ~$10MM in the round we ran.
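A rough reconstruction of that cost arithmetic (my framing, not Habryka's; pricing all 8 months at the CEO rate gives an upper bound, since he only says a "substantial chunk" was his time):

$$5 \times \$20\text{k} = \$100\text{k} \;\text{(honorariums)}, \qquad \tfrac{8}{12}\,\text{yr} \times \$700\text{k/yr} \approx \$467\text{k} \;\text{(staff-time upper bound)}$$

Discounting the non-CEO share of those months lands in the neighborhood of the ~$500k/yr figure, i.e. roughly 5% of the ~$10MM distributed.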
3
JJ Hepburn
I’m hesitant to ask you about this, so feel free to pass. Can you say more about how it is that your current salary is $0? I think most people would be surprised that you are not currently receiving a salary. I also assume that, as a not-for-profit founder, even when you have had a salary it was lower than that of most or all of your team.

I donate more to Lightcone than my salary, so it doesn't really make any sense for me to receive a salary, since that just means I pay more in taxes. 

I of course donate to Lightcone because Lightcone doesn't have enough money. 

Equal Hands — 2 Month Update

Equal Hands is an experiment in democratizing effective giving. Donors simulate pooling their resources together and voting on how to distribute them across cause areas. All votes count equally, independent of someone's ability to give.

You can learn more about it here, and sign up to learn more or join here. If you sign up before December 16th, you can participate in our current round. As of December 7th, 2024 at 11:00pm Eastern time, 12 donors have pledged $2,915, meaning the marginal $25 donor will move ~$226 in expect... (read more)
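A quick sanity check of that marginal-donor figure (assuming, as I read the setup, that the pooled total is divided equally among all voters):

$$\frac{\$2{,}915 + \$25}{12 + 1} = \frac{\$2{,}940}{13} \approx \$226$$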

I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job myself?

In that form, the argument is naive and implausible... (read more)

Showing 3 of 11 replies
6
Ryan Greenblatt
I think there are a bunch of meta effects from working in an object-level job:

  • The object-level work makes people more likely to enter the field, as you note. (Though this doesn't just route through 80k and goes through a bunch of mechanisms.)
  • You'll probably have some conversations with people considering entering the field from a slightly more credible position, at least if the object-level stuff goes well.
  • Part of the work will likely involve fleshing stuff out so people with less context can more easily join/contribute. (True for most / many jobs.)
3
Chris Leong
Your AI timelines would likely be an important factor here.

Agree. If you think career switches take 18 months but timelines are 72 months, then direct work is more important?
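Making that arithmetic explicit (a toy model, assuming a recruit's impact scales with the time left for direct work after switching):

$$\frac{18}{72} = 25\%\ \text{of a 6-year window spent switching}, \qquad \frac{18}{24} = 75\%\ \text{of a 2-year window}$$

So the shorter your timelines, the larger the share of each career-switcher's window the transition consumes, and the relatively better direct work looks.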

Matthew Yglesias wrote a Giving Tuesday piece about GiveDirectly that makes a compelling case for effective giving to a general audience. The article addresses why one should consider directing charity to the Global South, what makes cash transfers an appealing intervention, and how this approach can be reconciled with the desire to volunteer locally.

https://www.slowboring.com/p/you-can-help-the-poorest-people-in

Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:

  • I plan to add another post in my "EA EDA" sequence analysing Forum trends in 2024. My pre-registered prediction is that we'll see 2023 patterns continue: declining engagement (though possibly plateauing) and AI Safety further cementing its dominance across posts, karma, and comments.
  • I'll also try to do another end-of-year Forum awards post (see here for last year's) though with slightly different categories.
  • I'm working on an analysis of EA's po
... (read more)

The very well-written Notes on Effective Altruism pulls together some thoughts I've had over the years, and makes me think we should potentially drop the "how to do good in the best way possible" framing when introducing EA in favour of the "be more effective when trying to help others" framing. This honestly seems straightforwardly good to me from a number of different angles, and I think we should seriously consider changing our overall branding to this as a tagline instead.

But am I missing something here? Is there a reason the latter is worse than I think? Or some hidden benefits to the former that I'm not weighing? 

6
GV
The context might vary and make me reconsider in certain instances, but I generally think it's important to say that there are ways to act that are orders of magnitude more effective than others. So yes, insist on "more" rather than on "the most possible"... But with an emphasis on the fact that there are resources to help you and guide you towards options that are likely to be immensely more impactful than most actions.
8
Ben Millwood🔸
I think there's a big difference between "more effective" and "most effective", and one of the most important and counterintuitive principles of EA is that trying to find the best option rather than just a good option can make a huge difference to how much good you do -- we have to prioritise between different goods, and this is painful to do (hence easy to avoid) but really important.

Yeah, I think the tension here is between framing the motivation in a way that can appeal to all people, which waters it down a bit, and stating it in full and accepting that you're only ever going to be speaking to a small portion of people.

Taking only the "most effective" path towards doing good, when that looks like working on top causes or donating a significant amount, just isn't open to 90% or more of the population. Is it really wise to focus a movement so narrowly that you rule out most people in the world being able to find a place in it?

Perhaps a compromise is something like the below, where "do more good" is the motto, but with an emphasis on how big that difference can be.

After following the Ukraine war closely for almost three years, I naturally also watch China's potential for military expansionism. Whereas past leaders of China talked about "forceful if necessary" reunification with Taiwan, Xi Jinping seems like a much more aggressive person, one who would actually do it―especially since the U.S. is frankly showing so much weakness in Ukraine. I know this isn't how EAs are used to thinking, but you have to start from the way dictators think. Xi, much like Putin, seems to idolize the excesses of his country's communist pa... (read more)

Imperfect Parfit (written by Daniel Kodsi and John Maier) is a fairly long review (by 2024 internet standards) of Parfit: A Philosopher and His Mission to Save Morality. It draws attention to some of his oddities and eccentricities (such as brushing his teeth for hours, or eating the same dinner every day (not unheard of among famous philosophers)). Considering Parfit's influence on the ideas that many of us involved in EA have, it seemed worth sharing here.

Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."

Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Obviously I can't speak for all of... (read more)

Showing 3 of 18 replies
4
Vasco Grilo🔸
Hello Habryka. Could you link to a good overview of why taking loans does not make sense even if one thinks there is a high risk of human extinction soon? Daniel Kokotajlo said: I should also clarify that I am open to bets about less extreme events. For example, global unemployment rate doubling or population dropping below 7 billion in the next few years.

I do actually have trouble finding a good place to link to. I'll try to dig one up in the next few days.

2
Vasco Grilo🔸
Thanks for clarifying, Jason. I think people like me proposing public bets to whoever has extreme views, or asking them whether they have considered loans, should be transparent about their views. In contrast, fraud is "the crime of obtaining money or property by deceiving people".

There are quite a few posts/some discussion on:

  1. The value of language learning for career capital

  2. The dominance of English in EA and the advantages it confers

See, e.g., https://forum.effectivealtruism.org/posts/qf6pGhm9a7vTMFLtc/english-as-a-dominant-language-in-the-movement-challenges

https://forum.effectivealtruism.org/posts/k7igqbN52XtmJGBZ8/effective-language-learning-for-effective-altruists

I expect these issues to become less important very soon as new AI-powered technology gets better. To an extent, the Babel fish is already here and nearly use... (read more)

I think that the phrase ["unaligned" AI] is too vague for a lot of safety research work.

I prefer keywords like:
- scheming 
- naive
- deceptive
- overconfident
- uncooperative

I'm happy that the phrase "scheming" seems to have become popular recently; that's an issue that seems fairly specific to me. I have a much easier time imagining preventing an AI from successfully (intentionally) scheming than I do preventing it from being "unaligned."

2
Ian Turner
Hmm, I would argue that an AI which, when asked, causes human extinction is not aligned, even if it did exactly what it was told.

Yea, I think I'd classify that as a different thing. I see alignment typically as a "mistake" issue, rather than as a "misuse" issue. I think others here often use the phrase similarly. 

Around EA Priorities:

Personally, I'm fairly strongly convinced to favor interventions that could help the future beyond 20 years from now. (A much lighter version of "Longtermism").

If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on the margin.

I feel incredibly unsatisfied with the public EA dialogue around AI safety strategy now. From what I ... (read more)

Showing 3 of 7 replies
19
Peter Favaloro
Hi Ozzie – Peter Favaloro here; I do grantmaking on technical AI safety at Open Philanthropy. Thanks for this post, I enjoyed it. I want to react to this quote:

"…it seems like OP has provided very mixed messages around AI safety. They've provided surprisingly little funding / support for technical AI safety in the last few years (perhaps 1 full-time grantmaker?)"

I agree that over the past year or two our grantmaking in technical AI safety (TAIS) has been too bottlenecked by our grantmaking capacity, which in turn has been bottlenecked in part by our ability to hire technical grantmakers. (Though also, when we've tried to collect information on what opportunities we're missing out on, we've been somewhat surprised at how few excellent, shovel-ready TAIS grants we've found.)

Over the past few months I've been setting up a new TAIS grantmaking team, to supplement Ajeya's grantmaking. We've hired some great junior grantmakers and expect to publish an open call for applications in the next few months. After that we'll likely try to hire more grantmakers. So stay tuned!

That sounds exciting, thanks for the update. Good luck with team building and grantmaking!

8
Ozzie Gooen
That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP. For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post. While a bunch of these salaries are on the high side, not all of them are.

I was curious how the "popularity" of the ITN factors has changed in EA recently. In short: Mentions of "importance" have become slightly more popular, and both "neglectedness" and "tractability" have become slightly less popular, by ~2-6 percentage points.

I don't think this method is strong enough to draw conclusions from, but it does track my perception of a vibe-shift towards considering importance more than the other two factors.

Searching the EA forum for the words importance/neglectedness/tractability (in quotation marks for exact matches) in the last year... (read more)
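For what it's worth, here is a minimal sketch of the tallying this implies. All counts below are made up for illustration; the real numbers would come from exact-match Forum searches, not from this script:

```python
# Toy calculation of the "popularity" shift described above: the share of
# posts mentioning each ITN term, and the change between two years measured
# in percentage points (not percent). Counts are hypothetical placeholders.

counts = {
    #              (mentions_prev_year, mentions_last_year)
    "importance":    (520, 714),
    "neglectedness": (400, 294),
    "tractability":  (360, 252),
}
total_posts = {"prev": 4000, "last": 4200}  # hypothetical post totals

for term, (prev, last) in counts.items():
    share_prev = 100 * prev / total_posts["prev"]
    share_last = 100 * last / total_posts["last"]
    delta_pp = share_last - share_prev  # percentage-point change
    print(f"{term}: {share_prev:.1f}% -> {share_last:.1f}% ({delta_pp:+.1f} pp)")
```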
