Peter Wildeford

Chief Advisory Executive @ IAPS
19,481 karma · Working (6-15 years) · Washington, DC, USA
www.twitter.com/peterwildeford

Bio

I'm a former data scientist with 5 years industry experience now working in Washington DC to bridge the gap between policy and emerging technology. AI is moving very quickly and we need to help the government keep up!

I work at IAPS, a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.

I'm also a professional forecaster with specializations in geopolitics and electoral forecasting.

Comments

I'm very uncertain about whether AI really is >10x neglected animals, and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue; I could definitely imagine changing my mind on this over the next year. This is why I framed my comment the way I did, hopefully making it clear that donating to neglected animal work is very much an answer I endorse.

I also agree it's very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. I think there are higher-level strategic issues that make the picture very difficult to ascertain even with a lot of relevant information (imo Michael Dickens does a good job of overviewing this, even if I have a lot of disagreements). Also, the private information asymmetry looms large here.

I also agree that "work that aims to get AI companies to commit towards not committing animal mistreatment" is an interesting and incredibly underexplored area. I think this is likely worth funding if you're knowledgable about the space (I'm not) and know of good opportunities (I currently don't).

I do think risk aversion is underrated as a reasonable donor attitude and does make the case for focusing on neglected animals stronger.

Since it looks like you're looking for an opinion, here's mine:

To start, while I deeply respect GiveWell's work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you're planning to do the typical EA project of maximizing the value of your donations in a scope sensitive and impartial way. ...Additionally, I don't think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).

Instead, I think the main difficult judgement call in EA cause prioritization right now is "neglected animals" (eg invertebrates, wild animals) versus AI risk reduction.

AFAICT this also seems to be somewhat close to the overall view of the EA Forum, as you can see in some of the debate weeks (animals smashed humans) and the Donation Election (where neglected animal orgs took all the top spots, followed by PauseAI).

This comparison is made especially difficult because OP funds a lot of AI work but not any of the neglected animal stuff, which subjects the AI work to significantly diminished marginal returns.

To be clear, AI orgs still do need money. I think there's a vibe that all the AI organizations that can be funded by OpenPhil are fully funded and thus AI donations are not attractive to individual EA forum donors. This is not true. I agree that their highest priority parts are fully funded and thus the marginal cost-effectiveness of donations is reduced. But this marginal cost-effectiveness is not eliminated, and it still can be high. I think there are quite a few AI orgs that are still primarily limited by money and would do great things with more funding. Additionally it's not healthy for these orgs to be so heavily reliant on OpenPhil support.

So my overall guess is that if you think AI work is only 10x or less as important in the abstract as work on neglected animals, you should donate to the neglected animals, due to this diminishing marginal returns issue.
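To make the shape of that argument concrete, here's a toy sketch (every number below is hypothetical and chosen purely to illustrate the structure of the argument, not an actual cost-effectiveness estimate):

```python
# Toy illustration of the "abstract importance vs. crowdedness" trade-off.
# All numbers are made up; none of these are real cost-effectiveness estimates.

# Relative importance "in the abstract", with neglected animal work as the baseline.
abstract_value_ai = 10.0
abstract_value_animals = 1.0

# Fraction of that abstract value retained by a marginal dollar today.
# AI work is already heavily funded by OP, so the marginal dollar buys less;
# neglected animal work is largely unfunded, so the marginal dollar keeps most of its value.
marginal_retention_ai = 0.08
marginal_retention_animals = 0.9

marginal_value_ai = abstract_value_ai * marginal_retention_ai                  # 0.8
marginal_value_animals = abstract_value_animals * marginal_retention_animals   # 0.9

breakeven = marginal_retention_animals / marginal_retention_ai                 # ~11.3x
print(marginal_value_ai, marginal_value_animals, breakeven)
# With these made-up discounts, AI has to be more than ~11x as important in the
# abstract before the marginal AI dollar beats the marginal animal dollar.
```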

I currently lean a bit towards AI being >10x neglected animals, and therefore I want to donate to AI stuff, but I really don't think this is settled; it needs more research, and it's very reasonable to come down the other way.

~

Ok so where to donate? I don't have a good systematic take in either the animal space or the AI space unfortunately, but here's a shot:

For starters, in the AI space, a big issue for individual donors is that unfortunately it's very hard to properly evaluate AI organizations without a large stack of private information that is hard to come by. This private info has greatly changed my view of what organizations are good in the AI space. On the other hand you can basically evaluate animal orgs well enough with only public info, and the private info only improves the eval a little bit.

Moreover, in the neglected animal space, I do basically trust the EA Animal Welfare Fund to allocate money well and think it could be hard for an individual to outperform that. Shrimp Welfare Project also looks compelling.

I think the LTFF is worth donating to, but to be clear, I don't think the LTFF actually does all-considered work on the topic - they have an important segment of expertise that seems neglected outside the LTFF, but they definitely don't have the expertise to cover and evaluate everything. Even so, I do think the LTFF would be a worthy donation choice.

If I were making a recommendation, I would concur with recommending the three AI orgs on OpenPhil's list: Horizon, ARI, and CLTR -- they are all recommended by individual OpenPhil staff for good reason.

There are several other orgs I think are worth considering as well, and you may want to think about options that are only available to you as an individual, such as political donations. Or think about areas in the AI space where OpenPhil may not be able to do as much, like PauseAI or digital sentience work, both of which still look neglected.

~

A few caveats/exceptions to my above comment:

  • I'm very uncertain about whether AI really is >10x neglected animals and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue and I could definitely imagine changing my mind on this over the next year.

  • I'm not shilling for my own orgs in this comment to keep it less biased, but those are also options.

  • I don't mean to be mean to GiveWell. Of course donating to GiveWell is very good and still better than 99.99% of charitable giving!

  • Another area I don't consider but probably should is organizations like Giving What We Can that work somewhat outside these cause areas but may have sufficient multipliers to still be very cost-effective. I think meta-work on top of global health and development work (such as improving its effectiveness or getting more people to like it / do it better) can often lead to larger multipliers, since there's orders of magnitude more underlying money and interest in that area in the first place.

  • I don't appropriately focus on digital sentience, which OpenPhil is also not doing and which could also use some help. I think this could be fairly neglected. Work that aims to get AI companies to commit to not mistreating animals is also an interesting and incredibly underexplored area that I don't know much about.

  • There's a sizable amount of meta-strategic disagreement / uncertainty within the AI space that I gloss over here (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements with his conclusions).

  • I do think risk aversion is underrated as a reasonable donor attitude that can vary between donors and does make the case for focusing on neglected animals stronger. I don't think there's an accurate and objective answer about how risk averse you ought to be.

I do agree the EA Funds made a mistake by not returning to fixed grant rounds after the Mega Money era was over. Fixed rounds are so much easier to organize, coordinate, and compare.

Thanks for the comment, I think this is very astute.

~

Recently it seems like the community on the EA Forum has shifted a bit to favor animal welfare. Or maybe it's just that the AI safety people have migrated to other blogs and organizations.

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

I don't think that all AI safety orgs are actually fully funded, since there are orgs that OP cannot fund for reasons other than cost-effectiveness (see Trevor's post and also OP's individual recommendations in AI), and also OP cannot and should not fund 100% of every org (it's not sustainable for orgs to have just one mega-funder; see also what Abraham mentioned here). There is also room for contrarian donation takes like Michael Dickens's.

I basically endorse this post, as well as the use of the tools created by Rethink Priorities that collectively point to quite strong but not overwhelming confidence in the marginal value of farmed animal welfare.

I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.

I get a general vibe in EA (and probably the world at large) that being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin).

I see many EAs erroneously go into research and stick with research despite having very clear strengths on the operational side, insisting that they shouldn't do operations work unless they clearly fail at research first.

I've personally felt this at times where I started my career very oriented towards research, was honestly only average or even below-average at it, and then switched into management, which I think has been much higher impact (and likely counterfactually generated at least a dozen or more researchers).

I really appreciate these dates being announced in advance - it makes it much easier to plan!

I'm not sure I understand well enough what these questions are looking for to be able to answer them.

Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a bad expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I would agree that there likely should be more.

Secondly, it definitely seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to "how do we reduce AI risk?" was "I don't know, I guess we should urgently figure that out", and now there's been an explosion of analysis, threat modeling, and policy ideas - for example, Luke's 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in the development of Responsible Scaling Policies, which is now the predominant risk management framework for AI. And there's way more too.

Unfortunately I can mainly only speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at Rethink Priorities alone, the welfare ranges, CRAFT, and CURVE projects were all done within the past two years. Additionally, the Rethink Priorities model estimating the value of research influencing funders flew under the EA radar IMO, but actually has led to very significant internal shifts in Rethink Priorities's thinking on which funders to work for and why.

I also think a lot of the genesis of the current focus on lead started in 2021, but significant work pushing this forward happened in the 2022-2024 window.

As for new effective organizations, a bit of this depends on your opinions about what is "effective" and to what extent new organizations are "EA", but there are many new initiatives around, especially in the AI space.

Answer by Peter Wildeford

It's very difficult to overstate how much EA has changed over the past two years.

For context, two years ago was 2022 July 30. It was 17 days prior to the "What We Owe the Future" book launch. It was also about three months before the FTX fraud was discovered (though at that time it was massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.

It was also about eight months before the FLI Pause Letter, which I think coincided with roughly when the US and UK governments took very serious and intense interest in AI risk.

I think these two events were really key changes for the EA movement and led to a huge vibe shift. "Longtermism" feels very antiquated now and feels abandoned in the name of "holy crap we have to deal with AI risk occurring within the next ten years". Big Money is out, but we still have a lot of money, and it feels more responsible and somewhat more sustainable now. There are no longer regrantors running around everywhere, for better and for worse.

Many of the people previously working on longtermism have pivoted to "pandemics and AI" and many of the people previously working on pandemic risk have pivoted to "AI x bio intersections". WWOTF captures the current mid-2024 vibe of EA much less than Leopold's "Situational Awareness".

There also has been a massive pivot towards mainstream engagement. Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being "EA-adjacent". These people now take meetings in DC and engage in the mainstream policy process (whereas previously "politics was the mindkiller"). Many AI policy orgs have popped up or become more prominent as a result. Even MIRI, which had just announced "Death with Dignity" only about three months prior to that date of 2022 July 30, has now given up on giving up and pivoted to policy work. DC is a much bigger EA hub than it was two years ago, but the people working in DC certainly wouldn't refer to it as that.

The vibe shift towards AI has also continued to cannibalize the rest of EA as well, for better and for worse. This trend was already in full swing in 2022 but became much more prominent over 2023-2024. There's a lot less money available for global health and animal welfare work than before, especially if you worked on more weird stuff like shrimp. Shrimp welfare kinda peaked in 2022 and the past two years have unfortunately not been kind to shrimp.
