Hi everyone! I’ll be doing an Ask Me Anything (AMA) here. Feel free to drop your questions in the comments below. I will aim to answer them by Monday, July 24.
Who am I?
I’m Peter. I co-founded Rethink Priorities (RP) with Marcus A. Davis in 2018. Previously, I worked as a data scientist in industry for five years. I’m an avid forecaster. I’ve been known to tweet here and blog here.
What does Rethink Priorities do?
RP is a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figures out strategies for working on those problems, and does that work.
We focus on:
- Wild and farmed animal welfare (including invertebrate welfare)
- Global health and development (including climate change)
- AI governance and strategy
- Existential security and safeguarding a flourishing long-term future
- Understanding and supporting communities relevant to the above
What should you ask me?
Anything!
I oversee RP’s work related to existential security, AI, and surveys and data analysis research, but I can answer any question about RP (or anything).
I’m also excited to answer questions about the organization’s future plans and our funding gaps (see here for more information). We're pretty funding constrained right now and could use some help!
We also recently published a personal reflection on what Marcus and I have learned in the last five years as well as a review of the organization’s impacts, future plans, and funding needs that you might be interested in or have questions about.
RP’s publicly available research can be found in this database. If you’d like to support RP’s mission, please donate here or contact Director of Development Janique Behman.
To stay up-to-date on our work, please subscribe to our newsletter or engage with us on Twitter, Facebook, or LinkedIn.
Doing some napkin-math:
That seems like a lot! Maybe I should discount a bit as some of this might be for the new Special Projects team rather than research, but it still seems like it'll be over $100k per research output.
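For concreteness, here is a minimal sketch of the napkin math, using the $10.7M raised and the ~64-output estimate that come up elsewhere in this thread (both figures are rough, so treat the result as a back-of-the-envelope number, not RP's actual accounting):

```python
# Napkin math: rough cost per research output.
# Assumptions: ~$10.7M raised in 2022 and ~64 published outputs (my estimate).
raised = 10_700_000   # total raised in 2022, USD
outputs = 64          # rough count of published research outputs

cost_per_output = raised / outputs
print(f"~${cost_per_output:,.0f} per output")  # roughly $167k, well over $100k
```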
Related questions:
- Do you think the calculations above are broadly correct? If not, could you share what the ballpark figures might actually be? Obviously, this will depend a lot on the size of the project and other factors but averages are still useful!
- If they are correct, how come this number is so high? Is it just due to multiple researchers spending a lot of time per report and making sure it's extremely high-quality? FWIW I think the value of some RP projects is very high - and worth more than the costs above - but I'm still surprised at the costs.
- Is the cost something you're assessing when you decide whether to take on a research project (when it'
Hi James,
Thanks for your thoughtful question, but I think you’re thinking about this incorrectly for a few reasons:
Firstly, while we raised $10.7M, most of that was earmarked for 2023 as we usually raise money in the current year for the following year. In 2022, we spent around $6.8M on RP core programs, not including special projects and operations to support special projects.
Secondly, we actually have published less than half of our 2022 research. My rough guess is that in 2022 we produced over 100 pieces of work, not ~64 as you estimate. This is for two reasons:
- Some research is confidential for whatever reason and is never intended to be published.
- Some research is intended to be published, but we haven't had the resources or time to publish it yet because public outputs are not a priority for our clients and their funding does not cover it (this is actually something we'd love to get money from the EA public for).
To give a clearer substitute figure, we generally say that $20K-$40K pays for a typical short-term research project and $70K-$100K pays for a typical in-depth research project.
But more importantly I'd add that counting outputs per dollar is not a good way to v...
Love the question
Relatedly, how much of the funding (both for 2022 and for 2024) is for the production of research outputs, compared to how much it is for other operations (like fiscal sponsorships or incubation)?
I think for marginal donations to RP, perhaps the best way to think about this is the cost to produce marginal research. A new researcher hire would cost ~$87K in salary (median; there is of course variation by title level) and ~$28K in other costs (e.g., taxes, employment fees, benefits, equipment, employee travel). We then need ~$31K in marginal spending on operations and ~$28K in marginal spending on management to support a new researcher. So the total cost for one new FTE year of research ends up being ~$174K. If you want a sense of how much it costs to support research at RP and how that balances between operations and other costs, this is a useful breakdown to look at.
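The marginal-cost breakdown above can be sketched as a quick sum (the component figures are the approximate ones quoted in this answer):

```python
# Marginal cost of one new FTE research year at RP, per the figures above (USD).
costs = {
    "researcher salary (median)": 87_000,
    "other employment costs":     28_000,  # taxes, fees, benefits, equipment, travel
    "marginal operations":        31_000,
    "marginal management":        28_000,
}

total = sum(costs.values())
print(f"Total per FTE research year: ~${total:,}")  # ~$174,000
```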
In addition to research and operations, I’d say we produce roughly four other categories of things: fiscal sponsorship, incubated organizations, internal events, and external conferences. Let me go into a bit of detail about that:
- Fiscal sponsorship arrangements pay for themselves out of the sponsored org's budget, so they're not something we'd seek public funding for.
- Incubation work, or work to produce and advise new organizations based on our research (e.g., Condor Ca
I’m guessing what you mean is something like “One of RP’s aims is to advise grantmaking. How many total dollars of grantmaking have you advised?” You might then be tempted to take this number, divide it by our costs, and compare that to other organizations. But this is actually a tricky question to answer, since it has never been as straightforward a relationship as I’d expect, for a few reasons:
- Our advice is marginal and we never make a sole and final decision on any grant. The amount of contribution also varies a lot between grants, so you need some counterfactually-adjusted marginal figure.
- Sometimes our advice leads to grantmakers being less likely to make a grant rather than more likely… how does that count?
- The impact value of the grants themselves is not equal.
- Some of our research work looks into decisions but doesn’t actually change the answer. For example, we look into an area that we think isn’t promising and confirm it isn’t promising, so in absolute terms we got nowhere, but the hits-based fact that it could’ve gone somewhere is valuable. It’s hard to figure out how to quantify this value.
- A large portion of our research builds on itself. For example, our in
Why does it make sense for Rethink Priorities to host research related to all five of the listed focus areas within one research org? It seems like they have little in common (other than, I guess, all being popular EA topics)?
We spoke a little at EAG London about how people underestimate the mental health challenges people face in EA, especially among the most senior people. You indicated a willingness to talk about it publicly. If you're still up for it, could you tell us more about your own personal mental health over the past few years and your perceptions of what mental health is like amongst other effective altruists in leadership positions?
It was in an AMA similar to this one that Will MacAskill revealed that he took antidepressant medication, and that actually had a large impact on me. I have historically struggled with anxiety and depression, and Will’s response was a large part of why I chose to ask my doctor about SSRIs in 2019. Luckily they worked, and hopefully by sharing my experience I can pay this forward.
Howie Lempel has also been very open about his experience. I think mental health concerns are common among EA “leaders” and I think they have been pretty open about it. I hope that continues and we could always use more.
I have been lucky to find antidepressants, talk therapy, regular exercise, and proactively engaging with a supportive friend group to be a great combination to alleviate the ways in which anxiety would otherwise derail my day. I encourage other people suffering from these conditions to explore these options.
Anxiety and depression will still be a lifelong struggle for me. Even with all of this there are still a few days a year where I am so anxious and depressed that I sleep for sixteen hours and barely get out of bed. But it’s much less bad because I’ve been lucky enough to have effective treatment.
RP seems to have a somewhat unique view among research organisations in identifying a funding gap rather than a talent gap for research staff. I would be very curious why you think this is the case and how you have solved the talent constraints.
I disagree; last I checked most AI safety research orgs think they could make more good hires with more money and see themselves as funding-constrained-- at least all 4 that I'm familiar with: RP, GovAI, FAR, and AI Impacts.
Edit: also see the recent Alignment Grantmaking is Funding-Limited Right Now (note that most alignment funding on the margin goes to paying and supporting researchers, in the general sense of the word).
What are some questions you hope someone’s gonna ask that seem relatively unlikely to get asked organically?
Bonus: what are the answers to those questions?
Is RP research donor-driven in terms of priorities? Do you worry that Rethink could become vastly more focused on some cause areas over others due to available funding in the space, as opposed to more neglected areas that could be more impactful?
Aside from RP, what is your best guess for the org that is morally best to give money to?
I feel a lot of cluelessness right now about how to work out cross-cause comparisons and what decision procedures to use. Luckily we hired a Worldview Investigations Team to work a lot more on this, so hopefully we will have some answers soon.
In the meantime, I currently am pretty focused on mitigating AI risk due to what I perceive as both an urgent and large threat, even among other existential risks. And contrary to last year, I think AI risk work is actually surprisingly underfunded and could grow. So I would be keen to donate to any credible AI risk group that seems to have important work and would be able to spend more marginal money now.
As Co-CEO of RP, I am obligated to say that our AI Governance and Strategy Department is doing this work and is actively seeking funding. Our Existential Security and Surveys teams are also very focused on AI and are also funding constrained. You can donate to RP here.
…but given that you asked me specifically for non-RP work here is my ranked list of remaining organizations:
- Centre for Long-Term Resilience (CLTR) does excellent work and appears to me to be exceptionally well-positioned and well-connected to meet the large
Have you considered doing an Animal Charity Evaluators review? I personally think Rethink puts out some of the most important animal-related research out there!
Thanks for the compliment! We have considered it a few times but ultimately declined the opportunity to be reviewed, for a few reasons:
- There are capacity limitations on our end.
- We have concerns about how Rethink Priorities would be viewed by ACE’s audience, given that we do a lot of research work in many different areas.
- We like the opportunity to be constructively critical of ACE’s research work, and we like that they are also willing to challenge and push back on ours. We are concerned this dynamic might get complicated if we were in a clear reviewer-reviewee relationship.
We do work with ACE a lot and are excited to continue to work with them. We'd definitely consider doing an ACE review in future years if invited. We also hope that fans of our work will consider supporting us financially even if we don't have an ACE top charity designation!
What is some RP research that you think was extremely important or view-changing but got relatively little attention from the EA community or relevant stakeholders?
What are some of your proudest 'impact stories' from RP's research? E.g. you did research on insects and now X funders will dedicate $Y million to insect welfare
Are there any notable differences in your ability to have impact in the different areas you conduct research? E.g. one area where important novel insights are easier / harder, or one area where relevant research is more easily translated into practice
Yes. I think animal welfare remains incredibly understudied and thus it is easier to have a novel insight, but also there is less literature to draw from and you can end up more fundamentally clueless. Whereas in global health and development work there is much more research to draw from, which makes it nicer to be able to do literature reviews to turn existing studies and evidence into grant recommendations, but also means that a lot of the low-hanging fruit has already been picked.
Similarly, there is a lot more money available to chase top global health interventions relative to animal welfare or x-risk work, but it is also comparably harder to improve recommendations as a lot of the recommendations are already pretty well-known by foundations and policymakers.
AI has been an especially interesting place to work in because it has been rapidly mainstreaming this year. Previously, there was not much to draw on but now there is much more to draw from and many more people are open to being advised on work in the area. However, there are also many more people trying to get involved and work is being produced at a very rapid pace, which can make it harder to keep up and harder to contribute.
Hi everyone! I'm sorry I didn't get to all the questions today - it was more work than I anticipated to put together. I will answer more tomorrow and I will keep going until everything has an answer!
Re existential security, what are your AGI timelines and p(doom|AGI) like, and do you support efforts calling for a global moratorium on AGI (to allow time for alignment research to catch up / establish the possibility of alignment of superintelligent AI)?
As for existential risk, my current very tentative forecast is that the world state at the end of 2100 will look something like:
73% - the world in 2100 looks broadly like it does now (in 2023), in the same sense that the current 2023 world looks broadly like it did in 1946. That is to say, of course there will be a lot of technological and sociological change between now and then, but by the end of 2100 there still won't be unprecedented explosive economic growth (e.g., >30% GWP growth per year), no existential disaster, etc.
9% - the world is in a singleton state controlled by an unaligned rogue AI acting on its own initiative.
6% - the future is good for humans but our AI / post-AI society causes some other moral disaster (e.g., widespread abuse of digital minds, widespread factory farming)
5% - we get aligned AI, solve the time of perils, and have a really great future
4% - the world is in a singleton state controlled by an AI-enabled dictatorship that was initiated by some human actor misusing AI intentionally
1% - all humans are extinct due to an unaligned rogue AI acting on its own initiative
2% - all humans are extinct due to something else on this list (e.g., some ot...
I have trouble understanding what “AGI” specifically refers to and I don’t think it’s the best way to think about risks from AI. As you may know, in addition to being co-CEO at Rethink Priorities, I take forecasting seriously as a hobby and people actually for some reason pay me to forecast, making me a professional forecaster. So I think a lot in terms of concrete resolution criteria for forecasting questions, and my thinking on these questions has been meaningfully bottlenecked by not knowing what those concrete resolution criteria are.
That being said, being a good thinker also involves figuring out how to operate in some sort of undefined grey space, so I should be comfortable enough with compute trends, algorithmic progress, etc. to give some sort of answer. And so for the type of AI that I struggle to define but am worried about – the kind that has the capability of autonomously causing existential risk – the kind of AI that AI researcher Caroline Jeanmaire refers to as the “minimal menace” – I am willing to tentatively put the following distribution on t...
Good to see that you think the ideas should be explored. I think a global moratorium is becoming more feasible, given the UN Security Council meeting on AI, The UK Summit, the Statement on AI risk, public campaigns etc.
Re compute overhang, I don't think this is a defeater. We need the moratorium to be indefinite, and only lifted when there is a global consensus on an alignment solution (and perhaps even a global referendum on pressing go on more powerful foundation models).
This makes sense given your timelines and p(doom) outlined above. But I urge you (and others reading) to reconsider the level of danger we are now in[1].
Or, ahem, to rethink your priorities (sorry).
What are your thoughts, for you personally, around...
I) Time spent
II) Joy of use
III) Value of information gained
of Manifold vs Metaculus?
I use both Manifold and Metaculus every day and it’s not really clear to me which I spend time on more. The answer is “a lot” to both.
For joy of use, I think Manifold has worked hard to make the forecasting process very seamless and I like that. I also like the gamification of the mana profit system. That being said, I think the questions on Metaculus tend to be more interesting. I personally like having rigorous resolution criteria and I personally prefer being able to give my true probabilities rather than bet up or down. So Metaculus might suit my personality better.
Surprisingly, I don’t really have a clear read on which platform is more accurate. So I think the value of information is optimized by using both platforms. I’m keen to see this researched more.
In answering this question I should disclose that Metaculus pays me money for being a forecaster. I suppose Manifold also indirectly pays me money because RP is part of their Manifold for Charity program. So my feelings towards them are not exactly unbiased.
You said in your "Five years" post that you are planning to do more self-eval and impact assessments, and I strongly encourage this. What are the most realistic bits of evidence you could get from an impact report of Rethink Priorities which would cause you to dramatically update your strategy? (or, another generator: what are you most worried about learning from such assessments?)
What do you think the ideal ratio in terms of resource allocation between thinking/research and doing/action in EA would be? (I recognize those categories are ill-defined, and some activities won't comfortably fall into either bucket. But they seem discrete enough to make a question about balancing different kinds of work worthwhile.)
Rethink feels unique among EA orgs - it's large, not attached to a university, not a foundation. Why aren't there more standalone research shops? Should there be?
RP’s arrangement here is definitely not unique to EA, though I do agree we may be the largest EA-affiliated non-university non-foundation research organization, as my guess is we are a little larger than GiveWell by FTE headcount. Though adding all those caveats ends up with me not saying very much, kinda like talking about being the largest private Catholic university in Vermont.
I think university affiliations definitely matter, especially for getting your work in front of policymakers. My guess is that research organizations choose to affiliate with a university when they can for this reason, and it’s a good one.
But I also like not having to worry about the bureaucracy that comes with interfacing with a university and I think this has historically allowed RP to be more agile and grow faster. I think it’s important that EA have both university and non-university research organizations.
(Obviously everyone would love to be attached to a multi-billion dollar foundation and if we can get more of those we obviously should, but I assume that’s not really an option.)
Hi Peter, thanks for your work. I have several questions:
We do broadly aim to maximize the cost-effectiveness of our research work and so we focus on allocating money to opportunities that we think are most cost-effective on the margin.
Given that, it may be surprising that we work in multiple cause areas, but we face some interesting constraints and considerations:
- There is significant uncertainty about which priority area is most impactful. The general approach at RP has been that we can scale up multiple high-quality research teams in a variety of cause areas more easily than we can figure out which cause area we ought to prioritize. That said, we recently hired a Worldview Investigations Team to work a lot more on the broader question of how to allocate an EA portfolio, and we are also investing a lot more in our own impact assessment. Together, we hope these will give us more insight into how to allocate our work going forward.
- There may be diminishing returns to RP focusing on any one priority area.
- A large amount of resources are not fungible across these different areas. The marginal opportunity cost to taking res
I’m not exactly sure and I think you’d have to ask some other smaller organizations. My best guess is that scaling organizations is genuinely hard and risky, and I can understand other organizations may feel that they work best and are more comfortable with being small. I think RP has been different by:
- Working in multiple different cause areas lets us tap into multiple different funding sources, increasing the amount of money we could raise. It also increased the amount of work we wanted to do and the number of people we wanted to hire.
- By being 100% remote-first from the beginning, we had a much larger talent pool to tap into. I think we’ve also been more willing to take chances on more junior researchers, which has also broadened our talent pool. This allowed us to hire more.
- A general willingness and aspiration to be a big research organization and take on that risk, rather than intentionally going slow.
How has your experience as co-CEO been? How do you share responsibilities? Would you recommend it to other orgs?
I’ve personally liked it. There have been several times when I’ve talked with my co-CEO Marcus about whether one of us should just become CEO and it’s never really made sense. We work well together and the co-CEO dynamic creates a great balance between our pros and cons as leaders – Marcus leads the organization to be more deliberate and careful at the cost of potentially going too slowly and I lead the organization to be more visionary at the cost of potentially being too chaotic.
Right now we split the organization very well: Marcus handles the portfolios pertaining to Global Health and Development, Animal Welfare, and Worldview Investigations… and I handle the portfolios pertaining to AI Governance and Strategy, Existential Security (AI-focused incubation), and Surveys and Data Analysis (currently mostly AI-policy focused, though you may know us mainly from the EA Survey).
I’m unsure if I’d recommend it to other orgs. I think most times it wouldn’t make sense. But I think it does make sense when there are two co-founders with an equally natural claim and desire to claim the CEO mantle, when they balance each other well, and when there is some sort of clear split and division of responsibility.
What's something about you that might surprise people who only know your public, "professional EA" persona?
There is a sense that the journal system is obviously flawed and could be trivially improved. Why hasn't EA done this?
We publish lots of material, we have lots of resources. It seems possible to imagine building a few journals that run in a different way.
And even if others don't respect them, if EA orgs did and they were less onerous to publish in, I imagine outsiders would start to.
I haven’t actually thought much about the academic journal system, though I’m interested in what David Reinstein (former RP staff member) has been doing with his Unjournal.
Peter is one of the best people I know well. He is kind, empathetic, wise, hard working, well-calibrated, to name a few. Generally I want to be more like someone along one axis, whereas I wish I were more like Peter in many. I know that his character has been developed with work and over time so I'd like to commend him for this. And thank him for his hard work and the outputs of it.
I guess that to the reader I'd say that Peter is good in ways you can see, but also as good in many ways that you can't - he gives good advice, he provides insight o...
I think this is a particularly good piece by Peter, though I am crying reading it. https://www.pasteurscube.com/for-samantha-a-eulogy/
Squiggle vs squigglepy?
(1) where do you think forecasting has its best use-cases? where do you think forecasting doesn't help, or could hurt?
interested in your answer both as the co-CEO of an organization making important decisions, and an avid forecaster yourself.
(2) what are RP's plans with the Special Projects Program?
Do you think that promoting alternative proteins is (by far) the most tractable way to make conventional animal agriculture obsolete?
Do you think increasing public funding and support for alternative proteins is the most pressing challenge facing the industry?
Do you think there is expert consensus on these questions?
Dear Mr. Wildeford,
To what extent does your work depend on your own staff vs. the academic EA infrastructure?
There are organizations such as "Effective Thesis" that try to redirect academic resources toward EA research. Do you have any relationship with those organizations? Is there any way for external researchers to collaborate with your organization? Could you elaborate on your vision of how "in-house" and "external" research should be optimally combined at Rethink Priorities?
Thank you very much for your excellent work.
Kind regards,
Arturo