We are excited to release the Global Priorities Institute’s new research agendas. 

GPI’s previous agenda integrated discussion of research priorities in both economics and philosophy. In contrast, we now have distinct agendas for each of our three core research areas: philosophy, economics, and psychology.  

The new philosophy agenda has four sections. Section 1 covers ethical questions relating to the long-term future. Section 2 discusses issues in the philosophy of mind and wellbeing, with a special focus on non-human candidates for moral status (like non-human animals and digital minds). Section 3 discusses work exploring the risks and opportunities posed by advanced artificial intelligence. And Section 4 discusses broad questions about ethical prioritisation, engaging with issues that cut across the first three sections.

The new economics agenda has two main components. Section 1 centres on general or methodological issues in global prioritisation. This includes empirical and theoretical questions about, e.g., cost-effectiveness, forecasting, and optimal philanthropy, as well as normative questions about welfare criteria and decision procedures. Section 2 centres on applied issues where further research in economics may be particularly impactful, such as the economics of growth, population, inequality, governance and policy, catastrophic risks, and artificial intelligence.

The new research agenda for psychology and behavioral science outlines key priorities for GPI’s psychology team and the broader field. It emphasizes the role of beliefs, judgments, and decisions in addressing global challenges. Critical decisions about advanced technologies, including artificial intelligence, pandemic preparedness, or nuclear conflict, as well as policies shaping safety, leadership, and long-term wellbeing, depend on human psychology. 

The agendas cover a diverse range of topics, and we hope that they prompt further work in these areas. Interested researchers are invited to get in touch for potential collaboration.

Comments (11)

> Critical decisions about advanced technologies, including artificial intelligence, pandemic preparedness, or nuclear conflict, as well as policies shaping safety, leadership, and long-term wellbeing, depend on human psychology.

I am surprised by this. Ultimately, almost all of these decisions primarily happen in social and institutional contexts where most of the variance in outcomes is, arguably, not the result of individual psychology but of differences in institutional structures, culture, politics, economics, etc. 

E.g. if one wanted to understand the context of these decisions better (which I think is critical!), shouldn't this primarily motivate a social science research agenda focused on questions such as "how do decisions about advanced technologies get made?", "what are the best leverage points?", etc.?

Put somewhat differently, insofar as it is a key insight of the social sciences (including economics) that societal outcomes cannot be reduced to individual-level psychology because they emerge from the (strategic) interaction and complex dynamics of billions of actors, I am surprised by this focus, at least insofar as the motivation is better understanding collective decision-making and actions taken in key GCR areas.

Thanks for your thoughtful comment—I agree that social and institutional contexts are important for understanding these decisions. My research is rooted in social psychology, so it inherently considers these contexts. And I think individual-level factors like values, beliefs, and judgments are still essential, as they shape how people interact with institutions, respond to cultural norms, and make collective decisions. But of course, this is only one angle to study such issues.

In the context of global catastrophic risks, for instance, my work explores how psychological factors intersect with collective and institutional dynamics. Here are two examples:
Crying wolf: Warning about societal risks can be reputationally risky
Does One Person Make a Difference? The Many-One Bias in Judgments of Prosocial Action 

I think we are relatively close, and we may be at risk of misunderstanding each other.

I am not saying psychology isn't part of this or that this work isn't extremely valuable; I am a big fan of what you and Stefan are doing.

I would just say it is a fairly small part of the question of collective decision-making / societal outcomes. E.g. if one wanted to start a program on understanding decision-making in key GCR areas better, then what I would expect in the next sentence would be something like "we are assembling a team of historians, political scientists, economists, social psychologists, etc." rather than "here is a research agenda focused on psychology and behavioral science." Maybe psychology and behavioral science would be 5-20% of such an effort.

The reason I react strongly here is that I think EA has a tendency to underappreciate social sciences outside economics, and we do so at our own peril; e.g. it seems likely that having more people trained in policy and the social sciences would have avoided the blind spot of being late on AI governance.

Happy to see progress on these.

One worry that I have about them is that they (at least the forecasting part of the economics one, and the psychology one) seem very focused on various adjustments to human judgement. In contrast, I think a much more urgent and tractable question is how to improve the judgement and epistemics of AI systems.

I've written a bit more here.

AI epistemics seems like an important area to me both because it helps with AI safety, and because I expect that it's likely to be the main epistemic enhancement we'll get in the next 20 years or so. 

Thanks, Ozzie! This is interesting. There could well be something there. Could you say more about what you have in mind?

As a very simple example, Amanda Askell stands out to me as someone who used to work in philosophy, then shifted to ML work, where she now seems to be doing important work on crafting the personality of Claude. I think Claude easily has 100k direct users (more through the API) now, and I expect that to expand a lot.

There have been some investigations into trying to get LLMs to be truthful:
https://arxiv.org/abs/2110.06674

And of course, LLMs have shown promise at forecasting:
https://arxiv.org/abs/2402.18563

In general, I'm both suspicious of human intellectuals (for reasons outlined in the above linked post) and suspicious of our ability to improve human intellectuals. On the latter, it's just very expensive to train humans to adopt new practices or methods. It's obviously insanely expensive to train humans in any complex topic like Bayesian statistics.

Meanwhile, LLM setups are rapidly improving, and arguably much more straightforward to improve. There's of course one challenge of actually getting the right LLM companies to incorporate recommended practices, but my guess is that this is often much easier than training humans. You could also just build epistemic tools on top of LLMs, though these would generally target fewer people. 

I have a lot of uncertainty about whether AI is likely to be an existential risk. But I have a lot more certainty that AI is improving quickly and will become a more critical epistemic tool than it is now. It's also just far easier to study AI than to study humans.

Happy to discuss / chat if that could ever be useful! 

Added the research agendas tag -- I think this tag is very helpful for keeping track of these, avoiding overlap, finding relevant research questions, etc.

I'd be very curious to know who's working or considering working on questions mentioned in 1.2.1 Cluelessness, Unawareness, and Deep Uncertainty and/or 4.2.1 Severe Uncertainty, in case anyone reading this happens to be able to enlighten me. :)

Thanks for the post. Nice to see an up-to-date version of GPI's research agenda!

Thanks for sharing this! I think these kinds of documents are super useful, including for graduate students not affiliated with GPI who are looking for impactful projects to focus their dissertations on.

One thing I am struck by in the new agenda is that the scope seems substantially broader than it did in prior iterations of this document; e.g., the addition of psychology and of projects related to AI/philosophy of mind in the philosophy agenda. (This is perhaps somewhat offset by what seems to be a shift away from general cause prioritization research.) 

I am wondering how to reconcile this apparent broadening of mission with what seems to be a decreasing budget (though maybe I am missing something). It looks like OP granted ~$3 million to GPI approximately every six months between August 2022 and October 2023, but there are no OP grants documented in the past year; there was also no Global Priorities Fellowship this year, and my impression is that post-doc hiring is on hold.

Am I right to view the new research agenda as a broadening of GPI's scope, and could you shed some light on the feasibility of this in light of what (at least at first glance) looks like a more constrained funding environment?

EDIT: Eva, who currently runs GPI, notes that my comment paints a misleading picture of the funding environment. While she writes that "the funding environment is not as free as it was previously," the evidence I cite doesn't really bolster this claim, for reasons she elaborates on. I apologize for this.

Thanks for these questions. This probably falls on me to answer, though I am leaving GPI (I have tendered my resignation for personal reasons; Adam Bales will be Acting Director in the interim).

The funding environment is not as free as it was previously. That does suggest some recalibration and different decisions on the margin. However, I’m afraid your message paints a misleading picture and I’m concerned about the potential for inaccurate gossip. I won't correct every inaccuracy, but funding was never as concentrated as your citation of OP's website suggests. For one thing, they are not our only funder, grants support activities that can be many years out into the future, and their timeline does not correspond to when we applied for or received funds.

For another example, the decision about the Global Priorities Fellowship was not taken due to money but due to a) the massive amounts of researcher time it took to run (which is not easily compensated for because it was focused at one time of year), and b) a judgment that the program could run less frequently and still capture most of the benefits by raising the bar and being more selective. We had observed that - as is common in recruitment - the "returns" from the top few participants were much higher than the "returns" from the average participant. PhD students are in school for many years (in my field, economics, 6 years is common), and while in some of those years they may be more focused on their dissertations, running the program only occasionally still leaves ample opportunity to try to catch the students who might be a particularly good fit while they are in their PhD. Running it less frequently certainly implies lower monetary costs, but in this case that was a side benefit rather than the main consideration.

To return to your main question, the broadening of the agenda is a natural result of both a) broadening the team and b) trying to build an agenda for global priorities research that can inform external researchers. Engaging with more and more exceptional researchers at other institutions has shifted our overall strategy and the extent to which we try to produce research “in-house” vs. build and support an external network for global priorities research. This varies somewhat by discipline, but think of there as just being “stubs” for economics and psychology at GPI, with most of the work that we support done outside of it. I don’t mean "support" monetarily (though we have benefited from an active visitors program), but support with ideas and convenings and discussion. In the past few years, we have been actively expanding our external network, mostly in economics and psychology but also in philosophy, and we anticipate that this external engagement will be the main way through which we have impact.

I can talk at length about the structural reasons why this approach makes sense in academia, but that is probably a discussion for another day. (Some considerations: faculty tend to be naturally spread out, and while one might like to have agglomeration effects, this happens more through workshops and the free exchange of ideas, or collaborations, because faculty are tied to institutions whose own incentives are to diversify. You can try to make progress with just focused work by postdocs and pre-docs, but that misses a huge swath of people, and even postdocs and pre-docs become faculty over time. In the long run, if you are being successful, the bulk of your impact has to come from external places. The fact that this research is mostly done at other institutions now is a sign that global priorities research has matured.)

As a final note, consider the purpose of this research agenda. It takes time to write a good research agenda - we embarked on it when I arrived almost two years ago, and we quickly did some initial brainstorming, but then we continued with a longer and more thorough deliberative process, such as through working groups focused on exploring whether a certain topic appeared promising. In each of the growth areas you highlight - AI and psychology - we made a few hires. Those hires hit the ground running with their own research, but they also helped further develop and refine the agenda. Developing this agenda helped shape our internal priorities (though not every topic on the agenda is something we want to pursue ourselves), but the main purpose of this agenda is external. We wouldn't have needed to put nearly so much effort into it if it were for internal use. The agenda is simultaneously forward-looking and naturally driven by the hires we already made.

Hope this helps. With apologies, I'm not likely to get into follow-ups as it takes a long time to respond.

Thanks for your very thoughtful response. I'll revise my initial comment to correct the point I made about funding; I apologize for portraying this inaccurately.

Your points about the broadening of the research agenda make sense. I think GPI is, in many ways, the academic cornerstone of EA, and it makes sense for GPI's efforts to map onto the efforts of researchers working at other institutions and in a broader range of fields. 

And thanks also for clarifying the purpose of the agenda; I had read it as a document describing GPI's priorities for itself, but it makes more sense to read it as a statement of priorities for the field of Global Priorities Research writ large. (I wonder if, in future iterations of the document, or even just on the landing page, it might be helpful to clarify this latter point, because the documents themselves read to me as more internal-facing, e.g., "This document outlines some of the core research priorities for the economics team at GPI." Researchers not affiliated with GPI might, perhaps, be more inclined to engage with these documents if they more explicitly laid out a research agenda for those in philosophy, economics, and psychology aiming to do impactful research.)
