Authors: Siebe Rozendal, Justin Shovelain, David Kristoffersson
Crossposted to LessWrong
Overview
To achieve any ambitious goal, some strategic analysis is necessary. Effective altruism has ambitious goals and focuses heavily on doing research. To understand how to best allocate our time and resources, we need to clarify what our options in research are. In this article, we describe strategy research and relate it to values research, tactics research, informing research, and improvement research. We then apply the lens of strategy research to existential risk reduction, a major cause area of effective altruism. We propose a model in which the marginal value of a research type depends strongly on the maturity of the research field. Finally, we argue that strategy research should currently be given higher priority than other research in existential risk reduction because of the significant amount of strategic uncertainty, and we provide specific recommendations for different actors.
Introduction
Effective altruism is regularly framed as “figuring out how to do the most good, and then doing it.” However, figuring out how to do the most good is not easy. Different groups reach different conclusions. So how do we figure out how to do the most good?
Quite obviously, the first step is to figure out our values. We need to know what we roughly mean by ‘the most good.’ However, once our moral uncertainty is significantly diminished, what is the next step in figuring out how to do the most good? We believe the next step should be strategy research: high-level research on how to best achieve a high-level goal. A brief case was made for strategic analysis by Nick Bostrom in Superintelligence (p. 317):
"Against a backdrop of perplexity and uncertainty, [strategic] analysis stands out as being of particularly high expected value. Illumination of our strategic situation would help us target subsequent interventions more effectively. Strategic analysis is especially needful when we are radically uncertain not just about some detail of some peripheral matter but about the cardinal qualities of the central things. For many key parameters, we are radically uncertain even about their sign…”
This was written in the context of existential risk from AI. We believe it applies to existential risks in general, and that strategy research should be a core part of other effective altruism areas as well. However, this leaves many open questions. What is strategy research? How does strategy research relate to other types of research? When should strategy research be prioritized and when should it not be? These questions are relevant to potential and current researchers, research managers, and funders. The answers are necessary to allocate resources effectively. This article also provides motivation for the founding of the existential risk strategy research organization Convergence. Convergence will be publishing more strategic analyses going forward. This article represents our current best (and somewhat simplified) understanding of the concepts outlined. Because we strive to clarify basic concepts and arguments, we have left out some of the finer details and complexities. We intend to further disentangle, clarify, and develop the ideas in the future. Furthermore, the underlying ideas presented here generalize to other fields, but some fields are in a different stage than existential risk reduction is and therefore need different research priorities.
To clarify what we are arguing for, we first describe strategy research and relate it to other types of research. We then argue that strategy research is important for reducing existential risk. We propose that the marginal value of strategy research depends on the maturity of the research field. We conclude that the current immaturity of the existential risk research field makes further strategy research highly valuable.
What is strategy research?
Strategy research seems intuitively valuable. But what is it about more precisely? Understanding this and the different options in research will help us make good decisions about how to allocate our resources and how to direct our research efforts. In this section, we position strategy research within a framework of different research types in effective altruism, we give an explicit definition, and we distinguish our terms from other commonly used terms.
Five classes of effective altruism research
To put strategy research in context with other types of research, we have developed a classification of different research types. Naturally, the classification is a simplification, and research will often not fit neatly into a single category.
The research spine of effective altruism: three levels
We can approach ‘figuring out what to do’ at three different levels of directness (which are inspired by the same kind of goal hierarchy as the Values-to-Actions Chain). Most indirectly, we can ask ‘what should we value?’ We call that values research, which is roughly the same as ethics. From our values, we can derive a high-level goal to strive for. For longtermist values, such a goal could be to minimize existential risk.[1] For another set of values, such as animal-inclusive neartermism, the high-level goal could be to minimize the aggregate suffering of farm animals.[2]
More directly, we can ask ‘given our goal, how can we best achieve it?’ We call the research to answer that question strategy research. The result of strategy research is a number of strategic goals embedded in a strategic plan. For example, in existential risk reduction, strategy research could determine how to best allocate resources between reducing various existential risks based on their relative risk levels and timelines.
Most directly, we can ask ‘given our strategic plan, how should we execute it?’ We call the research to answer that question tactics research. Tactics research is similar to strategy research, but operates at a more direct level, which makes it more specific. For example, in existential risk reduction, tactics research could take one of the subgoals from a strategic plan, say ‘reduce the competitive dynamics surrounding human-level AI’, and ask a specific question that deals with part of the issue: ‘How can we foster trust and cooperation between the US and Chinese governments on AI development?’ In general, less direct questions have more widely relevant answers, but they also provide less specific recommendations for actions to take.
Finally, the plans can be implemented based on the insights from the three research levels.
Each level of research requires some inputs, which it then processes to produce some outputs for the more direct level of research. For example, strategy research requires a goal or value to strive for, and this needs to be informed by moral philosophy.[3] When strategy research is skipped, tactics research and implementation are only driven by implicit models. For example, a naive and implicit model is ‘when something seems important, try to persuade influential people of that.’ Acting on such a model can do harm. In emerging research fields, implicit models are often wrong because they have received less thought and have not been exposed to feedback. For tactics research and implementation to be effective, they should often be driven by a well-informed and thoughtfully crafted strategy.
The boundary between strategy and tactics is gradual rather than sharp. Thus, some research questions fall somewhere in between. Note as well that implementation is very simplified here; it refers to a host of actions. Implementation can be ‘doing more research’, but it can also be ‘trying to change opinions of key stakeholders’ or ‘building up research capacity.’
A spine is not sufficient: informing and improvement research
You could say that these levels form a spine: they create a central structure that supports and structures the rest of the necessary building blocks. For instance, strategic clarity makes information more useful by giving it a structure to fit into. To illustrate this, imagine learning a piece of information about an improved method of gene writing. Without any strategic understanding, it would just be an amorphous piece of information; it would not be clear how learning it should affect your actions. However, with more strategic clarity it would be more clear how this new method could affect important parameters, the possible consequences of that, and how one should best react to it.
Still, a spine is not a complete body; it needs additional building blocks. Strategic clarity cannot be achieved without being sufficiently informed about the state of the world, or without understanding how to effectively conduct research in a domain.
Therefore, in addition to the research levels, we also identify two additional research classes:[4] informing research and improvement research. Informing research mostly concerns questions about what the world is like. They can be very important questions, and science has built an enormous trove of such knowledge that effective altruism can draw from. Improvement research helps to improve other types of research by identifying important considerations, by improving existing research methods, and by identifying useful models from other fields. Philosophy of science, epistemology, mathematics, economics, and computer science can all be used for improvement research. For example, improvement research focused on ethics could discuss the role that intuitions should have in the methodology of moral philosophy.
A definition of strategy research
Based on the model of the research classes above, we will formulate a definition of strategy research. We want a definition that is simple and captures the core purpose of strategy research. Strategy research is an imprecise concept, so the definition should reflect that. We also want the term to correspond to how people have used it in the past. For these reasons, we propose the following definition for strategy research:
High-level research on how to best achieve a high-level goal.
Thus, the central strategy question is “how do we best achieve our high-level goal?” And to achieve a goal, you implicitly or explicitly need to form and act on plans. The challenge of strategy research is to figure out the best plans: those that best achieve a particular high-level goal given the existing constraints. To figure out the best plans, a lot of different activities are necessary. It requires, among other things, understanding which parts of the world are relevant for making plans, what actions lead to what consequences, how to compose actions into plans, and how to prioritize between plans.
This means that, in order to figure out the best plans, strategy research will involve a substantial amount of informing research, as well as improvement research. For example, if you have a model of how different risk levels and timelines should affect resource allocation, you also need to know what the different risk levels and timelines are (i.e. informing research) in order to form a comprehensive strategic plan. This research is high-level because it is focused on plans to achieve a high-level goal. In contrast, research on figuring out one’s values is top-level, and research on how to best achieve a tactical goal is low-level.[5]
How do other research terms in effective altruism relate to this framework?
In effective altruism, there have been many terms used for different types of research. Each organization uses a term slightly differently, and it is difficult to find precise definitions of these terms. Let’s briefly consider some research terms in effective altruism that relate to strategy research.
Cause prioritization, prioritization research, global priorities research
These three terms have been used interchangeably to describe roughly similar types of research: prioritization between and within cause areas.[6] Prioritization between cause areas overlaps significantly with values research, although in practice it often does not deal with the more fundamental issues in ethics. Prioritization within cause areas overlaps significantly with strategy research.
Macrostrategy
This term is mostly used by FHI, and seems to refer to uncovering crucial considerations with regard to improving the long-term future. Crucial considerations can “radically change the expected value of pursuing some high-level sub goal.”[7] A high-level sub goal refers here to things like “increase economic progress” or “decrease funding into AGI research”. The intention appears to focus on the higher-level questions of strategy research (hence “macro”) although FHI also classifies their paper on the unilateralist’s curse as macro-strategy. That concept does not seem to be a crucial consideration, but a strategic consideration for multiple existential risks.
AI strategy
As the term has been used in effective altruism, AI strategy research is simply strategy research focused on reducing existential risk from AI specifically.[8]
Charity evaluation
A number of organizations evaluate interventions and charities, or select charities to donate to (e.g. GiveWell, Animal Charity Evaluators, Open Philanthropy Project, Founders Pledge, Rethink Priorities). Although we would not classify charity evaluation itself as strategy research, it heavily relies on strategic views, and many of the mentioned organizations perform some kind of strategy research. As an example, for neartermist human-centric values, we would call GiveWell’s research to identify their priority programs strategy research, and would call their evaluation of charities tactics or tactics-informing research.
Why strategy research is important to reduce existential risk
Because of strategic uncertainty, we believe that more strategy research is currently particularly important for reducing existential risk. In this section, we give our main reasons and support them with a model in which the value of a research class depends on the maturity of the field. We then note some other considerations that affect the importance of strategy research and discuss how strategy research could do harm.
The current stage of existential risk research makes strategy research valuable
Strategy research makes the most sense when (1) a community knows roughly what it wants (e.g. reduce existential risk), (2) it is unlikely that this goal will undergo substantial changes from further research or reflection on values, and (3) the field has not yet reached strategic clarity. Strategic uncertainty is the stage where the expected value of strategy research is highest; it lies in between the stages of value uncertainty and strategic clarity.
Here we argue that doing strategy research should be a high priority because it is currently unclear how to best achieve existential risk reduction. Strategic uncertainty means that we are uncertain which actions are (in expectation) valuable, which are insignificant, and which are harmful. This implies that there is valuable information to be gained.
We are currently strategically uncertain
To see whether we are actually strategically uncertain, we can ask what strategic clarity would look like. The further we are away from that ideal, the more strategically uncertain we are. With strategic clarity we would know what to do. Specifically, we would know...
- who the relevant actors are
- what actions are available to us
- how the future might develop from those actions
- what good sequences of actions (plans) are
- how to best prioritize plans
- that we have not missed any important considerations
We currently have only a basic understanding of each of these in existential risk reduction. The claim that we are strategically uncertain in the field of existential risk seems widely shared. For example, it is echoed in a post by Carrick Flynn, and again in Superintelligence (p. 317).
Strategic uncertainty implies there is information to be gained
The cost of strategy research is only worth it if it significantly improves our understanding of which actions are (in expectation) valuable, which are insignificant, and which are harmful. Useful information has been gained in the past by uncovering crucial considerations that had a massive influence on our current priorities and plans. These include the separate realizations that AI and synthetic biology might be existential risks. More crucial considerations could be uncovered by strategy research. In addition, there are many current open questions to which different answers would imply substantially different priorities. Examples include ‘how widely is existential risk distributed over different possible causes?’, ‘when would an AI takeoff happen?’, and ‘how likely is human civilization to recover after collapse?’. There is still substantial disagreement on these questions, and progress on these questions would reduce our strategic uncertainty.
In addition, the information needs to be acquirable for a reasonable amount of effort. Strategy research would not be valuable if it was completely intractable. We believe some actors and attempts at strategy research can succeed, but it is hard to predict success beforehand.
Strategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information
Interacting with one’s environment can be highly informative. Interacting with a complex system often yields a substantial amount of information that cannot be obtained by outside observation. For example, it is hard to assess how receptive policy makers are towards existential risk reduction without engaging with them. Interacting with them would allow efficient learning about the domain.
However, this information comes with a risk. Strategic uncertainty also implies that tactical recommendations and direct implementations can be harmful. For example, approaching the wrong policy makers, or approaching them in the wrong way, can reduce the chance that existential risk will be taken seriously by governments. Taking uninformed action to reduce existential risk may backfire catastrophically in hard-to-reverse and hard-to-predict ways. This reduces the net value of that action.[9]
In contrast, strategy research is less likely to directly cause harm because it gives general and imprecise recommendations. This means they are less likely to be wrong and that they are further away from implementation, which allows for more opportunities to correct mistakes. Strategy research is also self-correcting: it can change its focus and method based on its own generated insights; part of strategy research is to analyze whether we should continue doing strategy research.
A model of research value as a function of a field’s maturity
We have argued that we are currently strategically uncertain with respect to existential risk reduction, and that this implies strategy research is a high priority. However, we can make a more complex model than “first solve values, then solve strategy, then solve tactics, then implement plans”. In practice, resources (e.g. capital and labour) are spread over multiple levels of research and become specialized. The optimal allocation of marginal resources depends on the current state of knowledge.
We propose a model in which the cumulative value of research levels (i.e. values, strategy, and tactics research) follows s-curves. S-curves are described as “fundamental patterns that exist in many systems that have positive feedback loops and constraints. The curve speeds up due to the positive feedback loop, then slows down due to the constraints.” In this section, we describe the different constraints and the positive feedback loop that creates the s-shaped curve we expect the value of a research level to exhibit.
Early phase: constraints need to be addressed
When research on a particular level (e.g. strategy research) in a particular field (e.g. x-risk reduction) is just getting started, we expect progress to be slowed down by two constraints. The first constraint is a lack of clarity on the higher level. For instance, it is not valuable to try to figure out a good strategy when you are uncertain about your values, because you are much more likely to work on questions that turn out not to be very relevant to your values. The first constraint should be addressed at the higher level.
The second constraint is that doing early research in a field is hard. There is not yet an established paradigm; the problems are messy, entangled, and vague, rather than structured, independent, and clear. What is needed in an early stage is disentanglement: structuring the research field, identifying the central questions, and clarifying concepts. This constraint cannot be addressed by research at a higher level (resolving moral uncertainty does not help us any further in our strategic uncertainty). Consequently, it needs to be addressed head-on, which means that progress will be slow at first.
Middle phase: positive feedback loops create exponential growth
The middle phase starts when the constraints become weaker. Answers to higher-level questions narrow down the range of relevant questions at the lower level. Generally, we expect that a higher proportion of research projects produce value, because irrelevant questions can be better identified beforehand. Furthermore, as the field becomes more structured, each successful piece of research tends to identify multiple new and compelling research questions. This is a period of exponential growth.
Late phase: new constraints arise
The late phase starts when new constraints arise. One constraint is that the big questions have either been solved or been found intractable. The remaining questions will either be conceptually hard, require information that is not (yet) available, or be lower-level questions. At this point, the lower research level has progressed through its own early phase, and the marginal value of doing research at the lower level surpasses the value of doing research at the current level.
In summary, as our insight progresses, the marginal value of research shifts towards lower-level questions. A good heuristic in this model is to ‘do research at the highest level that is most sensitive to new information’.
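To make this heuristic concrete, here is a small illustrative sketch (in Python) that models the cumulative value of each research level as a logistic s-curve and allocates marginal effort to the level whose curve is currently steepest. The curve parameters and maturity levels are purely hypothetical numbers chosen for illustration, not estimates.

```python
import math

def marginal_value(effort, ceiling, growth, midpoint):
    """Slope of a logistic s-curve: value gained per extra unit of effort.

    ceiling  -- late-phase constraint: total value the level can yield
    growth   -- strength of the positive feedback loop (middle phase)
    midpoint -- effort at which growth is fastest
    """
    s = 1.0 / (1.0 + math.exp(-growth * (effort - midpoint)))
    return ceiling * growth * s * (1.0 - s)

# Hypothetical state of each research level: values research is late-phase
# (well past its midpoint), strategy research is mid-phase, and tactics
# research is still early-phase (held back by constraints from above).
levels = {
    "values":   dict(effort=9.0, ceiling=1.0, growth=1.0, midpoint=4.0),
    "strategy": dict(effort=4.0, ceiling=1.0, growth=1.0, midpoint=4.0),
    "tactics":  dict(effort=1.0, ceiling=1.0, growth=1.0, midpoint=6.0),
}

# The heuristic: spend marginal effort at the level most sensitive to it.
best = max(levels, key=lambda name: marginal_value(**levels[name]))
assert best == "strategy"
```

With these toy numbers, the values curve has flattened out and the tactics curve has not yet taken off, so marginal effort is most valuable at the strategy level, matching the heuristic above.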
Implications of the model
First, this model does not imply that, at any point in time, we should invest all resources into a single level of research. Rather, it suggests where to spend our marginal effort, which depends on the stage we are in. It is often useful to keep some resources in an earlier type, because those resources have become specialized and may be in their best position. For example, moral philosophers who believe in longtermism and existential risk reduction may want to keep working on moral philosophy to improve the rigour of the arguments and potentially uncover new (though most likely more minor) considerations. Furthermore, insights down the line might give rise to new questions higher up, so we should maintain some capacity to answer these questions.
Second, even if most of the marginal expected value today lies within strategy research, it would be useful to invest some marginal resources into tactics research and even some into implementation. There might be some easy-to-uncover tactical insights applicable to a wide range of strategic plans, trying out some tactics research might illuminate some strategic uncertainties, and building the capacity to do tactics research allows for a faster response to strategic insight.
Third, the model assumes that research at each level also involves improvement and informing research. However, this does not mean that improvement, strategy, and strategy-informing research are equally represented in each phase. It is possible that early research involves more improvement than informing research or vice versa, but it is unclear what is more likely.
This model also addresses a common criticism that the effective altruism community frequently receives, namely that the community spends so much time thinking, discussing, and doing research, and so little time taking action. (This criticism is not completely off-mark: there is productive discussion and unproductive discussion.) It is tempting to reply by pointing out all the things the effective altruism community has achieved: moved money to effective charities, set up new organisations, et cetera. However, we can also give another answer based on this model: "Yes, currently we are still focusing on research. But we are progressing at what seems to be the appropriate speed, and we will increase the amount of implementation we do as we gain more clarity."
Other considerations that affect the value of strategy research
We believe the reasons in the previous section provide enough support for the claim that strategy research should be highly prioritized. However, there are additional important considerations that affect the strength of our claim. We believe they pose important questions, but have significant uncertainty about them. Analyzing these considerations and providing evidence for them is beyond the scope of this article. We welcome further discussion on these points.
How much time is there for strategic insights to compound or mature into implementation?
Before a robustly good strategy can be implemented, models need to be created and refined and crucial considerations need to be uncovered. This means that strategy research needs enough time to pay off.
The higher one’s credence that we will encounter an existentially risky event soon, such as the invention of human-level AI, the more likely it is that acting on our current best guess for handling existential risk is better than systematically creating a top-down strategy.
However, we (the authors) are significantly uncertain about the timelines of various existential risks, especially AI. Therefore we are reluctant to act as if timelines are short. Such short-term actions (e.g. raising the alarm without nuance, or trying to build a capable and reputable research field rapidly) often seem costly or harmful in the long term. In addition, many promising strategies can only affect existential risk on a medium or long timeframe. Even discounted by the probability that there is not enough time for them to be impactful, strategies with medium to long timeframes probably have a high expected value.
How likely are the strategic insights to affect concrete actions and the environment?
Information is only valuable if it eventually affects the world. It is possible that there is already enough actionable strategic knowledge available, but that only a few people are willing and able to act on it. In such a case, resources would be better spent on lobbying influential people so they make better decisions for the future of humanity, or on increasing the influence of people who are expected to make good decisions for the future of humanity.
We believe it is hard to assess how likely insights are to affect other actors. Lobbying influential people and coalition building could be the best action for some people. In addition, influence and coalition building may take decades, which would imply that early action on this front is valuable. Nonetheless, some strategy research also takes a long time to come to fruition.
How likely is it that there are hard-to-reverse developments that require immediate action?
Sometimes it is necessary to act on insufficient information, even if we would prefer to do more strategic analysis. Our hands may be forced by other actors that are about to take hard-to-reverse actions, such as implementing premature national AI policies. New policies by major actors could significantly limit the range of possible and desirable strategies in the future if these policies are implemented prematurely or with a lack of nuance. In cases where key decision makers cannot be persuaded to exercise ‘strategic restraint’, it may be beneficial to step in and do ‘damage control’ even if everything would have been better if no one had moved early.
We believe that some hard-to-reverse actions are in fact being taken, but only some actors could find good opportunities to effectively advocate strategic restraint or do ‘damage control’. Some could even create good conditions for further (strategic) action.
How could strategy research do harm?
Just like for every other project, it’s important to consider the possibility of doing harm. We identify the following three important ways strategy research might do harm.
Strategy research may carry information hazards. Some knowledge may be dangerous to discover, and some knowledge may be dangerous if it spreads to the wrong people. In mapping possible existential risks, strategy research may uncover new ways for humans to risk existential catastrophe. Sharing those possible risks could make them more likely to occur by inspiring malicious or careless actors. Another information hazard is when plans become known to actors with conflicting (instrumental) goals, which allows them to frustrate those plans. Some goals are more likely to conflict with other agents’ goals than others. We generally recommend against publicly identifying these conflicts, unless the other party is definitely already aware of you and your plans.
Strategy research may create strategic confusion. Badly executed or communicated research could confuse, rather than illuminate, important actors. Creating bad research makes it more difficult to find good research. Furthermore, strategy research could overstate the amount of strategic uncertainty and thereby excessively limit the behavior of careful actors while less careful actors could take the lead.
Strategy research may waste resources. It is hard to assess the expected value of specific strategy research projects, even after they have been completed, because it is difficult to trace consequences back to specific research projects. Even if strategy research is not worse than inaction, resources like money and talent still carry opportunity costs: they might have been used better elsewhere. We believe it is very likely that a number of projects are a waste of resources in this sense. This waste can be reduced by effective feedback loops, such as the evaluation of research organizations (like this one).
Discussion
The goal of this article was to describe strategy research more clearly and to argue that it should currently be given a high priority in the field of existential risk reduction. This article has introduced some terms and models that can increase our collective understanding of different research classes, as well as provide input for fruitful discussion. Based on our model, we proposed the heuristic to ‘do research at the highest level that is most sensitive to new information’. We believe that strategy research is currently this highest level in the field of existential risk reduction.
Recommendations
Our main recommendation is to expand the existential risk strategy field. We would like to see more strategy research from both existing and new actors in the field. What follows are some recommendations for particular groups. We encourage readers to come up with other implications.
Researchers: explore the big picture and share strategic considerations[10]
We recommend that current existential risk researchers grapple with the question of how their research focus fits within the larger picture. We especially encourage researchers to share their strategic insights and considerations in write-ups and blog posts, unless they pose information hazards. We believe most researchers have some implicit models which, when written up, would not meet the standards for academic publication. However, sharing them will allow these models to be built upon and improved by the community. This will also make it easier for outsiders, such as donors and aspiring researchers, to understand the crucial considerations within the field.
Research organizations: incentivize researchers
Research organizations should incentivize researchers to explore strategy research and to write up their ideas and findings in public venues, even when those ideas are provisional and therefore do not meet the standards for academic publication.
Donors: increase funding for existential risk strategy
We encourage donors to explore opportunities to fund new existential risk strategy organizations, as well as opportunities within existing organizations to do more strategy research. Given the newness of the research field and given that there are few established researchers, we believe this is currently a space to apply hits-based giving. Not all projects will pay off, but those that do will make a big difference. As funders learn and the field matures, we expect strategy research to become a 'safer bet'.
Effective altruists: learn, support, start
For those who aspire to move into existential risk strategy research, we recommend exploring one's fit by doing an internship with a strategy organization or by writing and sharing a simple model of a strategy-related topic. People with operations skills can make a large impact by supporting existing strategy research, or even by starting a new organization, since we believe there is room for more existential risk strategy organizations.
Limitations & further research
We have simplified a number of points in this article, and it contains a number of gaps that should be addressed in further research.
Focused on basics → elaborate on the details of strategy research
We have strived to make the basics of strategy research clear, but many details have been left out. Further research could delve deeper into the different parts of strategy research, assess which parts are most valuable, and examine how to do effective strategy research. This research could also disentangle the difference between 'narrow' and 'broad' strategy research that we allude to in footnote 4.
Focused on x-risk → assess the need for strategy research in other areas
This article, because it is written by Convergence, focuses on existential risk strategy. However, we could also have chosen to focus on effective altruism strategy, longtermism strategy, or AI strategy. Further research could approach the strategic question for a wider, narrower, or otherwise different high-level goal. For example, it appears that both community building and animal welfare would benefit greatly from more strategy research.
Incomplete risk analysis → research how strategy research can do harm
We have only briefly discussed how strategy research can do harm, and have argued that it is less likely to do harm because it is more indirect. Further research could investigate this claim further and draft guidelines to reduce the risk of harmful strategy research.
Conclusion
This article has explained, in part, why we believe strategy research is important and neglected. We hope it contributes towards strategic clarity for important goals such as reducing existential risk. Finally, we hope this article motivates other research groups, as well as donors and other effective altruists, to focus more on strategy research.
Acknowledgements
This post was written by Siebe Rozendal as a Research Assistant for Convergence in collaboration with Justin Shovelain, who provided many of the ideas, and David Kristoffersson, who did a lot of editorial work. We are especially grateful for the thorough feedback from Ben Harack, and also want to thank Tam Borine and Remmelt Ellen for their useful input.
Other high-level goals for longtermism have also been suggested, such as Beckstead’s “make path-dependent aspects of the far future go as well as possible.” ↩︎
Interestingly, animal-inclusive neartermism values do not have a clear analogue to the goal of 'minimize x-risk'. We understand that a focus on farm animals might not be the optimal goal, because it excludes the suffering of non-farm animals. ↩︎
Actors do not necessarily need to share the same values to have the same high-level goals. For example, many cause areas would benefit from an effective altruism community that is healthy, full of capable people, and strategically wise. ↩︎
Research often falls under multiple of these classes at the same time. For instance, research into how to build prudent national AI policies may be highly informing to strategy research (important to high-level strategy) and tactical (important to tactical questions of policy making) at the same time. Further, if a researcher is figuring out important improvement and informing issues for strategy, isn't that strategy research? We believe it is; we prefer a “broad” definition of strategy research. In contrast, a “narrow” definition of strategy research would refer only to pure questions of strategy construction. We think there are some important distinctions and tradeoffs here that we hope to illuminate in further work. ↩︎
That something is low-level does not mean it is not high quality, or not important. The level refers to the level of directness: how closely it informs action. ↩︎
Whether some research is between or within a cause area depends on how a ‘cause area’ is defined. However, just like the term ‘prioritization research’, different people use the term ‘cause area’ differently. In this article, we regard ‘existential risk reduction’ as a single cause area. ↩︎
Bostrom (2014). ‘Crucial Considerations and Wise Philanthropy.’ ↩︎
AI governance and AI policy are two related terms. Possibly, AI policy maps to AI-risk specific tactics research and AI governance maps to the combination of AI strategy and AI policy, but we are uncertain about this classification. We also advise against the use of the term ‘AI tactics research’ as it may sound adversarial/military-like. ↩︎
Actions during strategic uncertainty can be harmful, but trying to take action could still provide useful information. This is a good reason to focus current AI policy on the near- and medium-term; those policies will still yield a good (though smaller) amount of information while carrying significantly lower risk of doing long-term harm. ↩︎
Allan Dafoe, director of the Centre for the Governance of AI, has a different take: “Some problems are more important than others. However, we are sufficiently uncertain about what are the core problems that need to be solved that are precise enough and modular enough that they can be really focused on that I would recommend a different approach. Rather than try to find really the highest-leverage, most-neglected problem, I would advise people interested in working in this space to get a feel for the research landscape.” ↩︎
Thank you for this post. I'm usually wary of attempts to establish terminology unless there are clear demonstrations of its usefulness. However, in this case my impression is that public writing on related terms such as 'global priorities research' or 'macrostrategy' is sufficiently vague or ill-focused that I think this post might contribute to a valuable conversation. I'm not sure if the specific terms you're using here will catch on, but I'm happy to see this framework clearly spelled out.
A few reactions:
[Epistemic status: I've only thought about your post specifically for 1 minute, but about the broader issue of the marginal utility of different types of longtermist-relevant research for something between 1 and 1000 hours depending on how you count. Still, I don't think I have very crisp arguments or data to back up the following impressions. I think in the following I'm mostly simply stating my view rather than providing reasons to believe it.]
Thanks for your detailed comment, Max!
I agree, the "spine" glosses over a lot of the important dynamics.
Very good points. Both would indeed be highly valuable to the argument. As follow up posts, I'm considering writing up (1) concrete projects in strategy research that seem valuable, and (2) a research agenda.
Yeah, we're more optimistic than you here. I don't think it's possible to do useful completely "tactics and data free" strategy research. But I do think there is highly valuable strategy research to do that can be grounded with a smaller amount of tactics and data gathering.
What tactics research and data gathering is key? I think this is a strategic question and I think we're currently just scratching the surface.
I agree that it seems like that could easily be a bad use of time for "external researchers" to do that. I'm somewhat optimistic about these researchers examining sub-questions that would inform how to do the allocation.
I think the idea cluster of existential risk reduction was formed through something I'd call "research". I think, in a certain way, we need more work of this type. But it also needs to be different in some important way in order to create new valuable knowledge. We hope to do work of this nature.
Thank you for your response, David! One quick observation:
I agree that the current idea cluster of existential risk reduction was formed through research. However, it seems that one key difference between our views is: you seem to be optimistic that future research of this type (though different in some ways, as you say later) would uncover similarly useful insights, while I tend to think that the space of crucial considerations we can reliably identify with this type of research has been almost exhausted. (NB I think there are many more crucial considerations "out there", it's just that I'm skeptical we can find them.)
If this is right, then it seems we actually make different predictions about the future, and you could prove me wrong by delivering valuable strategy research outputs within the next few years.
Indeed! We hope we can deliver that sooner rather than later. Though foundational research may need time to properly come to fruition.
Hi Max, thank you for your engaging comment and sorry for the slow response! I'll try to address your points one by one.
I think we are more in agreement here than it seems (although I suspect we still disagree). We framed the spine as a one-way process for the sake of clarity, but it's definitely an iterative process in which much feedback is needed from lower levels of informing research! Still, I believe there is a lot of strategy research to be done - perhaps especially on questions that are not attractive for academic papers, such as which actions and institutions are needed for reducing x-risk.
I'm going to leave this question to David and Justin, since my collaboration with Convergence was only temporary and they are much better suited to talk about their research plans than I am.
These are all about AI (except maybe the one about China). Is that because you believe the easy and valuable wins are only there, or because you're most aware of those?
This is almost exactly the research question I will be looking at for my next project! (To be done at CSER as a summer research intern) I hope I can convince you once the research is done, or already with my research proposal ;)
I agree with you here. We used the term 'interacting' while we should have used 'affecting' or 'changing'. Simply interacting - being part of a system and/or observing it from the inside - can be very valuable and doesn't seem very risky if one doesn't try to make big changes. However, trying to affect/change the environment without sufficient strategic understanding could be very harmful.
My sense is that the best company strategies are informed by a host of strategy research and informing research from a group of employees and consultants. The discussions are of course enormously useful, but they give rise to questions that should be answered by research. In addition, I expect companies' strategies to be much better tuned to their goals than x-risk oriented organizations: companies have a very clear feedback mechanism (profit) that we lack.
My guess is that AI examples were most salient to me because AI has been the area I've thought about the most recently. I strongly suspect there are easy wins in other areas as well.
I'm really glad to see this post – it's what I was thinking of when I asked this question on the forum: https://forum.effectivealtruism.org/posts/icTEffSCdtLSoSrqi/might-the-ea-community-be-undervaluing-meta-research-on-how
The methodological diversity necessary to get any consilience in highly abstract areas makes it very hard for donors to evaluate such projects. Many of the ideas that form the basis of the AI memeplex, for instance, came from druggy-artist-scientists originally. So what happens in practice is that this stuff revolves around smoking gun type highly legible philosophical arguments, even though we know this is more hedgehog than fox, and that this guarantees we'll only, on average, prepare for dangers that large numbers of people can comprehend.
Concretely: the more money you have, the higher the variance on weird projects you should be funding. If the entire funding portfolio of the Gates Foundation consists of things almost everyone thinks sound like good ideas, that's a failure. It's understandable for small donors: you don't want to 'waste' all your money only to have nothing you fund work. But if you have 10 billion and thus need to spend 500 million to 1 billion a year just to keep your fund from growing, you should be spending a million here and there on things most people think are crazy (how quickly we forget concrete instances like the initial responses to the idea of shrinking objects to the nanoscale). This is a fairly straightforward porting of reasoning from startup land.