This is a crosspost for Why I don’t prioritize consciousness research by Magnus Vinding.
For altruists trying to reduce suffering, there is much to be said in favor of gaining a better understanding of consciousness. Not only may it lead to therapies that can mitigate suffering in the near term, but it may also help us in our large-scale prioritization efforts. For instance, clarifying which beings can feel pain is important for determining which causes and interventions we should be working on to best reduce suffering.
These points notwithstanding, my own view is that advancing consciousness research is not among the best uses of marginal resources for those seeking to reduce suffering. My aim in this post is to briefly explain why I hold this view.
Reason I: Scientific progress seems less contingent than other important endeavors
Scientific discoveries generally seem quite convergent, so much so that the same discovery is often made independently at roughly the same time (cf. examples of “multiple discovery”). This is not surprising: if we are trying to uncover an underlying truth — as per the standard story of science — we should expect our truth-seeking efforts to eventually converge upon the best explanation, provided that our hypotheses can be tested.
This is not to say that there is no contingency whatsoever in science, which there surely is — after all, the same discovery can be formalized in quite different ways (famous examples include the competing calculus notations of Newton and Leibniz, as well as distinct yet roughly equivalent formalisms of quantum mechanics). But the level of contingency in science still seems considerably lower than the level of contingency found in other domains, such as when it comes to which values people hold or what political frameworks they embrace.
To be clear, it is not that values and political frameworks are purely contingent either, as there is no doubt some level of convergence in these respects as well. Yet the convergence still seems significantly lower (and the contingency higher). For example, compare two of the most important events in the early 20th century in these respective domains: the formulation of the general theory of relativity (1915) and the communist revolution in Russia (roughly 1917-1922). While the formulation of the theory of general relativity did involve some contingency, particularly in terms of who and when, it seems extremely likely that the same theory would eventually have been formulated anyway (after all, many of Einstein’s other discoveries were made independently, roughly at the same time).
In comparison, the outcome of the Russian Revolution appears to have been far more contingent, and it seems that greater foreign intervention (as well as other factors) could easily have altered the outcome of the Russian Civil War, and thereby changed the course of history quite substantially.
This greater contingency of values and political systems compared to that of scientific progress suggests that we can generally make a greater counterfactual difference by focusing on the former, other things being equal.
Reason II: Consciousness research seems less neglected than other important endeavors
Besides contingency, it seems that there is a strong neglectedness case in favor of prioritizing the promotion of better values and political frameworks over the advancement of consciousness research.
After all, there are already many academic research centers that focus on consciousness research. By contrast, there is not a single academic research center that focuses primarily on the impartial reduction of suffering (e.g. at the level of values and political frameworks). To be sure, there is a lot of academic work that is relevant to the reduction of suffering, yet only a tiny fraction of this work adopts a comprehensive perspective that includes the suffering of all sentient beings across all time; and virtually none of it seeks to clarify optimal priorities relative to that perspective. Such impartial work seems exceedingly rare.
This difference in neglectedness likewise suggests that it is more effective to promote values and political frameworks that aim to reduce the suffering of all sentient beings — as well as to improve our strategic insights into effective suffering reduction — than to push for a better scientific understanding of consciousness.
Objection: The best consciousness research is also neglected
One might object that certain promising approaches to consciousness research (that we could support) are also extremely neglected, even if the larger field of consciousness research is not. Yet granting that this is true, I still think work on values and political frameworks (of the kind alluded to above) will be more neglected overall, considering the greater convergence of science compared to values and politics.
That is, the point regarding scientific convergence suggests that uniquely promising approaches to understanding consciousness are likely to be discovered eventually. Or at least it suggests that these promising approaches will be significantly less neglected than will efforts to promote values and political systems centered on effective suffering reduction for all sentient beings.
Reason III: Prioritizing the fundamental bottleneck — the willingness problem
Perhaps the greatest bottleneck to effective suffering reduction is humanity’s lack of willingness to this end. While most people may embrace ideals that give significant weight to the reduction of suffering in theory, the reality is that most of us tend to give relatively little priority to the reduction of suffering in terms of our revealed preferences and our willingness to pay for the avoidance of suffering (e.g. in our consumption choices).
In particular, there are various reasons to think that our (un)willingness to reduce suffering is a bigger bottleneck than is our (lack of) understanding of consciousness. For example, if we look at what are arguably the two biggest sources of suffering in the world today — factory farming and wild-animal suffering — it seems that the main bottleneck to human progress on both of these problems is a lack of willingness to reduce suffering, whereas a greater knowledge of consciousness does not appear to be a key bottleneck. After all, most people in the US already report that they believe many insects to be sentient, and a majority likewise agree that farmed animals have roughly the same ability to experience pain as humans. Beliefs about animal sentience per se thus do not appear to be a main bottleneck, as opposed to speciesist attitudes and institutions that disregard non-human suffering.
In general, it seems to me that the willingness problem is best tackled by direct attempts to address it, such as by promoting greater concern for suffering, by reducing the gap between our noble ideals and our often less than noble behavior, and by advancing institutions that reflect impartial concern for suffering to a greater extent. While a better understanding of consciousness may be helpful with respect to the willingness problem, it still seems unlikely to me that consciousness research is among the very best ways to address it.
Reason IV: A better understanding of consciousness might enable deliberate harm
A final reason to prioritize other pursuits over consciousness research is that a better understanding of consciousness comes with significant risks. That is, while a better understanding of consciousness would allow benevolent agents to reduce suffering, it may likewise allow malevolent agents to increase suffering.
This risk is yet another reason why it seems safer and more beneficial to focus directly on the willingness problem and the related problem of keeping malevolent agents out of power — problems that we have by no means found solutions to, and which we are not guaranteed to find solutions to in the future. Indeed, given how serious these problems are, and how little control we have with regard to risks of malevolent individuals in power — especially in autocratic states — it is worth being cautious about developing tools and insights that can potentially increase humanity’s ability to cause harm.
Objection: Consciousness research is the best way to address these problems
One might argue that consciousness research is ultimately the best way to address both the willingness problem and the risk of malevolent agents in power, or that it is the best way to solve at least one of those problems. Yet this seems doubtful to me, and like somewhat of a suspicious convergence. Given the vast range of possible interventions we could pursue to address these problems, we should be a priori skeptical of any intervention that we may propose as the best one, particularly when the path to impact is highly indirect.
Objection: We should be optimistic about solving these problems
Another argument in favor of consciousness research might be that we have reason to be optimistic about solving both the willingness problem and the malevolence problem, since the nature of selection pressure is about to change. Thanks to modern technological tools, benevolent agents will soon be able to design the world with greater foresight. We will deliberately choose genes and institutions to ensure that benevolence becomes realized to an ever greater extent, and in effect practically solve both the willingness problem and the malevolence problem.
But this argument seems to overlook two things. First, there is no guarantee that most humans will make actively benevolent choices, even if their choices will not be outright malevolent either. Most people may continue to optimize for things other than impartial benevolence, such as personal status and prestige, and they may continue to show relatively little concern for non-human beings.
Second, and perhaps more worryingly, modern technologies that enable intelligent foresight and deliberation for benevolent agents could be just as empowering for malevolent agents. The arms race between cooperators and exploiters is an ancient one, and I think we have strong reasons to doubt that this arms race will disappear in the next few decades or centuries. On the contrary, I believe we have good grounds to expect this arms race to intensify, which to my mind is all the more reason to focus directly on reducing the risks posed by malevolent agents, and to promote norms and institutions that favor cooperation. And again, I am skeptical that consciousness research is among the best ways to achieve these aims, even if it might be beneficial overall.
Acknowledgments
For their comments, I thank Tobias Baumann, Winston Oswald-Drummond, and Jacob Shwartz-Lucas.
Consciousness research seems to be very neglected to me, relative to its importance in understanding the world we live in. Nonhuman consciousness is especially neglected. Should it prioritized over other things? That seems to me to turn on tractability. Consciousness research doesn’t seem particularly tractable (though there are low hanging fruit), but neither does research to expand value systems and political frameworks to care about all sentient creatures.
Do you have instincts or perhaps even analysis about what interventions to expand "good" values look like? I am interested as my interest in values was how I came into EA thinking to begin with and have since thought more and more that it is too large a task to tackle. I know of the Sentience Institute but my feeling is that they are more about research and less about actually going out and spreading positive values.
Hi Ulrik,
Thanks for the question, and sharing you story! I do not think I have great insights here, but I can at least share some relevant resources (you may well be aware of them already, but they could still be useful to other readers):
I have the impression the field is still quite nascent, and share your sense that the above organisations are mostly doing research. CLR's guide has a section on approaches to s-risk reduction, but it seems to point towards further investigation, as opposed to specific interventions. Cooperative AI was what I found in the guide which seemed more like an intervention with direct applications, but it is targeted at improving cooperation among advanced AI models, and you may be looking for something more broad.
Maybe concern for the suffering of factory-farmed animals is a decent way to promote positive values, but I have not thought much about this. I mostly think the best animal welfare interventions are a super cost-effective way of decreasing nearterm suffering.
It would be useful to ensure that frontier AI models have good values. So I have wondered about whether people at frontier AI labs (namely OpenAI, Anthropic, and Deepmind) and organisations like Arc Evals should be running a few tests to see assess:
I have not checked whether people are looking into this, but it seems worth it as a more empirical type of intervention to influence future values. Then maybe the models could be (partly) aligned based on their views on questions like the above.
I really like your suggestion on just plain AI value alignment perhaps being the most effective. In a sense, even though these are perilous times we do perhaps have the opportunity to massively impact the values of millions of machine intelligences so that even though humans stick with pretty much the same values (something I perceive being really hard to change), we will "improve" the average values globally. Thanks for your thoughtful response - it certainly have made me see this issue from a new perspective!
Thanks for the kind words, and also for clarifying my own views!