Epistemic Status: I've researched this broad topic for a couple of years. See Part I for details of what I've read so far and my opinions on it. This project will be my PhD thesis. I have some of the main ideas drafted out. However, I still have most of the PhD to go, so many of the opinions and particular formulations here are subject to change.
Disclaimer: I have received some funding through a Forethought Foundation Fellowship in support of my PhD research, but all the opinions expressed here are my own.
Index.
Part I - Bibliography Review
Part II - Seven Philosophical Takes and Opinions
Future parts will discuss: moral circle expansion, normative theories of progress, empirical takes and opinions, possibilities for measuring moral progress, and policy implications. They will be published as my research progresses.
Introduction.
This is my second research-related post, building on the previous Bibliography Review on the topic of Moral Progress, which I've been working on for a couple of years.
As I outlined in the introduction to the first post, since the finished book manuscript won't be ready until around 2026, I want to post some preliminary takes and opinions I have on the subject of Moral Progress. These are rough bullet points that I aim to develop further in the manuscript that should eventually become a book.
However, I should say that in academia there is always the danger of "getting scooped". I don't want to outline my ideas in full detail before they are published in a journal or a book, since someone could take them, rewrite them a bit, and publish them under their own name. It's a bit of an odd position to be in, so I hope you can forgive some level of vagueness.
Seven Philosophical Takes and Opinions.
Okay, so the first post had a lot of books and articles. But what have I found out so far? What are my opinions?
Well, keep in mind that whatever I say here is preliminary and subject to change. But the key points from my work that I have written about so far, or that I aim to develop in writing, include the following. They are presented here as "hot takes" or opinions that I will develop and defend further in the future, but I hope they will help guide your thinking on the topic of moral progress.
I present them in an order similar to how I present them in the PhD thesis, which I think makes the most sense: it starts from the conceptual and philosophical issues, which I explore here, and moves to the empirical questions and policy recommendations at the end, which I'll explore in later posts.
(1) There are at least three different forms of skepticism against the idea of moral progress. But I believe they're all wrong.
In my work, I aim to distinguish at least three importantly different ways in which somebody might want to resist accepting the idea of moral progress: (1) The Liberal Skeptic, (2) The Illiberal Skeptic, and (3) The Metaethical Skeptic.
(1) The Liberal Skeptic, who disagrees on empirical grounds. They might think that if the world became more liberal (in a broad sense, e.g. respected individual human rights more), that would be a good thing, but that empirically the world hasn't actually become better in this way.
My main criticism of such views is that I simply don't see how this position is tenable on empirical grounds. Just take a look at what Pinker (2011, 2018), Rosling (2017), Norberg (2016), or many economists have written. Or take a look at data on the state of the world from the World Bank, Our World in Data, or Gapminder.
You don't have to agree on every detail, but I think certain plain facts are staring us in the face. These are cases like the abolition of slavery and the lessening of discrimination against people of color, women's suffrage and feminism, the reduction of cruel punishments like public torture/executions and foot binding, plausibly animal rights and welfare, and plausibly care for future generations. For other forms of discrimination, just take a tour through Wikipedia on the topic.
Importantly, the goodness of these improvements doesn't depend on adhering to any controversial moral theory; they are accepted as improvements by most plausible versions of common-sense ethics, consequentialism, deontology, virtue ethics, contractualism, etc.
So my claim is that Liberal Skeptics are simply empirically misinformed, perhaps because they are cognitively biased. Rosling in Factfulness outlines some plausible biases that could be in effect.
I believe most EAs aren't liberal skeptics; they think these points are trivial. Yet I have faced quite a bit of pushback when presenting the empirical facts of progress in venues other than EA. People tend to resist many of the empirical facts or their interpretation, or try to emphasize the exceptions and bad facts (such as inequality, climate change, and catastrophic or existential risk, which is fair in some cases, but people talk as if the amount of bad things in the world were somehow constant!). Therefore, many books on the topic of moral and social progress might require a section on the advances we have achieved so far, because most people are just deeply wrong about the state of the world.
Some other liberal skeptics disagree with the idea of "moral progress" because they misunderstand what a theory of progress entails. They might think believing in progress entails believing in linear progress without any instances of regress, or that it entails believing in "iron laws" of progress (called historicism in the philosophy of history), or that it entails believing in teleology, or the idea that history has a final end state.
Such views of progress were indeed common from the ancient era until quite recently. But I believe a plausible naturalistic theory of progress should commit to none of these. The empirical study of human behavior in the social sciences has greatly superseded these problematic philosophical notions.
So none of that proves the Liberal Skeptic right in their skepticism. Rather, we should restrict our theory of progress to more plausible claims, such as that history has rough patterns or mechanisms that can be studied through social science, and that, empirically, we can show that we have morally improved in our beliefs and/or behaviors. So the skepticism is not warranted.
There are some weaker versions of this Liberal Skeptic claim that I can deal with, such as the objection that slavery abolition, women's suffrage, etc. were low-hanging fruit and we haven't come far. I actually accept some weaker forms of this claim, so it's not contrary to my theory of progress.
Another objection is that these problems are self-generated (humans invented slavery, and then humans got rid of it, so we're "back to the start"). In a sense, that's right, but slavery has been around for a very long time, and we shouldn't be misled into thinking that life in pre-state societies was morally just or pleasant. I think we have enough evidence that tribal warfare (e.g. Our World in Data on archeological violence) and even the behavior of apes (see the work of Frans de Waal or Christopher Boehm) were quite brutal.
Here I have only talked about humans. A further objection is that by expanding human power, we have also expanded our capacity to do harm, which has grown faster than the moral good we have done, particularly if we take non-human animals into account. If we take the suffering of animals such as factory-farmed chickens, fish, or cows seriously, we might conclude that we are causing more harm than good just due to factory farming, probably even if we apply a 1,000-to-1 discount rate. Humans kill 56 billion animals for food each year! (At that discount, those deaths would still weigh like 56 million human deaths per year.) (Thanks to Arturo Macias in the comments for helping make this explicit!)
I don't really have a counterargument to this last point. I guess I agree with it. It seems like big-number considerations like animal suffering (and perhaps longtermism) can really flip the direction of the discussion from a consequentialist standpoint. Perhaps one thing to mention is that it seems unlikely that we have really changed our moral behavior towards animals: it's the same behavior of killing them that we have always engaged in, just greatly amplified by technology. So it seems a form of social regress, rather than moral regress in the narrow sense.
(2) The Illiberal Skeptic. This is the conservative or reactionary, as well as the proponent of accounts of politics deemed perfectionist. The Stanford Encyclopedia of Philosophy describes perfectionists as "writers [that] advance an objective account of the good and then develop an account of ethics and/or politics that is informed by this account of the good. Different perfectionist writers propose different accounts of the good and arrive at different ethical and political conclusions. But all perfectionists defend an account of the good that is objective in the sense that it identifies states of affairs, activities, and/or relationships as good in themselves and not good in virtue of the fact that they are desired or enjoyed by human beings."
I haven't yet written much about why they're wrong, but basically the idea within contemporary political philosophy is that such views are problematic or flawed: they go too far in imposing one particular view of "the good life". Pretty much all anglophone, Rawls-inspired accounts of political philosophy reject such views, so this rejection of perfectionism is not mine alone.
Let me give the quick gist. Perhaps the most prominent type of perfectionism in recent decades has been communitarianism, defended by authors such as Michael Sandel, Alasdair MacIntyre, and Charles Taylor against Rawlsian liberals in the 80s.
I believe there was a lot of confusion in these debates. For example, communitarians said that liberals had a view of the self that was too atomized and individualistic. But it wasn't exactly clear what the communitarians were claiming the liberals were defending, and a lot of the debate consisted of liberals saying "No, we're not saying this". For more details, see Simon Caney's "Liberalism and communitarianism: a misconceived debate" or Allen Buchanan's "Assessing the communitarian critique of liberalism".
(3) The Metaethical Skeptic. This is the error theorist, moral nihilist, or extreme moral relativist, who doesn't want to adhere to any notion of moral progress.
To refute this skeptic, I think there are reasons why the notion of moral progress is more attractive, and less metaphysically and epistemologically weird or suspicious, than the notion of moral truth. I think one way to make the error theorist accept a notion of progress is to relativize it. We can achieve a lot by asking the error theorist what they care about, even in a merely instrumental way. It is likely that they care about human welfare (or the welfare of all sentient beings), happy mental states, flourishing, or achieving human capabilities. These things are not metaphysically suspect; they're much less problematic, and the error theorist already accepts them in their practical life and moral judgements.
Then we simply say that we can make a conditional statement in the form "If you care about human welfare/flourishing/capabilities, then you should call the change from state of affairs X with less human welfare to state of affairs Y with more human welfare an instance of moral progress".
Of course, this by itself is not enough to rule out error theorists who want to increase suffering and label increased suffering as progress. I think I'm OK with letting that part of the debate go; it seems outside the scope of my work to deal with such views. But forms of skepticism that say "creating pain is morally good" seem implausible, and need an argument in their favor.
There's more we can say against the metaethical skeptic, but I think one of the main things we can do is to develop an attractive and compelling view or theory of moral progress that even a skeptic might want to adopt.
(2) There are many interesting conceptual distinctions at play, such as Individual/Collective, Local/Global, Beliefs/Practices, Moral/Social...
Moral progress is not one simple thing to analyze. It is not a monolith, in the same way that scientific progress and technological progress are not monoliths either, as shown by the people working in Progress Studies (just see how big the bibliography is!).
In fact, I believe moral and social progress is an entire area of research that might deserve its own dedicated promotion and funding. It overlaps with normative moral philosophy when we are making normative judgements, but also with many of the social sciences, such as economics, sociology, anthropology, psychology, neuroscience, and primatology, when talking about the empirical details.
Just to help you navigate these waters, some simple key distinctions to keep in mind on the topic of moral and social progress include:
(1) Individual vs Societal vs Global Progress. An intuitive distinction is the level of analysis that we are talking about. Are we analyzing the moral beliefs and actions of people, of entire societies (groups, towns, nation-states...) or of the entire world?
There are arguments for committing to a form of methodological individualism, because it would seem odd to say that all individuals in a society are morally progressive yet the society, which is made up of a collection of these individuals, is regressive. So it seems like Societal or Global Progress supervenes or is grounded on individual beliefs and actions. (Though judgement aggregation has technical problems, see here.)
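To give a flavor of the kind of technical problem I have in mind, here is a minimal sketch of the classic discursive dilemma in judgment aggregation. The propositions and voters are just placeholders for illustration:

```python
# Minimal illustration of the discursive dilemma: proposition-wise majority
# voting over individually consistent judgment sets can yield a collectively
# inconsistent judgment set.
from collections import Counter

# Three individuals judge two premises (p, q) and the conclusion (p and q).
# Each individual judgment set is internally consistent.
judgments = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

def majority(prop):
    votes = Counter(person[prop] for person in judgments)
    return votes[True] > votes[False]

collective = {prop: majority(prop) for prop in ["p", "q", "p_and_q"]}
print(collective)  # {'p': True, 'q': True, 'p_and_q': False}

# The group accepts p and q but rejects (p and q): inconsistent, even though
# every individual was consistent. Moving from individual to collective moral
# judgments is therefore not a trivial matter of adding things up.
print(collective["p_and_q"] == (collective["p"] and collective["q"]))  # False
```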
But in practice, there might be reasons to analyze Societal Progress and Global Progress at the macro level, from a sociological standpoint, rather than as a collection of many individual psychological states, beliefs, or behaviors. This might remind some of the issue of whether macroeconomics is reducible to microeconomics; it's a somewhat similar issue.
One reason is that institutions, laws, social pressures, and customs make people behave in ways that can run counter to their personal moral beliefs, through mechanisms such as fear of authority and punishment, conformism, or lack of information.
So I'd analyze them separately. Individual Progress would be focused on an individual's ethical development, such as coming to have better moral beliefs or attitudes. One can become more morally impartial, achieve greater coherence in one's moral system, or become more empathetic or tolerant. This change is often driven by personal reflection, moral education, or life experiences, and typically just impacts the individual and their immediate surroundings (unless they have a lot of power or a big sphere of influence, such as by having a lot of money or being a politician).
Meanwhile, Collective Progress involves broader ethical improvements within a community or society, like the recognition of human rights or the social acceptance of marginalized groups. While it might appear as an aggregate of individual progress, collective progress often results from distinct mechanisms like cultural shifts, legislation, or social movements. Its impact is more far-reaching when it gets organized, influencing national or international policies.
The mechanisms through which they operate also differ: individual progress might stem from cognitive dissonance leading to moral reform, whereas collective progress can emerge from societal changes or the organized actions of social movements.
The rate of progress will also likely differ: individuals might experience sudden moral epiphanies (e.g. a meat-eater becoming vegan after reading Peter Singer), while societal shifts generally take longer, having to overcome systemic barriers and the inertia of the status quo (e.g. banning factory farming can take decades).
(2) Progress in Beliefs vs Progress in Practices. Progress in Beliefs refers to changes in the content of moral beliefs, as the name suggests. Progress in Practices relates to how individuals act and what habits they have. Cases of progress in belief without progress in practice include the classic cases of weakness of the will (akrasia), such as a person who deems meat eating to be immoral but who hasn't managed to change their habits, because their moral motivations haven't overridden their other motivations, such as their desire or habit for eating meat.
Furthermore, Progress in Practices perhaps also includes changes in the structure or method of moral reasoning, such as being open to listening to moral arguments and reasons, without necessarily altering the content of the first-order moral beliefs themselves (e.g. still being a meat eater). An example might be a society adopting more systematic or democratic ways of making moral decisions, which over time leads to progress in beliefs or in other practices. (See ideas such as Communicative Rationality and Deliberative Democracy in the work of Jürgen Habermas and Seyla Benhabib for some inspiration.)
(3) Narrow Intentional Moral Progress vs Wide Unintentional Social Progress. By Moral Progress in the strict sense, I mean narrow, intentional, deliberate advancements in moral thinking, typically led by individuals engaged in ethical discourse, such as philosophers, journalists, policymakers, and intellectuals. It's not about broad societal shifts but about targeted developments in ethical belief, knowledge, or understanding, like formulating new ethical theories or applying moral reasoning to novel problems. Consider the concepts of rights, deontology, trolley problems, the experience machine, or population ethics. These concepts were developed by a particular person or group of people at some point in time. (That doesn't mean they were invented; you could argue rights were simply discovered. Here I'm just referring to the conceptual discovery in an epistemic sense, in the same way that Newton formulated the theory of gravity.)
On the other hand, by Social Progress I mean the wide and unintentional progress that encompasses social advancements occurring independently of explicit moral intentions. Unlike moral progress, social progress often results from other types of developments, like economic or technological changes. For example, if people like Robert Wright, Steven Pinker, and others are to be believed, an increase in societal wealth can lead to reduced violence and improved living conditions, exemplifying how material circumstances and non-moral factors can promote a more ethical society.
A complicated example, I believe, is how the rise of the bourgeoisie, driven by economic interests, paved the way for major moral transformations during the Enlightenment, the American Revolution, the French Revolution, etc.
In this sense, favorable ethical outcomes can arise from actions that are not explicitly driven by moral considerations. Social progress is driven by social, institutional, or technological innovation, and it can circumvent moral dilemmas that arise under conditions of material scarcity. We can avoid difficult distributive dilemmas about how to allocate food, organs, and wealth if we are able to achieve a post-scarcity society where those needs are covered. This might be a crucial reason why many ethical developments (Universal Human Rights, greater cosmopolitanism, more care for animal welfare...) have taken place after the Industrial Revolution.
(3) Explicit moral philosophizing perhaps hasn't mattered that much historically in creating moral improvements, but it will probably become more important in the future.
Following from the previous point, I think that wide social progress has been more impactful in the past, while the impact of narrow moral progress might slowly become stronger in the future.
Regarding moral progress, you could say that people like Locke, Kant, and Mill (for liberalism), Hegel and Marx (for Marxism), and Hayek (for neoliberalism) have been massively influential philosophers in the real world through their ideologies (for better or worse). I freely grant that point.
But I think that many of the moral improvements that we consider clear or paradigmatic, like the abolition of slavery, women's suffrage and feminism, the rise of cosmopolitanism, the animal welfare movement, and longtermism, are actually not that philosophically deep, and are obtained through a relatively simple form of philosophical reasoning. So moral progress in the sense of deep moral thinking has probably not been the main driver. Here's the basic argumentative structure for all of them:
- Agents A and B should be treated equally unless there are morally relevant differences between A and B.
- There are no morally relevant differences between A and B.
- So Agents A and B should be treated equally.
Well, that wasn't so difficult: I just saved you over 200 years of social struggle over slavery abolition, women's rights, cosmopolitanism, animal rights, and longtermism right there!
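For what it's worth, here is one schematic way of rendering that shared structure (a rough first-order gloss; the predicate names are just placeholders):

```latex
% P1: a consistency principle; P2: the claim doing the real historical work; C: the conclusion.
\begin{align*}
\text{P1:}\quad & \forall x\,\forall y\ \Big[\neg\exists D\ \big(\mathrm{MorallyRelevant}(D)\wedge\mathrm{Distinguishes}(D,x,y)\big)\rightarrow\mathrm{EqualTreatment}(x,y)\Big]\\
\text{P2:}\quad & \neg\exists D\ \big(\mathrm{MorallyRelevant}(D)\wedge\mathrm{Distinguishes}(D,a,b)\big)\\
\text{C:}\quad & \therefore\ \mathrm{EqualTreatment}(a,b)
\end{align*}
```

Instantiate a and b as enslaved and free people, women and men, foreigners and compatriots, animals and humans, or future and present people, and almost all of the historical struggle has been over getting people to accept P2 in each case.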
What has been the problem, then? The issue has been, first, what Kitcher (2021) calls altruism failures, which are failures to consider the position of others at all. Most people in human history didn't seriously ponder the moral standing of slaves, women, enemy nations, or future generations. If you travelled back in time and proposed human rights or universal suffrage, people would have mocked you. For example, when Mary Wollstonecraft wrote A Vindication of the Rights of Woman, one of the first feminist texts, Thomas Taylor wrote A Vindication of the Rights of Brutes, arguing that we might as well give rights to animals too!
Second, the dominant party benefited. Slaveowners benefited from slaves, men from the subordination of women, dominant nations from subjugated ones, humans from nonhuman animals, and current generations from future ones, and they rule or have ruled with an iron fist. They had the power, and people don't want to upset the status quo, particularly when it benefits them.
Third, when the dominant party starts to be questioned, they try to argue that there are morally relevant differences because there are empirical differences. Slaveowners and men will claim that black people and women are inferior or unintelligent, and try to pass their prejudices off as science. Over time, they get caught: the observations pile up and the science proves them wrong. But such a change in views takes time to be widely accepted in society.
This is a bit different for the moral cases of cosmopolitanism, animals, and longtermism, which have to do with a rise in the importance of moral impartiality. Let me offer some quick ideas. It's fine if you don't strictly accept them. I'm just hinting at some possibilities.
For cosmopolitanism, I believe an argument from the veil of ignorance applied across all nations, holding that the place where you're born is morally arbitrary and thus the existing order is unjustified, can do a lot of the heavy lifting. (Rawls himself didn't like this approach in The Law of Peoples, which was heavily criticized for not extending his veil of ignorance argument across nations; I believe Brian Barry and others developed that extension.)
For animal welfare, Singer's argument from marginal cases, the argument that there are disabled people who are less capable of things like rational thought than some nonhuman animals, has done a great part of the work. Additionally, Mark Rowlands has a veil of ignorance argument for animals: behind the veil, it is morally arbitrary which animal you're born as. (I believe it was here?)
For longtermism, the jury is still out, but I believe that the fact that future people don't have preferences (or mental-state welfare) right now doesn't mean that they won't have preferences in the future that should also be respected. So I believe we should maintain moral impartiality towards future generations (discounted for uncertainty, but that's an empirical discount, not a moral one). This is a very strong claim! But I believe it's not that different from cosmopolitanism: one is impartiality in space, the other in time. There can also be a veil of ignorance argument for this. Why is it relevant, from a sort-of impartial "point of view of the universe", whether you're born in 2000 or 2050? From the moral point of view of impartial evaluation, it's all the same.
It's fine if I didn't convince you with my arguments. But a big part of the point is that they all share an argumentative structure: a move towards moral impartiality on the grounds that the alleged differences are morally irrelevant. If you look at it that way, moral philosophy seems quite unimpressive so far.
So I think what we have been doing until now in the history of ethics is picking low-hanging ethical fruit. Once we pick most of it, ethics will get subtler and harder, and only AI ethicists (or ethicists and AIs working as a team) will be able to solve ethical dilemmas, since these will get quite complicated: such systems could cover a massive number of thought-experiment variations in seconds and apply consistency reasoning, reflective equilibrium, and other methods relevant to moral reflection.
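To make that a bit more concrete, here is a purely hypothetical toy sketch of what mechanical "consistency reasoning" over many case variations could look like. The case features and verdicts below are invented for illustration; this is not a claim about how such systems would actually be built:

```python
# Hypothetical toy sketch: flag pairs of moral cases that agree on all the
# features treated as morally relevant but receive different verdicts. A real
# system would need far richer case representations; this only illustrates
# the bare consistency check.
from itertools import combinations

MORALLY_RELEVANT = ["harms_someone", "consent_given", "alternative_available"]

cases = [
    {"name": "trolley_switch",     "harms_someone": True, "consent_given": False,
     "alternative_available": False, "verdict": "permissible"},
    {"name": "trolley_footbridge", "harms_someone": True, "consent_given": False,
     "alternative_available": False, "verdict": "impermissible"},
    {"name": "donation_refusal",   "harms_someone": True, "consent_given": False,
     "alternative_available": True,  "verdict": "permissible"},
]

def relevant_profile(case):
    return tuple(case[feature] for feature in MORALLY_RELEVANT)

# Two cases with identical morally relevant profiles but different verdicts
# signal either an inconsistency or a missing morally relevant feature
# (e.g. "using someone as a means" in the footbridge case).
for a, b in combinations(cases, 2):
    if relevant_profile(a) == relevant_profile(b) and a["verdict"] != b["verdict"]:
        print(f"Tension: {a['name']} vs {b['name']} share all listed features "
              f"but get verdicts {a['verdict']!r} vs {b['verdict']!r}")
```

Scaled up over millions of generated case variations, that kind of bookkeeping is exactly the part where AI assistance seems more plausible than unaided human reflection.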
(4) The relative (or differential) speed of progress in different domains might be what matters for a flourishing future.
Toby Ord has made this point in this EAG talk, presenting an unpublished paper he's been working on for a long time. To summarize his talk in a key takeaway: advancing progress across the board, in all domains, just accelerates the pace of humanity. Earlier humans will enjoy higher welfare, but humanity becomes extinct faster. So promoting all kinds of progress equally doesn't change overall human welfare, and doesn't change things from a morally impartial perspective. We get higher welfare sooner rather than later, but we also get dangerous toys (dangerous AGI, easily manufactured pandemics...) sooner rather than later.
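One toy way to see the uniform-acceleration point (my own gloss, not Ord's actual model): let w(t) be humanity's welfare rate over time and T the time of extinction, with no pure time discounting.

```latex
% Total value of the default trajectory:
V = \int_0^{T} w(t)\,dt
% Uniform acceleration by a factor a > 1: the same history is lived out in
% 1/a of the calendar time, so the welfare rate scales by a and extinction
% arrives at T/a. Substituting u = at:
V_a = \int_0^{T/a} a\, w(at)\,dt = \int_0^{T} w(u)\,du = V
```

So speeding everything up uniformly leaves total value unchanged; what changes value is differential progress, which alters the shape of the trajectory or pushes extinction further out.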
What we should aim for, instead, is differential progress. That is, accelerating some domains of progress so that they arrive comparatively earlier than others, providing us with safe or protective technologies, social norms, and institutions first, before we develop the dangerous ones. This is a reason why we might want to accelerate social and moral progress, as well as safety technologies, before we reach potentially unsafe technologies. (See also Bostrom's principle of differential technological development.)
This serves as an argument for why we might want to speed up moral progress (and related forms, such as improving our institutions) rather than speeding up technological progress, particularly of potentially dangerous technologies.
A bit of a caveat, however. It's worth keeping in mind that the interactions between material conditions such as socioeconomic arrangements (e.g. the development of capitalism) and technologies (e.g. the industrial revolution, AGI) tend to have an effect on our moral and political beliefs in ways we cannot really yet predict. (Particularly if cultural evolutionists like Henrich 2020 are to be believed).
(5) Progress and Secular Ethics.
Theories of progress attempt to be morally uncontroversial, remaining uncommitted to particular theories within normative ethics (deontology, consequentialism, virtue ethics, contractualism, etc.). So I will not enter this discussion here.
But let me say something about the relationship between religion and morality, which is a complex one, to say the least. Derek Parfit claimed at the very end of Reasons and Persons that we might be at the beginning of moral history, given that secular morality has only recently come apart from religion. While there are exceptions, let's just say ethics has been heavily religious, or heavily influenced by religion, for most of human history.
Let me just give some broad guidelines for secular progressive ethics, from observations that come from the more empirical side of things, rather than from moral controversy. These ideas need further development, and some people have been working on them a little, but I'd like to see more work pushing and developing them further.
(1) Secular humanism, a la Voltaire, Pinker, and others. This is quite a vague set of values that has actually evolved over time, but it has something to do with human (and nonhuman) flourishing as one of the main pillars of secular ethics. One advantage is that such values can be rationally criticised and revised in the light of new moral beliefs and evidence.
(2) Abandoning historicism, the idea that history unfolds according to a plan, whether laid down by God (Kant's philosophy of history), by the Collective Spirit of Humanity (Hegel's philosophy of history), or by material conditions (Marx's philosophy of history), operating as an "iron law". Summary here.
I think such historicism is implausible. But whatever your beliefs, secular moral discussion should not rely on it. So we should abandon strong teleology/historicism and adopt a naturalized teleology, or no teleology at all. The claim is something like: "Society improves, but it does so contingently, with lots of bumps along the road (think of the Nazis or other totalitarians)".
(3) Darwinism is the "universal acid", as Daniel Dennett (1995) claims. A post-Darwinian, agnostic world should be more fallibilist towards the values it holds. This means taking into consideration how evolution has shaped our moral predispositions towards a nepotistic bias, because we lived in scarcity conditions for most of evolutionary history. Evolutionary theory should be an important source of insight for descriptive morality.
Vaguely related: I've heard some people call the American pragmatist movement a second enlightenment in the light of Darwin (e.g. the first section of Robert Brandom's reading of American Pragmatism). I think this is one reason why many writers on progress, such as Philip Kitcher, are also pragmatists in the American tradition.
Keep in mind that taking Darwin seriously definitely doesn't mean "use evolution as a normative guide to morality", as Herbert Spencer and the social Darwinists thought. In fact, we should...
(4) Combat the appeal to nature fallacy. What is natural is not necessarily good: think of poison, disease, homelessness, or poverty; they're the natural state of things. And what's artificial is not necessarily bad: think of medicine, housing, wealth, or technology; they're artificial. Here's a good summary of that point.
Going further, this basically means that stuff like transhumanism is on the table for our moral future. Human nature shouldn't be a hard constraint on our moral thinking, since we can shape our environments and ourselves.
(5) Building on that, moral intuitions change with material conditions. What was considered alien and weird, like interracial or gay marriage some decades ago, or transhumanism right now, will likely be seen as normal by future generations that grow up in such societies.
A sort-of corollary of that: we should be careful about value lock-in. We wouldn't have wanted our ancestors to lock in their racist, sexist, nepotistic, or speciesist values upon us. So we should keep options open for changes in our future morality. Given the odds, we're probably committing moral atrocities we haven't even considered.
(6) The term "Progress" faces risks in the wrong hands.
When talking about moral progress, we sometimes make moral claims about societies or eras being better than others. And it seems reasonable that we might want to make all-things-considered judgements, particularly with relatively clear-cut statements like "Nazi beliefs were morally bad", or "Genocide is an atrocity". I think a theory of progress that is unable to make those claims is probably incomplete. Though some authors don't want to commit to general statements of progress.
But, of course, replace the Nazis or those committing genocide with a more morally neutral contemporary culture and things can become problematic very quickly. What I fear is that, historically, people have used claims about "bringing progress or civilization to barbarians" as a way to justify colonization or imperialism. In other words, theories of progress can be co-opted for nefarious supremacist uses; we have seen many cases of trying to "export civilization" through force and colonization. To this, I want to make several points:
(1) Making truthful all-things-considered judgements of this sweeping kind is hard. It can already be hard to compare two individuals across a single domain of morality, so imagine comparing entire nations across all domains of morality. I believe it's possible, but much harder than the quick judgements we often make, and we should be careful.
(2) Progress doesn't mean imposing. Questions about imposition are further questions dealing with difficult matters such as political authority, international interventionism, and paternalism, among others. Given the awful track record of colonial expansion, I think it's extremely reasonable to have a strong heuristic or precautionary principle against foreign intervention by force.
(3) I fear that many lessons learned from good social movements can also be used by bad, regressive social movements. The issue is that most of the mechanisms that drive moral progress are value-neutral: you can organize a social movement for a moral good or for a moral atrocity, and most of the lessons learned can be used either way.
Hopefully there are some exceptions. The areas that are closer to reason and moral philosophizing might have more layers of defense against being captured by rhetoric.
(7) Practical difficulties with working on the topic of Moral Progress.
Before I finish, I also wanted to give some practical takeaways for other researchers that might want to work on this topic in the future.
Research in this area is difficult because it is very interdisciplinary and broad. It has to put together knowledge from very disparate areas and merge it into a big interdisciplinary synthetic theory. So far, I have drawn insights from primatology (e.g. Frans de Waal), anthropology (particularly theories of cultural evolution like Joseph Henrich's), sociology, developmental psychology (e.g. Michael Tomasello), and many others. If we want more theorizing on progress, we probably need interdisciplinary teams to work on it. I see some minor progress among researchers working under the broad umbrella of "cultural evolution", but there is more work on the cultural evolution of technologies than on the cultural evolution of norms. I don't see people from Progress Studies doing a lot on the cultural evolution of norms, either.
Sadly, no single academic field matches the desired target. If you're a social scientist, you're allowed to use more quantitative tools, but you might be unable to make the judgement calls required on the normative side, such as how to trade off different forms of moral progress. If you're a moral philosopher, you get more freedom to make normative claims about what is good or bad, but peer reviewers will probably push you away from using too much quantitative data.
All this means that I believe it's hard to publish academic papers on this topic. For example, I find it difficult to write standalone papers, because if you write about moral circle expansion, somebody might disagree that moral progress is a thing at all because of anti-realist metaethics, or because they have a skeptical moral epistemology. So there's always a way to escape the debate if you aren't systematic in your work with a book, unless you have a narrow, constrained insight. Some good examples of narrow piecemeal insights are Moody-Adams (1999), Buchanan and Powell (2015, 2016), and Hanno Sauer (2022), but eventually their smaller insights were expanded into full-blown theories in books, such as Moody-Adams (2023), Buchanan and Powell (2018), and Hanno Sauer (2023).
So building a general theory of moral progress is hard. If you want more depth, my practical suggestion is to go for narrower questions dealing with particular aspects of progress instead. Here are some inspirations from posts I've seen in the EA Forum recently. I like questions such as:
- "How human governance institutions (e.g. AI) will keep up if AI leads to explosive growth?" (from Memo on some neglected topics by Lukas Finnveden)
- "Positive [utopic] visions about how we could end up on a good path towards becoming a society that makes wise and kind decisions about what to do with the resources accessible to us" (from Memo on some neglected topics by Lukas Finnveden)
- Civilizational collapses (Recommended for historians and economic historians, perhaps?)
- Particular questions within Historical Persistence and Contingency, as well as Most Important Historical Trends.
- International Tax Policy as a Potential Cause Area, by Tax Geek.
- The possibility and moral importance of AI sentience.
- and a long etc.
These are still big, ambitious questions that deal with moral and social progress indirectly. I'm sure there are plenty more. Consider also the topics outlined by Effective Thesis. Feel free to mention some others and I might add them here.
Conclusion.
Those are some takeaways from my philosophical exploration of the topic of moral progress. They're not all of them, and I might develop some other themes later on.
In a few months, I will write on the empirical side of things, about the history and mechanisms that drive moral progress (and regress). I'm currently outlining that chapter. After that, there might be some suggestions about how to measure moral progress. And after that, some broad suggestions on what it means in terms of policy implications.
Contact Information.
If you work in this area within EA or moral philosophy, or just wanna chat, please get in touch with me. Feel free to DM me on the Forum. Also follow me on Twitter for fun EA memes and chitchat.
In your argument for (3), I think I accept the part that moral philosophising hasn't mattered much historically. However, I can't really find the argument that it probably will in the future. Could you perhaps spell it out a bit more explicitly, or highlight where you think the case is being made, please?
Great and interesting post though, I love seeing people rigorously exploring EA ideas and fitting them into the wider academic literature.
Sure! So I think most of our conceptual philosophical moral progress until now has been quite poor. Looked at under the lens of the moral consistency reasoning I outlined in point (3), cosmopolitanism, feminism, human rights, animal rights, and even longtermism all seem like slight variations on the same argument ("There are no morally relevant differences between Amy and Bob, so we should treat them equally").
In contrast, I think the fact that we are starting to develop cases like population ethics, infinite ethics, and complicated variations of thought experiments (there are infinite variations of the trolley problem we could conjure up), which really test the limits of our moral sense and moral intuitions, hints at the fact that we might need a more systematic, perhaps computerized approach to moral philosophy. I think the likely path is that most conceptual moral progress in the future (in the sense of figuring out new theories and thought experiments) will happen with the assistance of AI systems.
I can't point to anything very concrete, since I can't predict the future of moral philosophy in any concrete way, but I think philosophical ethics might become very conceptually advanced and depart heavily from common-sense morality. I think this gap has been widening since the Enlightenment: challenges to common-sense morality have been slowly increasing. We might be at the early beginning of that exponential takeoff.
Of course, we will consider many of the moral systems that AIs develop to be ridiculous. And some might be! But in other cases, we might be too backwards, or too tied to our biologically and culturally shaped moral intuitions and taboos, to realize that something is in fact an advancement. For example, the Repugnant Conclusion in population ethics might be true (or the optimal decision in some sense, if you're a moral anti-realist), even if it goes against many of our moral intuitions.
The effort will lie in separating the wheat from the chaff. And I'm not sure whether it will be AIs or actual moral philosophers doing this work of discriminating good from bad ethical systems and concepts.
You need a step beyond this though. Not just that we are coming up with harder moral problems, but that solving those problems is important to future moral progress.
Perhaps a structure as simple as the one that has worked historically will prove just as useful in the future, or, as you point out has happened in the past, wider societal changes (not progress in moral philosophy as an academic discipline) will be the major driver. In either case, all this complex moral philosophy is not the important factor for practical moral progress across society.
Fair! I agree to that, at least until this point of time.
But I think there could come a time when we have picked most of the "social low-hanging fruit" (cases like the abolition of slavery, universal suffrage, universal education), so there's not a lot of easy social progress left to do. At that point, comparatively, investing in the "moral philosophy low-hanging fruit" will look more worthwhile.
Some important cases of philosophical moral problems that might have great axiological importance, at least under consequentialism/utilitarianism, are population ethics (totalism vs averagism), our duties towards wild animals, and the moral status of digital beings.
I think figuring them out could have great importance. Of course, if we always just keep them as interesting philosophical thought experiments and don't do anything about promoting any outcomes, they might not matter that much. But I'm guessing people in the year 2100 might want to start implementing some of those ideas.
There is a case against the notion of moral progress: while the moral circle as a general rule expands with the general empowerment of Mankind, we also become more efficient at oppression. Seventeenth-century Europeans created the Reformation and the Glorious Revolution, and at the same time their expanded capacities allowed for the Transatlantic Slave Trade.
In my view, the European expansion was net negative until around the end of the nineteenth century, and while current human progress is undeniable, when you consider animals, we are probably worse than ever. I am not a radical animalist: I have doubts about the sentience of even birds, but the expansion of the farming of large vertebrates has perhaps undone, in "total" moral terms, the undeniable (and massive) human progress.
Good! I think I mostly agree with this and I should probably flag it somewhere in the main post.
I do agree with you, and I think it also points to a central claim of the later parts of my thesis, where I will talk about the empirical ideas rather than the philosophical ones: that technologies (from shipbuilding, to the industrial revolution, to factory farming, to future AI) are more of a factor in moral progress or regress than ideologies. So many moral philosophers might have the wrong focus.
(Although I would call many of those things "social" progress or regress rather than "moral" strictly speaking, because they were triggered by external factors (economic and technological change) rather than moral reflection. It's not that we became more cruel to animals in terms of our intentions; it's that we gained more power over them.)
Well, I hope philosophers are aware of how much ideas are a superstructure of the productive forces and social relations! I am far from being a Marxist, but I suppose this is a commonplace in modern Western historiography...
Outside of Marxism and continental philosophy (particularly the Frankfurt School and some Foucault), I think this idea has lost a lot of grip! It has actually become a minority view, or has even fallen out of awareness, among current academic philosophers, particularly in the anglosphere.
However, I think it's a very useful idea that should make us look at our social arrangements (institutions, beliefs, morality...) with some level of initial suspicion. Luckily, some similar arguments (often called "debunking arguments" or "genealogical arguments") are starting to gain traction within philosophy again.