
For this post, I'm going to use the scenario outlined in the science fiction book Seveneves by Neal Stephenson. It's a far-fetched scenario (and I leave out a lot of detail), but it sets up my point nicely, so bear with me. Full credit for the intro, of course, to Stephenson.

This is cross-posted from my blog.


Introduction

Humanity is in a near-future state. Technology is slightly more advanced than it is today, and the International Space Station (ISS) is somewhat larger and more sophisticated. Long story short, the Moon blows up, and scientists determine that humanity has two years before the surface of the Earth becomes uninhabitable for 5,000 years due to rubble bombardment.

Immediately, humanity works together to increase the size and sustainability of the ISS to ensure that humanity and its heritage (e.g. history, culture, animals and plants stored in a genetic format) can survive for 5,000 years to eventually repopulate the Earth. That this is a good thing to do is not once questioned. Humanity simply accepts as its duty that the diversity of life that exists today will continue at some point in the future. This is done with the acceptance that the inhabitants and descendants of the ISS will not have an easy life by any stretch of the imagination. But it is apparently their 'duty' to persevere.

The problem

It is taken as a given that stopping humanity from going extinct is a good thing, and I tend to agree, though not as strongly as some (I hold uncertainty about the expected value of the future assuming humanity/life in general survives). However, if we consider different ethical theories, we find that many come up with different answers to the question of what we ought to do in this case. Below I outline some of these possible differences. I say 'might' instead of 'will' because I've oversimplified things, and if you tweak the specifics you might come up with a different answer. Take this as illustrative only.

Classical hedonistic utilitarian

If you think the chances of there being more wellbeing in the future are greater than there being more suffering (or put another way, you think the expected value of the future is positive), you might want to support the ISS.

Negative utilitarian

If you think all life on Earth, and therefore suffering, will cease to exist if the ISS plan fails, you might want to actively disrupt the project to increase the probability of that outcome. At the very least, you probably won't want to support it.

Deontologist

I'm not really sure what a deontologist would think of this, but I suspect that they would at least be motivated to a different extent than a classical utilitarian.

Person-affecting view

Depending on how you see the specifics of the scenario, the 'ISS survives' case is roughly as good as the 'ISS fails' case.


Each of these ethical frameworks gives a significantly different answer to the question of 'what ought we do in this one specific case?' They also give very different answers to many current and future ethical dilemmas that are much more likely to arise. This is worrying.

And yet, to my knowledge, there does not seem to be a concerted push towards convergence on a single ethical theory (and I'm not just talking about compromise). Perhaps if you're not a moral realist, this isn't so important to you. But I would argue that getting society at large to converge on a single ethical theory is very important, and not just for thinking about the great questions, like what to do about existential risk and the far future. A lack of convergence also likely results in a lot of zero-sum games and a lot of wasted effort. Even Effective Altruists disagree on certain aspects of ethics, or hold entirely different ethical codes. At some point, this is going to result in a major misalignment of objectives, if it hasn't already.

I'd like to propose that simply seeking convergence on ethics is a highly neglected and important cause. To date, most efforts seem to involve advocates for each ethical theory promoting their own view, resulting in another zero-sum game. Perhaps we need to agree on another way to do this.

If ethics were a game of soccer, we'd all be kicking the ball in different directions. Sometimes we happen to kick in the same direction, sometimes in opposite directions. What could be more important than agreeing on which direction to kick the ball, and kicking it towards the best possible world?

Comments

Well, there's a whole field of moral philosophy which is trying to do this, and they haven't been able to agree in the last couple of thousand years of trying. They probably won't finish until at least the whole field of metaethics sorts out some of its own issues, but those have been going on for at least a few centuries without resolution. So things don't look too good!

There have certainly been paradigm shifts and trends in philosophy which we can point to for optimism. E.g., philosophers (whether religious or not) no longer consider deities to be a direct source of moral judgements and duties. Moral positions are successively pushed and refined by critiques from various directions, so a moral position formed these days - while still contentious - is at least prepared to defend itself from attacks and critiques from various directions and positions. And moral theories have generally grown more nuanced and complex in the history of philosophy.

Still, opinions are split and show no signs of resolving. The field has certainly learned to agree on certain issues like slavery and deviant sexuality, but new issues seem to crop up every time one of those gets solved, so that's not much consolation. Social psychology and experimental philosophy also don't do anything for resolving core disputes about morality, even though some people apparently think they do.

My suggestion is that we worry less about solving moral philosophy and worry more about solving the actual core issues at stake - how should the species continue, what sorts of lives are worth living, etc. Those are much more reasonable things to attack, and philosophical theories often agree on applied judgements even if the theories themselves differ. Moreover, many of our commonly held moral theories - having been developed in long-past social and historical contexts - don't actually provide clear guidance on how we should resolve some of these new futuristic debates.

My suggestion is that we worry less about solving moral philosophy and worry more about solving the actual core issues at stake

Moreover, many of our commonly held moral theories - having been developed in long-past social and historical contexts - don't actually provide clear guidance on how we should resolve some of these new futuristic debates.

Yes - thank you for posting this! I think it's really worth exploring the question of whether moral convergence is even necessarily a good thing. Even beyond moral convergence, I think we need to call into question whether its antecedent of ‘moral purity’ (i.e. defining and sticking to clear-cut moral principles) is even a good thing.

I don’t have a philosophy background, so please let me know if this take is way off course, but like kbog mentions, many of the commonly cited moral schemas don’t apply in every situation – which is why Nick Bostrom, for example, suggests adopting a moral parliament set-up. I worry that pushing for convergence and moral clarity may oversimplify the nuance of reality, and may harm our effectiveness in the long run.

In my own life, I’ve been particularly worried about the limits of moral purity in day-to-day moral decisions – which I’ve written about here. While it’s easy to applaud folk who rigorously keep to a strict moral code, I really wonder whether it’s the best way forward. For a specific example that probably applies to many of us, utilitarianism sometimes suggests that you should work excessive overtime at the expense of your personal relationships – but is this really a good idea? Even beyond self-care, is there a learning aspect (in terms of personal mental growth, as well as helping you to understand how to work effectively in a messy world filled with people who aren’t in EA) that we could be missing out on?

Thanks for sharing the moral parliament set-up, Rick. It looks good, but it's incredibly similar to MacAskill's Expected Moral Value methodology!

I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital etc). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which action maximises pleasure and minimises pain). The answer may not be immediately clear, especially in tricky scenarios, and perhaps we can't be 100% certain about which action is best, but that doesn't mean there isn't an answer.

Regarding your last point about the downsides of taking utilitarianism to its conclusion, I think that (in theory at least) utilitarianism should take these into account. If applying utilitarianism harms your personal relationships and mental growth and ends up in a bad outcome, you're just not applying utilitarianism correctly.

Sometimes the best way to be a utilitarian is to pretend not to be a utilitarian, and there are heaps of examples of this in everyday life (e.g. not donating 100% of your income because you may burn out, you may set an example that no one wants to reach... etc.).

Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks in order to make them ‘work’ make them poor frameworks to rely on day-to-day. Expecting human beings to apply these frameworks ‘correctly’ is probably giving the forecasting and estimation ability of humans a little too much credit. For a reductive example, ‘do the most good possible’ technically is a ‘correct’ moral framework, but it really doesn’t ‘work’ well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on ‘sub-frameworks’).

Imagine a 10 year old child who suddenly and religiously adopts a classical hedonistic utilitarian framework – I would have to imagine that this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that will objectively be against their framework, simply because the information they are acting on is incomplete. 10 year olds with much simpler moral frameworks will most likely be ‘right’ from a utilitarian standpoint much more often than 10 year olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and forecasted effects in order to work.

My worry is that all humans (not just 10 year olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let’s pretend that, at most, the average person can semi-accurately estimate the first order effects of their actions (which is honestly a stretch already). A first order effect would be something like “each marginal hour I work creates more utility for the people I donate to than is lost among me and my family”. Under a utilitarian framework, you would go with whatever you estimate to be correct, which in turn (due to your inability to forecast) would be based on only a first order approximation. Other frameworks that aren’t as based on forecasting (e.g. some version of deontology) can see this first order approximation and still suggest another action (which may, in turn, create more ‘good’ in the long-run).

Going back to the overtime example, if you look past first-order effects in a utilitarian framework you can still build a reason against the whole ‘work overtime’ thing. A second order effect would be something like “but, if I do this too long, I’ll burn out, thus decreasing my long-term ability to donate”, and a third order effect would be something like “if I portray sacrificing my wellbeing as a virtue by continuing to do this throughout my life, it could change the views of those who see me as a role model in not-necessarily positive ways”, and so on. Luckily, as a movement, people have finally started to normalize an acceptance of some of the problematic second-order effects of the ‘work overtime’ thing, but it took a worryingly long time - and it certainly won't be the only time that our first order estimations will be overturned by more diligent thinking!

So, yes, if you work really hard to figure out second, third, etc. order effects, then versions of utilitarianism can be great – but relying too heavily on them for day-to-day decisions may not work out as well as we’d hope, since figuring out those effects is terribly complicated. In many decisions, relying on a sub-framework that depends less on forecasting ability (e.g. some version of deontology) may be the best way forward. Many EAs realize some version of this, but I think it’s something that we should be more explicit about.

To draw it back to the “is the moral parliament basically the same as Expected Moral Value” question, I would say that it’s not. They are similar, but a key difference is the forecasting ability required for each: the moral parliament can easily be used as a mental heuristic in cases where forecasting is impossible or misleading, by focusing on which framework applies best for a given situation, whereas EMV requires quite a bit of forecasting ability and calculation, and most importantly is incredibly biased against moral frameworks that are unable to quantify the expected good to come out of decisions (yes, the discussion of how to deal with ordinal systems goes some way towards mitigating this, but even then there is a need to forecast effects implicit in the decision process). Hopefully that helps clarify my position. I should’ve probably been a bit more formal in my reasoning in my original post, but better late than never I guess!
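
To make that contrast concrete, here is a toy sketch of a credence-weighted parliament vote in Python. It assumes, as a simplification, that delegates are allocated in proportion to credence and that each delegate simply votes for its theory's top-ranked option; Bostrom's actual proposal involves delegates negotiating rather than a straight plurality vote, and all of the theories, credences, and options below are invented for illustration only.

```python
# Toy moral parliament sketch (illustrative assumptions only; not Bostrom's full proposal).

# Credence placed in each moral theory (hypothetical numbers).
credences = {
    "hedonistic_utilitarianism": 0.45,
    "deontology": 0.35,
    "virtue_ethics": 0.20,
}

# Each theory only needs to supply an *ordinal* ranking of the options (best first);
# no cardinal 'amount of good' has to be forecast or quantified.
rankings = {
    "hedonistic_utilitarianism": ["work_overtime", "protect_relationships"],
    "deontology": ["protect_relationships", "work_overtime"],
    "virtue_ethics": ["protect_relationships", "work_overtime"],
}

TOTAL_DELEGATES = 100

def parliament_vote():
    """Allocate delegates in proportion to credence; each delegate backs its theory's top option."""
    votes = {}
    for theory, credence in credences.items():
        delegates = round(credence * TOTAL_DELEGATES)
        top_choice = rankings[theory][0]
        votes[top_choice] = votes.get(top_choice, 0) + delegates
    return votes

print(parliament_vote())
# -> {'work_overtime': 45, 'protect_relationships': 55}
```

The only point of the sketch is that a parliament-style heuristic can work with ordinal rankings alone, which is why it demands less forecasting and quantification than an expected-value calculation.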

I think it's really worth exploring the question of whether moral convergence is even necessarily a good thing.

I'd say it's a good thing when we find a relatively good moral theory, and bad when we find a relatively bad moral theory.

Even beyond moral convergence, I think we need to call into question whether its antecedent of ‘moral purity’ (i.e. defining and sticking to clear-cut moral principles) is even a good thing.

Not sure what you mean here. Acting morally all the time does not necessarily mean having clear cut moral principles; we might be particularists, pluralists or intuitionists. And having clear cut moral principles doesn't imply that we will only have moral reasons for acting; we might have generally free and self-directed lives which only get restrained occasionally by morality.

but like kbog mentions, many of the commonly cited moral schemas don’t apply in every situation – which is why Nick Bostrom, for example, suggests adopting a moral parliament set-up.

I wouldn't go so far as to say that they 'don't apply,' rather that it's not clear what they say. E.g., what utilitarianism tells us about computational life is unclear because we don't know much about qualia and identity. What Ross's duties tell us about wildlife antinatalism is unclear because we don't know how benevolent it is to prevent wildlife from existing. Etc, etc.

I don't see how the lack of being able to apply moral schema to certain situations is a motivation for acting with moral uncertainty. After all, if you actually couldn't apply a moral theory in a certain situation, you wouldn't necessarily need a moral parliament - you could just follow the next-most-likely or next-best theory.

Rather, the motivation for moral uncertainty comes from theories with conflicting judgements where we don't know which one is correct.

I worry that pushing for convergence and moral clarity may oversimplify the nuance of reality, and may harm our effectiveness in the long run.

I'm not sure about that. This would have to be better clarified and explained.

In my own life, I’ve been particularly worried about the limits of moral purity in day-to-day moral decisions – which I’ve written about here.

You seem to be primarily concerned with empirical uncertainty. But moral theories aren't supposed to answer questions like "do things generally work out better if transgressors are punished." They answer questions about what we ought to achieve, and figuring out how is an empirical question.

While it is true that someone will err when trying to follow almost any moral theory, I'm not sure how this motivates the claim that we should obey non-moral reasons for action or the claim that we shouldn't try to converge on a single moral theory.

There are a lot of different issues at play here; whether we act according to moral uncertainty is different from whether we act as moral saints; whether we act as moral saints is different from whether our moral principles are demanding; whether we follow morality is different from what morality tells us to do regarding our closest friends and family.

For a specific example that probably applies to many of us, utilitarianism sometimes suggests that you should work excessive overtime at the expense of your personal relationships – but is this really a good idea? Even beyond self-care, is there a learning aspect (in terms of personal mental growth, as well as helping you to understand how to work effectively in a messy world filled with people who aren’t in EA) that we could be missing out on?

In that case, utilitarianism would tell us to foster personal relationships, as they would provide mental growth and help us work effectively.

People have made some good points and they have shifted my views slightly. The focus shouldn't be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm strawmanning myself here slightly).

However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.

How can we get everyone to agree on the best ethical theory?

Perhaps it would be easier to figure out what is the worst ethical theory possible? I don't recall ever seeing this question being asked, and it seems like it'd be easier to converge on.

Regardless of how negatively utilitarian someone is, almost everyone has an easier time intuiting the avoidance of suffering rather than the maximization of some positive principle, which ends up sounding ambiguous and somewhat non-urgent. I think suffering enters near mode easier than happiness does. It may be easier for humans to agree on what is the most anti-moral, badness-maximizing schema to adopt.

This is a good point Dony, perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think the Foundational Research Institute has written something to this effect from a suffering/wellbeing in the far future perspective, but the same might hold for promoting/discouraging ethical theories.

Any thoughts on the worst possible ethical theory?

In contemporary ethics, Derek Parfit has tried to find convergence in his 'On What Matters' books.

Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially tried to find convergence when it came to normative ethical theories, and opted for a more zero-sum approach when it came to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence when it comes to meta-ethics too.

In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other theory he finds most plausible, the Act Utilitarianism of Singer and de Lazari-Radek.

Anyone trying to work on convergence should probably follow the fruitful debate surrounding 'On What Matters'.

A couple of thoughts, probably none of them very helpful, I'm afraid!

One effort to move the conversation on is to think about how to act under moral uncertainty, which is what Will MacAskill did his doctorate on. As far as I understand it, you try to work out the Expected Moral Value (EMV) of an action by multiplying the credence you attach to an ethical view by how good that view says the outcome is. Long story short, according to MacAskill, we all end up doing what total utilitarianism says because Total Util says there's so much value to keeping the species alive.
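
As a rough illustration of that calculation, here is a minimal sketch in Python. All theories, credences, and value scores are made up for the sake of the example, and real moral theories often don't supply neat cardinal numbers like this; it is only meant to show the 'credence times value, summed over theories' idea, not MacAskill's actual model.

```python
# Illustrative EMV sketch: hypothetical credences and value scores only.

# Credence placed in each ethical theory (must sum to 1).
credences = {
    "total_utilitarianism": 0.40,
    "person_affecting_view": 0.35,
    "negative_utilitarianism": 0.25,
}

# How good each theory says each action's outcome is (made-up cardinal scores).
values = {
    "support_the_ISS": {
        "total_utilitarianism": 100,      # huge value in keeping the species alive
        "person_affecting_view": 0,       # roughly indifferent
        "negative_utilitarianism": -50,   # future suffering continues
    },
    "do_nothing": {
        "total_utilitarianism": -100,
        "person_affecting_view": 0,
        "negative_utilitarianism": 50,
    },
}

def expected_moral_value(action):
    """Sum over theories of (credence in theory) x (value that theory assigns to the action)."""
    return sum(credences[theory] * values[action][theory] for theory in credences)

for action in values:
    print(action, expected_moral_value(action))
# -> support_the_ISS 27.5
#    do_nothing -27.5
# With these made-up numbers the high-stakes theory dominates the result,
# which is the dynamic described in the comment above.
```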

Second, I think you'll find people have been trying to find convergence on moral views since the dawn of moral arguments: people are trying to decide what to do and disagree because they have different values. On the basis that we've tried to do this for the whole of human history, I'd also doubt there is a tractable solution. Persuading each other doesn't seem to work. Maybe moral uncertainty theorising will help, but that remains to be seen.

Third, saying "it would be good if we agreed what was good" is rather question-begging. Would it be good if we agreed what was good? Well, only if you think agreement is good. But why would anyone think agreement would have intrinsic value, rather than, say, happiness? Also, what happens if I think what you want to do is stupid: why should I agree to that? This happens all the time in politics: it would be nice if people agreed, but they often don't, because they think there are things that are more important than agreement!

As far as I understand it, you try to work out the Expected Moral Value (EMV) of an action by multiplying the credence you attach to an ethical view by how good that view says the outcome is.

Small correction, he talks about choiceworthiness. And he seems to handle it a little differently from moral value. For one thing, not all moral systems have a clear quantification or cardinality of moral value, which would make it impossible to directly do this calculation. For another, he seems to consider all-things-considered choiceworthiness as part of the decisionmaking process. So under some moral theories, maybe your personal desires, legal obligations, or pragmatic interests don't provide any 'moral value', but they can still be a source of choiceworthiness.

Long story short, according to MacAskill, we all end up doing what total utilitarianism says because Total Util says there's so much value to keeping the species alive.

No no no no no, he never says this. IIRC, he does say that keeping the species alive is better but just because almost every common moral theory says so. There are other theories besides utilitarianism which also assign huge weight to planetary/specieswide concerns.

MacAskill also says that there are cases where demanding views like utilitarianism dominate, like eating meat/not eating meat, where eating meat isn't particularly valuable even if you happen to be right. But not all cases will turn out this way.

(Unless you mean to refer to the specific scenario in the OP, in which case moral uncertainty seems likely to tell us to keep people alive, but if you're really confident in NU or just pessimistic about happiness and suffering, then maybe it wouldn't.)

Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!

Your third point is well taken - I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.
