LanceSBush

27 karma · Joined

Comments (9)

The descriptive task of determining what ordinary moral claims mean may be more relevant to questions about whether there are objective moral truths than is considered here. Are you familiar with Don Loeb's metaethical incoherentism? Or the empirical literature on metaethical variability? I recommend Loeb's article, "Moral incoherentism: How to pull a metaphysical rabbit out of a semantic hat." The title itself indicates what Loeb is up to.

Whoops. I can see how my responses didn't make my own position clear.

I am an anti-realist, and I think the prospects for identifying anything like moral truth are very low. I favor abandoning attempts to frame discussions of AI or pretty much anything else in terms of converging on or identifying moral truth.

I consider attempts to integrate these important and substantive discussions into contemporary moral philosophy likely to be futile. If engaging with moral philosophy introduces unproductive digressions, confusions, or misplaced priorities into the discussion, it may do more harm than good.

I'm puzzled by this remark:

I think anything as specific as this sounds worryingly close to wanting an AI to implement favoritepoliticalsystem.

I view utilitronium as an end, not a means. It is a logical consequence of wanting to maximize aggregate utility and more or less a logical entailment of my moral views. I favor the production of whatever physical state of affairs yields the highest aggregate utility. This is, by definition, "utilitronium." If I'm using the term in an unusual way, I'm happy to propose a new label that conveys what I have in mind.

It's certainly possible that this is the case, but looking for the kind of solution that would satisfy as many people as possible certainly seems like the thing we should try first and only give it up if it seems impossible, no?

Sure. That isn't my primary objection, though. My main objection is that even if we pursue this project, it does not achieve the heavy metaethical lifting you were alluding to earlier. It neither demonstrates nor provides any particularly good reason to regard the outputs of this process as moral truth.

Well, the ideal case would be that the AI would show you a solution which it had found, and upon inspecting it and considering it through you'd be convinced that this solution really does satisfy all the things you care about - and all the things that most other people care about, too.

I want to convert all matter in the universe to utilitronium. Do you think it is likely that an AI that factored in the values of all humans would yield this as its solution? I do not. Since I think the expected utility of most other likely solutions, given what I suspect about other people's values, is far less than this, I would view almost any scenario other than imposing my values on everyone else to be a cosmic disaster.

Hi Kaj,

Even if we found the most agreeable available set of moral principles, the people who found it agreeable might not constitute the vast majority; they might not even constitute a majority at all. It is possible that there simply is no moral theory that is acceptable to most people. People may just have irreconcilable values. You state that:

“For empirical facts we can come up with objective tests, but for moral truths it looks to me unavoidable - due to the is-ought gap - that some degree of "truth by social consensus" is the only way of figuring out what the truth is, even in principle.”

Suppose this is the best we can do. It doesn’t follow that the outputs of this exercise are “true.” I am not sure in what sense this would constitute a true set of moral principles.

More importantly, it is unclear whether or not I have any rational or moral obligation to care about the outputs of this exercise. I do not want to implement the moral system that most people find agreeable. On the contrary, I want everyone to share my moral views, because this is what, fundamentally, I care about. The notion that we should care about what others care about, and implement whatever the consensus is, seems to presume a very strong and highly contestable metaethical position that I do not accept and do not think others should accept.

Thanks for the excellent reply.

Greene would probably not dispute that philosophers have generally agreed that the difference between the lever and footbridge cases is due to "apparently non-significant changes in the situation."

However, what philosophers have typically done is either bite the bullet and say one ought to push, or deny that one ought to push in the footbridge case and then feel the need to defend commonsense intuitions by offering a principled justification for the distinction between the two. The trolley literature is rife with attempts to vindicate an unwillingness to push, because these philosophers start from the assumption that commonsense moral intuitions track deep moral truths and that we must explicate the underlying, implicit justification our moral competence is picking up on.

What Greene is doing by appealing to neuroscientific/psychological evidence is offering a selective debunking explanation of some of those intuitions but not others. If the evidence demonstrates that one set of outputs (deontological judgments) is the result of an unreliable cognitive process, and another set of outputs (utilitarian judgments) is the result of reliable cognitive processes, then he can show that we have reason to doubt one set of intuitions but not the other, provided we agree with his criteria for what constitutes a reliable versus an unreliable process. A selective debunking argument of this kind, relying as it does on the reliability of distinct psychological systems or processes, does in fact turn on the empirical evidence (in this case, on his dual-process model of moral cognition).

[But nobody believes that judgements are correct or wrong merely because of the process that produces them.]

Sure, but Greene does not need to argue that deontological/utilitarian conclusions are correct or incorrect, only that we have reason to doubt one but not the other. If we can offer reasons to doubt the very psychological processes that give rise to deontological intuitions, this skepticism may be sufficient to warrant skepticism about the larger project of assuming that these intuitions are underwritten by implicit, non-obvious justifications that it is the philosopher's job to extract and explicate.

You mention evolutionary debunking arguments as an alternative that is known “without any reference to psychology.” I think this is mistaken. Evolutionary debunking arguments are entirely predicated on specific empirical claims about the evolution of human psychology, and are thus a perfect example of the relevance of empirical findings to moral philosophy.

[Also it's worth clarifying that Greene only deals with a particular instance of a deontological judgement rather than deontological judgements in general.]

Yes, I completely agree and I think this is a major weakness with Greene’s account.

I think there are two other major problems: the fMRI evidence he has is not very convincing, and trolley problems offer a distorted psychological picture of the distinction between utilitarian and non-utilitarian moral judgment. Recent work by Kahane shows that people who push in footbridge scenarios tend not to be utilitarians, just people with low empathy. The same people that push tend to also be more egoistic, less charitable, less impartial, less concerned about maximizing welfare, etc.

Regarding your last two points: I agree that one move is to simply reject how he talks about intuitions (or one could raise other epistemic challenges, presumably). I also agree that training in psychology/neuroscience but not philosophy impairs one's ability to evaluate arguments that presumably depend on competence in both. I am not sure why you bring this up, though, so if there was an inference I should draw from this, help me out!

I agree that defining human values is a philosophical issue, but I would not describe it as "not a psychological issue at all." It is in part a psychological issue insofar as understanding how people conceive of values is itself an empirical question. Questions about individual and intergroup differences in how people conceive of values, distinguish moral from nonmoral norms, etc. cannot be resolved by philosophy alone.

I am sympathetic to some of the criticisms of Greene's work, but I do not think Berker's critique is completely correct, though explaining in detail why I think Greene and others are correct in thinking that psychology can inform moral philosophy would call for a rather titanic post.

The tl;dr point I'd make is that yes, you can draw philosophical conclusions from empirical premises, provided your argument is presented as a conditional one in which you propose that certain philosophical positions depend on certain factual claims. If someone accepts those premises, then empirical findings that confirm or disconfirm those factual claims can compel specific philosophical conclusions. A toy version of this would be the following:

P1: If the sky is blue, then utilitarianism is true.
P2: The sky is blue.
C: Therefore, utilitarianism is true.

If someone accepts P1, and if P2 is an empirical claim, then empirical evidence for/against P2 bears on the conclusion.

This is the kind of move Greene wants to make.

The slightly longer version of what I'd say to a lot of Greene's critics is that they misconstrue Greene's arguments if they think he is attempting to move straight from descriptive claims to normative claims. In arguing for the primacy of utilitarian over deontological moral norms, Greene appeals to the presumptively shared premise between himself and his interlocutors that, on reflection, they will reject beliefs that are the result of epistemically dubious processes but retain those that are the result of epistemically justified processes.

If they share his views about which processes would in principle be justified or unjustified, and if he can demonstrate that utilitarian judgments are reliably the result of justified processes while deontological judgments are not, then he has successfully appealed to empirical findings to draw a philosophical conclusion: that utilitarian judgments are justified and deontological ones are not. One could simply reject his premises about what constitutes justified or unjustified grounds for belief, and in that case his argument would not be convincing. I don't endorse his conclusions, but that is because I find his empirical findings uncompelling, not because I think he's made any illicit philosophical moves.

I am a psychology PhD student with a background in philosophy and evolutionary psychology. My current research focuses on two main areas: effective altruism, and the nature of morality - in particular, the psychology of metaethics. My motivation for pursuing the former should be obvious, but my rationale for pursuing the latter is in part self-consciously about the third bullet point, "Defining just what it is that human values are." Even more basic than defining what those values are, I am interested in what people take values themselves to be. For instance, we do not actually have good data on the degree to which people regard their own moral beliefs as objective or relative, how common noncognitivist or error-theoretic beliefs are in lay populations, etc.

Related to the first point, about developing an AI safety culture, there is also the matter of what we can glean psychologically about how the public is likely to receive AI developments. Understanding how people generally perceive AI, and technological change more broadly, could help us anticipate emerging social issues that result from advances in AI and improve our ability to raise awareness of, and increase receptivity to, concerns about AI risk among nonexperts, policymakers, the media, and the public. Cognitive science has more direct value than areas like mine (social psychology/philosophy), but my areas of study could serve a valuable auxiliary function for AI safety.

Tom, that isn't the only way the term "moral anti-realism" is used. Sometimes it is used to refer to any metaethical position which denies substantive moral realism. This can include noncognitivism, error theory, and various forms of subjectivism/constructivism. This is typically how I use it.

For one thing, since I endorse metaethical variability/indeterminacy, I do not believe traditional descriptive metaethical analyses provide accurate accounts of ordinary moral language anyway. I think error theory works best in some cases, noncognitivism (perhaps, though not plausibly) in others, and various forms of relativism in still others. What this amounts to is that I think all moral claims are either (a) false, (b) nonsense, or (c) trivial; by "trivial" I mean that they lack objective prescriptivity or "practical oomph" (as Richard Joyce would put it) - that is, they do not compel or provide reasons for action independent of an agent's goals or interests. In other words, I deny that there are any mind-independent moral facts. I'm honestly not sure why moral realism is taken very seriously. I'd be curious to hear explanations of why.

Hi Evan,

I study philosophy and would identify as a moral anti-realist. Like you, I am generally inclined to regard attempts to describe moral statements as true or false as (in some cases) category mistakes, though in other cases I think such statements are better understood as cognitive but false (i.e., some moral discourse is captured by one or more error theories), and in other cases as coherent and true but trivial - for instance, when a self-conscious subjectivist deliberately uses moral terms to convey their preferences. Unfortunately, I think matters are messier than this: I don't think ordinary moral language, much of the time, has any determinate commitment to any particular metaethical stance, so there is no uniform, definitive way of stating what moral terms even mean - because they don't mean one thing, and often simply have nothing to do with the sorts of meanings philosophers want to extract from them. This position is known as metaethical variability/indeterminacy.

Even though I reject that morality is about anything determinate and coherent, I also endorse utilitarianism insofar as I take it to be an accurate statement of my own values/preferences.

So, I suppose you can add at least one person to the list of people who are EAs that share something roughly in line with your metaethical views.