In this new podcast episode, I talk with Will MacAskill about what the Effective Altruism community can learn from the FTX/SBF debacle, why Will has been limited in what he could say about this topic in the past, and which future directions for the Effective Altruism community and his own research he is most enthusiastic about:

 

https://podcast.clearerthinking.org/episode/206/will-macaskill-what-should-the-effective-altruism-movement-learn-from-the-sbf-ftx-scandal


In summarising Why They Do It, Will says that most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud". That may well be true on average, but we know quite a lot about the details of this case, which I believe point us in a different direction.

In this case, the other defendants have said they knew that what they were doing was wrong, that they were misappropriating customers' assets and investing them. That weighs somewhat against the misconceptualisation hypothesis, albeit without ruling it out as a contributing factor.

On the other hand, we have some support for the bad apples idea. SBF has said:

In a lot of ways I don't really have a soul. This is a lot more obvious in some contexts than others. But in the end there's a pretty decent argument that my empathy is fake, my feelings are fake, my facial reactions are fake.

So I agree with Spencer that SBF was at least deficient in affective experience, whether or not he was psychopathic.

Regarding cost-benefit analysis, I would tend to agree with Will that it's unlikely that SBF and company made a detailed calculation of the costs and benefits of their actions (and clearly they calculated incorrectly if they did), although the perceived costs and benefits could also be a contributing factor.

So based on the specific knowledge of this case, I think the bad apples hypothesis makes more sense than either the cost-benefit hypothesis or the misconceptualisation hypothesis.

There is also a fourth category worth considering: whether SBF's views on side constraints were a likely factor. I think the answer is overwhelmingly yes. Sure, as Will points out, SBF may have commented approvingly on a recent article on side constraints. But more recently, he referred to ethics as "this dumb game we woke Westerners play where we say all the right shibboleths and so everyone likes us." Furthermore, if we're doing Facebook archaeology, we should also consider his earlier writing. In May 2012, SBF wrote about the idea of stealing to give:

I'm not sure I understand what the paradox is here. Fundamentally if you are going to donate the money to [The Humane League] and he's going to buy lots of cigarettes with it it's clearly in an act utilitarian's interest to keep the money as long as this doesn't have consequences down the road, so you won't actually give it to him if he drives you. He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one. Perhaps this was because you've done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism, because although you can do better things with the money than he can, you might run in to problems later if you keep in. Similarly, I could go around stealing money from people because I can spend the money in a more utilitarian way than they can, but that wouldn't be the utilitarian thing to do because I was leaving out of my calculation the fact that I may end up in jail if I do so.

... 

As others have said, I completely agree that in practice following rules can be a good idea. Even though stealing might sometimes be justified in the abstract, in practice it basically never is because it breaks a rule that society cares a lot about and so comes with lots of consequences like jail. That being said, I think that you should, in the end, be an act utilitarian, even if you often think like a rule utilitarian; here what you're doing is basically saying that society puts up disincentives for braking rules and those should be included in the act utilitarian calculation, but sometimes they're big enough that a rule utilitarian calculation approximates it pretty well in a much simpler fashion.

I'm sure people will interpret this passage in different ways. But it's clear that, at least at this point in time, he was a pretty extreme act utilitarian.

Weighing this and other information, it seems clear in retrospect that a major factor was that SBF didn't take side constraints very seriously.

Of course, most of this information wasn't available or wasn't salient in 2022, so I'm not claiming that we should have necessarily worried based on it. Nor am I implying that improved governance is not a part of the solution. Those are further questions.

Great comment. 

Will says that most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud".

I agree with your analysis, but I think Will also sets up a false dichotomy. One's inability to conceptualize or realize that one's actions are wrong is itself a sign of being a bad apple. To simplify a bit: at one end of the "high integrity to really bad" continuum, you have morally scrupulous people who constantly wonder whether their actions are wrong. At the other end, you have pathological narcissists whose self-image/internal monologue is so out of whack with reality that they cannot even conceive of themselves doing anything wrong. That doesn't make them great people. If anything, it makes them more scary.

Generally, the internal monologue of the most dangerous types of terrible people (think Hitler, Stalin, Mao, etc.) doesn't go like "I'm so evil and just love to hurt everyone, hahahaha". My best guess is that, in most cases, it goes more like "I'm the messiah, I'm so great and I'm the only one who can save the world. Everyone who disagrees with me is stupid and/or evil and I have every right to get rid of them." [1]

Of course, there are people whose internal monologues are more straightforwardly evil/selfish (though even here lots of self-delusion is probably going on) but they usually end up being serial killers or the like, not running countries. 

Also, later, when Will talks about bad apples, he mentions that “typical cases of fraud [come] from people who are very successful, actually very well admired”, which again suggests that "bad apples" are not very successful or not very well admired. Well, again, many terrible people were extremely successful and admired. Like, you know, Hitler, Stalin, Mao, etc.

Nor am I implying that improved governance is not a part of the solution.

Yep, I agree. In fact, the whole character vs. governance thing seems like another false dichotomy to me. You want to have good governance structures but the people in relevant positions of influence should also know a little bit about how to evaluate character. 

  1. ^

    In general, bad character is compatible with genuine moral convictions. Hitler, for example, was vegetarian for moral reasons and “used vivid and gruesome descriptions of animal suffering and slaughter at the dinner table to try to dissuade his colleagues from eating meat”. (Fraudster/bad apple vs. person with genuine convictions is another false dichotomy that people keep setting up.)

(and clearly they calculated incorrectly if they did)

I am less confident that, if an amoral person had applied cost-benefit analysis properly here, it would have led to "no fraud" as opposed to "safer amounts of fraud." The risk of getting busted for less extreme or less risky fraud would seem considerably lower.

Hypothetically, say SBF misused customer funds to buy stocks and bonds, and limited the amount he misused to 40 percent of customer assets. He'd need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to honor withdrawals. I guess there is still the risk of a leak.

I don't think we disagree much if any here -- I think pointing out that cost-benefit analysis doesn't necessarily lead to the "no fraud" result underscores the critical importance of side constraints!

He'd need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to honor withdrawals.

I think this significantly underestimates the likelihood of "bank run"-type scenarios. It is not uncommon for financial institutions holding backing for a substantial fraction of their deposits to still be run out of business due to a simple loss of confidence snowballing.

Could you say more about that? I suggest that "substantial fraction" may mean something quite different in the context of a bank than here. In the scenario I described, the hypothetical exchange would need to see 80-90% of deposits demanded back in a world where the stocks/bonds had to be sold at a 25-50% loss. It could be higher if the exchange had come up with an opt-in lending program that provided adequate cover for not returning (say) 10-15% of the customers' funds on demand.
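To spell out the arithmetic behind that 80-90% figure, here is a rough back-of-the-envelope sketch under the assumptions above (40% of customer assets secretly invested, the rest untouched, and a 25-50% loss if the secret positions must be liquidated); it ignores the opt-in lending caveat and is purely illustrative, not a model of FTX's actual balance sheet:

```python
# Back-of-the-envelope solvency check for the hypothetical exchange above.
# Illustrative only; the 40% and 25-50% figures come from this thread's
# hypothetical, not from anything known about FTX's actual finances.

MISUSED_FRACTION = 0.40  # fraction of customer assets secretly put into stocks/bonds

def honorable_fraction(loss_on_sale: float) -> float:
    """Fraction of deposits that could still be returned if the secret
    positions had to be liquidated at a loss of `loss_on_sale`."""
    untouched = 1 - MISUSED_FRACTION
    recovered = MISUSED_FRACTION * (1 - loss_on_sale)
    return untouched + recovered

for loss in (0.25, 0.50):
    print(f"{loss:.0%} loss on forced sales -> insolvent only if more than "
          f"{honorable_fraction(loss):.0%} of deposits are demanded back")
# 25% loss on forced sales -> insolvent only if more than 90% of deposits are demanded back
# 50% loss on forced sales -> insolvent only if more than 80% of deposits are demanded back
```

That is where the 80-90% threshold comes from; the rest of the comparison with ordinary bank runs is about how likely that level of withdrawals is.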

I'd also suggest that the "simple loss of confidence snowballing" in modern bank runs is often justified based on publicly known (or discernible) information. I don't think it was a secret that SVB had bought a bunch of long-term Treasuries that sank in value as interest rates increased, and thus that it did not have the asset value to honor 100% of withdrawals. It wasn't a secret in ~2008 that banks' ability to honor 100% of withdrawals was based on highly overstated values for mortgage-backed securities.

In contrast, as long as the secret stock/bond purchases remained unknown to outsiders, a massive demand for deposits back would have to occur in the absence of that kind of information. Unlike the traditional banking sector, other places to hold crypto carry risks as well -- even self-custody, which poses risks from hacking, hardware failure, forgetting information, etc. So people aren't going to withdraw unless, at a minimum, convinced that they had a safer place to hold their assets.

Finally, in conducting the cost/benefit analysis, the hypothetical SBF would consider that the potential failure mode only existed in scenarios where 80-90%+ of deposits had been demanded back. Conditional on that having happened, the exchange's value would likely be largely lost anyway. So the difference in those scenarios would be between ~0 and the negative effects of a smaller-scale fraud. If the hypothetical SBF thought the 80-90%+ scenario was pretty unlikely . . . .

(Again, all of this does not include the risk of the fraud leaking out or being discovered.)

Okay yes, I agree that a driver of bank runs is the knowledge that the bank usually can't cover all deposits, by design. So as long as you keep that fact secret you're much less likely to face a run.

I am now unsure how to reason about the likelihood of a run-like scenario in this case.

(This comment is basically just voicing agreement with points raised in Ryan’s and David’s comments above.) 

One of the things that stood out to me about the episode was the argument[1] that working on good governance and working on reducing the influence of dangerous actors are mutually exclusive strategies, and that the former is much more tractable and important than the latter. 

Most “good governance” research to date also seems to focus on system-level interventions,[2] while interventions aimed at reducing the impacts of individuals are very neglected, at least according to this review of nonprofit scandals:

It is notable that all the preventive tactics that have been studied and championed—audits, governance practices, internal controls—are aimed at the organizational level. It makes sense to focus on this level, as it is the level that managers have most control over. Prevention can also be implemented at the individual and sectoral levels. Training of staff, job-level checks and balances, and staff evaluations could all help prevent violations with individual-level causes. Sector-level regulation and oversight is becoming common in many countries. We, therefore, encourage future research on preventive measures to take a multilevel perspective, or at least consider the neglected sectoral and individual levels.
 

Six years before the review quoted above, this article called for psychopathy screening for public leadership positions (one potential approach to intervention at the “individual level,” to adopt the review's terminology).[3]


This leads me to wonder: what are the most compelling reasons for the lack of research (so far) on interventions to reduce the impact of dangerous actors, and which (if any) of these reasons provide strong arguments against doing at least some research in this neglected area? I think there are lots of possible answers here,[4] but none of them seem strong enough to justify the lack of research in this area so far, relative to the scale of the problem.

  1. ^

    Here’s a quote from the episode (courtesy of Wei Dai's transcript) demonstrating this claim: 

    [Will MacAskill:] There's really two ways of looking at things: you might ask…is this a bad person - are we focusing on the character? Or you might ask…what oversight, what feedback mechanisms, what incentives does this person face? And yeah, one thing I've really taken away from this is to place even more weight than I did before on just the importance of governance, where that means the, you know, importance of people acting with oversight, with the feedback mechanisms and you know, with incentives to incentivize kind of good rather than bad behavior…

    I agree that all these aspects of governance are important, but disagree that working on these things would entirely protect an organization from the negative impacts of malevolent actors.

  2. ^

To be clear, I am glad people are working on system-level solutions to low-integrity and otherwise harmful behaviors, but I think it would be helpful if this weren't the *only* class of interventions that has substantial amounts of resources directed towards it.

  3. ^

    Interestingly, one of the real-life cases Boddy refers to in support of his argument is the Enron scandal, a case which was also covered in the book Will MacAskill was talking about, Why They Do It.

  4. ^

    Here are some of the reasons I’ve already thought about (listed roughly in order from most to least convincing to me as a reason to be pessimistic about this approach to risk reduction): potential lack of tractability; lower levels of social and political acceptability/feasibility; lack of existing evidence as to what methods work, to what extent, and in which contexts; and perhaps a perception that the problem (of dangerous actors) is small in scale. I’d be interested to know which (if any) of these reasons are the most important, and if there are other considerations I’m overlooking. Overall, despite these reasons against working on it, I still think this area is worth investigating to a greater extent than it has been to date.

Interesting discussion. In the interview, MacAskill mentioned Madoff as an example of the idea that it's not about "bad apples." [1] Giving Madoff as an example in this context doesn't make sense to me. But maybe MacAskill meant to say that it's not about "bad apples who are identified as such before/at the time of their fraud"? That would be the only interpretation that makes sense to me, because Madoff sounds like he really was a "bad apple" based on the info in Why They Do It.

Here's what Soltes says about Madoff in Why They Do It (quoted from the audiobook, with emphasis added):
 

[Madoff] cavalierly remarked, "The reality of it is my son couldn't stand up amongst the pressure anyhow, so he took his own life." In the many hours of conversations I had with Madoff, this statement stood out for its callousness. A father [who] couldn't understand the impact that his actions had on his own son...

...Madoff remains dispassionate even about the circumstances that are of the greatest significance. In September 2014, a colleague emailed me news that Madoff's second son, Andrew, had just died of cancer. As I was beginning to read the article, my office phone rang. I picked it up and was surprised to hear Madoff on the line. 

He had heard the news of his second son's death on the radio, and asked if I could read the obituary to him. Shaken by the fact that a father had called me to convey news of his son's death, I turned my attention to describing the news in the most compassionate way I could. I wasn't a professor or a researcher at that moment, just one person speaking to another. 

I read him a writer's article about his son's death. When I reached the end, I was at a loss to know what to say. Instinctively, as we often do when hearing of a death, I asked him how he was doing. 

Madoff responded, "I'm fine, I'm fine." After a brief pause, he said that he had a question for me. I thought he might want me to send a copy of the obituary to him or deliver a message on his behalf to someone. It wasn't that. Instead, he asked me whether I'd had a chance to look at the LIBOR rates we discussed in our prior conversation. 

This particular phone call with Madoff stuck with me more than any other. Shortly after finding out his son had died, Madoff wanted to discuss interest rates. He didn't lose a beat in the ensuing conversation, continuing to carry on an entirely fluid discussion on the arcane determinants of yields. It didn't seem as though he wanted to switch topics because he was struggling to compose himself, and it didn't seem as though he was avoiding expressing emotion because the news was so overwhelming. In some way, it almost seemed as though I was more personally moved by the death of Andrew in those moments than his father.

To a psychiatrist, Madoff displays many symptoms associated with psychopathy...while labels themselves are of little use, viewing Madoff through this lens helps place his prior actions and current rationalization of that behavior into context. 

Madoff interprets and responds to emotion differently from most people. Regardless of how close he got to his investors, his personal limitations enabled him to continue his fraud without remorse or guilt…Madoff has an inability to empathize with his investors…he never experienced the gut feeling that he needed to stop…he managed to create extraordinary suffering for his investors, his friends, even his family, while experiencing little emotional turmoil himself.

  1. ^

    Here’s a quote from MacAskill (emphasis added): 

    So the lesson that Eugene Soltes takes in his study of white collar crime that actually like the normal case of fraud like typical cases of fraud comes from people who are very successful, actually very well admired, really not the sort of people where it's like, Oh yeah, they were, everyone was talking all along about how this person's, you know, bad apple or not up to no good. Instead, you know Bernie Madoff even was the chair of NASDAQ...And so you know what he really emphasizes instead is importance of kind of good feedback mechanisms, again, because, you know, people are not often making this these decisions in this careful calculated way. Instead, is this like mindless, incredibly irrational decision...

I was curious why, given Will's own moral uncertainty (in this interview he mentioned having only 3% credence in utilitarianism), he wasn't concerned about SBF's high confidence in utilitarianism, but I didn't hear the topic addressed. Maybe @William_MacAskill could comment on it here?

One guess is that apparently many young people in EA are "gung ho" on utilitarianism (mentioned by Spencer in this episode), so perhaps Will just thought that SBF isn't unusual in that regard? One lesson could be that such youthful over-enthusiasm is more dangerous than it seems, and EA should do more to warn people about the dangers of too much moral certainty and overconfidence in general.

The 3% figure for utilitarianism strikes me as a bit misleading on its own, given what else Will said. (I'm not accusing Will of intent to mislead here; he said something very precise that I, as a philosopher, entirely followed, but it was a bit complicated for lay people.) Firstly, he said a lot of the probability space was taken up by error theory, the view that there is no true morality. So to get what Will himself endorses, whether or not there is a true morality, you have to subtract an unknown but large amount (his credence in error theory) from 1, and then renormalize his other credences so that they add up to 1 on their own.

Secondly, there's the difference between utilitarianism, on which only the consequences of your actions matter morally, and only consequences for (total or average) pain and pleasure and/or fulfilled preferences count as consequences, and consequentialism, on which only the consequences of your actions matter morally but it's left open what those consequences are. My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5. This really matters in the current context, because many non-utilitarian forms of consequentialism can also promote maximizing in a dangerous way; they just disagree with utilitarianism about exactly what you are maximizing. So really, Will's credence in a view that, interpreted naively, recommends dangerous maximizing is functionally (i.e. ignoring error theory in practice) more like 0.5 than 0.03, as I understood him in the podcast. Of course, he isn't actually recommending dangerous maximizing regardless of his credence in consequentialism (at least in most contexts*), because he warns against naivety.
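To make the "subtract and renormalize" step concrete, with purely illustrative numbers rather than Will's actual credences: if his credence in error theory were, say, 0.4, then conditional on error theory being false, his 3% unconditional credence in utilitarianism would become

$$\frac{0.03}{1 - 0.4} = 0.05,$$

i.e. 5%, and higher still for larger credences in error theory.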

(Actually, my personal suspicion is that 'consequentialism' on its own is basically vacuous, because any view gives a moral preferability ordering over choices in situations, and really all that the numbers in consequentialism do is help us represent such orderings in a quick and easily manipulable manner, but that's a separate debate.)

*Presumably, sometimes dangerous, unethical-looking maximizing actually is best from a consequentialist point of view, because the dangers of not doing so, or the upside of doing so if you are right about the consequences of your options, outweigh the risk that you are wrong about the consequences of different options, even when you take into account higher-order evidence that people who think intuitively bad actions maximize utility are nearly always wrong.

My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.

I think he meant conditional on error theory being false, and also on not "some moral view we've never thought of".

Here's a quote of what Will said starting at 01:31:21: "But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large factions go to, you know, people often very surprised by this, but large factions go to, you know, to error theory. So there's just no correct moral view. Very large faction to like some moral view we've never thought of. But even within positive moral views, and like 50-50 on non consequentialism or consequentialism, most people are not consequentialists. I don't think I'm."

Overall it seems like Will's moral views are pretty different from SBF's (or what SBF presented to Will as his moral views), so I'm still kind of puzzled about how they interacted with each other.

'also on not "some moral view we've never thought of".'

Oh, actually, that's right. That does change things a bit. 

I feel like it's more relevant what a person actually believes than whether they think of themselves as uncertain. Moral certainty seems directly problematic (in terms of risks of recklessness and unilateral action) only when it comes together with moral realism: if you think you know the single correct moral theory, you'll consider yourself justified in overriding other people's moral beliefs and thwarting the goals they've been working towards.

By contrast, there seems to me to be no clear link from "anti-realist moral certainty in some subjectivist axiology" to "considers themselves justified to override other people's life goals." On the contrary, unless someone has an anti-social personality to begin with, it seems only intuitive/natural to me to go from "anti-realism about morality is true" to "we should probably treat moral disagreements between morally certain individuals more like we'd ideally treat political disagreements." How would we want to ideally treat political disagreements? I'd say we want to keep political polarization at a low, accept that there'll be view differences, and we'll agree to play fair and find positive-sum compromises. If some political faction goes around thinking it's okay to sabotage others or use their power unfairly (e.g., restricting free expression of everyone who opposes their talking points), the problem is not that they're "too politically certain in what they believe." The problem is that they're too politically certain that what they believe is what everyone ought to believe. This seems like an important difference! 

There's also something else that I find weird about highlighting uncertainty as a solution to recklessness/fanaticism. Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn't feel like a stable solution. (Not to mention that, as EAs tell themselves it's virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.) 

So, while I'm on board with cautioning against overconfidence and would probably concede that there's often a link between overconfidence and unjustified moral or metaethical confidence, I feel like it's misguided in more than one way to highlight "moral certainty" as the thing that's directly bad here.

(You're of course free to disagree.) 

This kind of reminds me of a psychological construct called the Militant Extremist Mindset. Roughly, the mindset is composed of three loosely related factors: proviolence, vile world, and utopianism. The idea is that elevated levels of all three factors are most predictive of fanaticism. I think (total) utilitarianism/strong moral realism/lack of uncertainty/visions of hedonium-filled futures fall into the utopian category. I think EA is pretty pervaded by vile-world thinking, including reminders about how bad the world is/could be and cynicism about human nature. Perhaps what holds most EAs back at this point is a lack of proviolence, that is, a lack of willingness to use violent means/cause great harm to others; I think this can be roughly summed up as “not being highly callous/malevolent”.

I think it’s important to reduce extremes of utopianism and vile-world thinking in EA, which I feel are concerningly abundant here. Perhaps it is impossible/undesirable to completely eliminate them. But what might be most important is something that seems fairly obvious: try to screen out people who are capable of willfully causing massive harm (i.e., callous/malevolent individuals).

Based on some research I’ve done, the distribution of malevolence is highly right-skewed, so screening for malevolence would probably affect the fewest individuals while still being highly effective. It also seems that callousness and a willingness to harm others for instrumental gain are associated with abnormalities in more primal regions of the brain (like the amygdala) and are highly resistant to interventions. Therefore, changing the culture is very unlikely to robustly “align” such people. And intuitively, a willingness to cause harm seems to be the most crucial component, while the other components seem to do more to channel malevolence in a fanatical direction.

Sorry I’m kind of just rambling and hoping something useful comes out of this.

In general (whether realist or anti-realist), there is "no clear link" between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.

You suggest that it "seems only intuitive/natural" that an anti-realist should avoid being "too politically certain that what they believe is what everyone ought to believe." I'm glad to hear that you're naturally drawn to liberal tolerance. But many human beings evidently aren't! It's a notorious problem for anti-realism to explain how it doesn't just end up rubber-stamping any values whatsoever, even authoritarian ones.

Moral realists can hold that liberal tolerance is objectively required as a practical norm, which seems more robustly constraining than just holding it as a personal preference. So the suggestion that "moral realism" is "problematic" here strikes me as completely confused. You're implicitly comparing a realist authoritarian with an anti-realist liberal, but all the work is being done by the authoritarian/liberal contrast, not the realist/antirealist one. If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.

That said, I very much agree about the "weirdness" of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat. But I think that just reinforces my alternative response that empirical uncertainty vs overconfidence is the real issue here. (Either that, or -- in some conceivable cases, like an authoritarian AI -- a lack of sufficient respect for the value of others' autonomy. But the problem with someone who wrongly disregards others' autonomy is not that they ought to be "morally uncertain", but that they ought to positively recognize autonomy as a value. That is, they problematically lack sufficient confidence in the correct values. It's of course unsurprising that having bad moral views would be problematic!)

I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).

I'm not convinced by what you said about the effects of belief in realism vs anti-realism.

If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.

Sure, but that feels like it's begging the question.

Let's grant that the people we're comparing already have liberal intuitions. After all, this discussion started in a context that I'd summarize as "What are ideological risks in EA-related settings, like the FTX/SBF setting?," so, not a setting where authoritarian intuitions are common. Also, the context wasn't "How would we reform people who start out with illiberal intuitions" – that would be a different topic.

With that out of the way, then, the relevant question strikes me as something like this:

Under which metaethical view (if any) – axiological realism vs axiological anti-realism – is there more of a temptation for axiologically certain individuals with liberal intuitions to re-think/discount these liberal intuitions so as to make the world better according to their axiology?

Here's how I picture the axiological anti-realist's internal monologue: 

"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."

By contrast, here's how I picture the axiological realist:

"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there's a sense in which that will be better for them than if I didn't do it. Perhaps this justifies going against the common-sense principles of liberalism, if I'm truly certain enough and am not self-deceiving here? So, I'm kind of torn..."

I'm not just speaking about hypotheticals. I think this is a dynamic that totally happens with some moral realists in the EA context. For instance, back when I was a moral realist negative utilitarian, I didn't like that my moral beliefs put my goals in tension with most of the rest of the world, but I noticed that there was this tension. It feels like the tension disappeared when I realized that I have to agree to disagree with others about matters of axiology (as opposed to thinking, "I have to figure out whether I'm indeed correct about my high confidence, or whether I'm the one who's wrong").

Sure, maybe the axiological realist will come up with a for-them compelling argument why they shouldn't impose the correct axiology on others. Or maybe their notion of "correct axiology" was always inherently about preference fulfillment, which you could say entails respecting autonomy by definition. (That said, if someone were also counting "making future flourishing people" as "creating more preference fulfillment," then this sort of axiology is at least in some possible tension with respecting the autonomy of present/existing people.) (Also, this is just a terminological note, but I usually think of preference utilitarianism as a stance that isn't typically "axiologically realist," so I'd say any "axiological realism" faces the same issue of there being at least a bit of tension with believing in and valuing autonomy in practice.)

When I talked about whether there's a "clear link" between two beliefs, I didn't mean that the link would be binding or inevitable. All I meant is that there's some tension that one has to address somehow.

That was the gist of my point, and I feel like the things you said in reply were perhaps often correct but they went past the point I tried to convey. (Maybe part of what goes into this disagreement is that you might be strawmanning what I think of as "anti-realism" with "relativism".)

Here's how I picture the axiological anti-realist's internal monologue: 

"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."

By contrast, here's how I picture the axiological realist:

"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there's a sense in which that will be better for them than if I didn't do it. Perhaps this justifies going against the common-sense principles of liberalism, if I'm truly certain enough and am not self-deceiving here? So, I'm kind of torn..."

Right, this tendentious contrast is just what I was objecting to. I could just as easily spin the opposite picture:

(1) A possible anti-realist monologue: "I find myself with some liberal intuitions; I also have various axiological views. Upon reflection, I find that I care more about preventing suffering (etc.) than I do about abstract tolerance or respect for autonomy, and since I'm an anti-realist I don't feel compelled to abide by norms constraining my pursuit of what I most care about."

(2) A possible realist monologue: "The point of liberal norms is to prevent one person from imposing their beliefs on others. I'm confident about what the best outcomes would be, considered in abstraction from human choice and agency, but since it would be objectively wrong and objectionable to pursue these ends via oppressive or otherwise illicit means, I'll restrict myself to permissible means of promoting the good. There's no tension here."

The crucial question is just what practical norms one accepts (liberal or otherwise). Proposing correlations between other views and bad practical norms strikes me as an unhelpful -- and rather bias-prone -- distraction.

That said, I very much agree about the “weirdness” of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat.

I of course also think that philosophical progress, done right, is a good thing. However I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and therefore am perhaps more worried than most about philosophical "progress", done wrong, being a bad thing.

I think too much moral certainty doesn't necessarily cause someone to be dangerous by itself; there have to be other elements to their personality or beliefs. For example, lots of people are or were unreasonably certain about divine command theory[1], but only a minority of them caused much harm (e.g. by being involved in crusades and inquisitions). I'm not sure it has much to do with realism vs non-realism though. I can definitely imagine some anti-realist (e.g., one with strong negative utilitarian beliefs) causing a lot of damage if they were put in certain positions.

Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution.

This seems like a fair point. I can think of some responses. Under realism (or if humans specifically tend to converge under reflection) people would tend to converge to similar values as they think more, so increased certainty should be less problematic. Under other metaethical alternatives, one might hope that as we mature overall in our philosophies and social systems, we'd be able to better handle divergent values through compromise/cooperation.

(Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)

Yeah, there is perhaps a background disagreement between us, where I tend to think there's little opportunity to make large amounts of genuine philosophical progress without doing much more cognitive work (i.e., to thoroughly explore the huge space of possible ideas/arguments/counterarguments), making your concern not significant for me in the near term.

  1. ^

    Self-nitpick: divine command theory is actually a meta-ethical theory. I should have said "various religious moralities".

I don't think the "3% credence in utilitarianism" is particularly meaningful; doubting the merits of a particular philosophical framework someone uses isn't an obvious reason to be suspicious of them. Particularly not when Sam ostensibly reached similar conclusions to Will about global priorities, and MacAskill himself has obviously been profoundly influenced by utilitarian philosophers in his goals too.

But I do think there's one specific area where SBF's public philosophical statements were extremely alarming even at the time, and he was making them while in "explain EA" mode. That's when Sam made it quite clear that if he had a 51% chance of doubling world happiness vs. a 49% chance of ending it, he'd accept the bet: a train to crazytown not many utilitarians would jump on, and also one which sounds a lot like how he actually approached everything.
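To spell out the naive expected-value arithmetic behind that bet (with current world happiness normalized to 1, which is just the thought experiment's framing):

$$0.51 \times 2 + 0.49 \times 0 = 1.02 > 1$$

So the gamble shows a 2% gain in expectation despite a 49% chance of ending the world, which is the sense in which most utilitarians would still call it a train to crazytown.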

Then again, SBF isn't a professional philosopher and never claimed to be, other people have said equally dumb stuff and not gambled away billions of other people's money, and I'm not sure MacAskill himself would even have read or heard Sam utter those words.

Will's expressed public view on that sort of double-or-nothing gamble is hard to actually figure out, but it is clearly not as robustly anti as common sense would require, though it is also clearly a lot LESS positive than SBF's view that you should obviously take it: https://conversationswithtyler.com/episodes/william-macaskill/

(I haven't quoted from the interview, because there is no one clear quote expressing Will's position; text search for "double" and you'll find the relevant stuff.)

fwiw, I wouldn't generally expect "high confidence in utilitarianism" per se to be any cause for concern. (I have high confidence in something close to utilitarianism -- in particular, I have near-zero credence in deontology -- but I can't imagine that anyone who really knows how I think about ethics would find this the least bit practically concerning.)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I basically agree with the lessons Will suggests in the interview, about the importance of better "governance" and institutional guard-rails to disincentivize bad behavior, along with warning against both "EA exceptionalism" and SBF-style empirical overconfidence (in his ability to navigate risk, secure lasting business success without professional accounting support or governance, etc.).

I think it would be a big mistake to conflate that sort of "overconfidence in general" with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It's just very obvious that you can have the latter without the former, and it's the former that's the real problem here.

[See also: 'The Abusability Objection' at utilitarianism.net]

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I disagree with Will a bit here, and think that SBF's utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.

I basically agree with the lessons Will suggests in the interview, about the importance of better "governance" and institutional guard-rails to disincentivize bad behavior

I'm pretty confused about the nature of morality, but it seems that one historical function of morality is to be a substitute for governance (which is generally difficult and costly; see many societies with poor governance despite near universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again I'm pretty confused and don't know how relevant this is, but it seems worth pointing out.)

I think it would be a big mistake to conflate that sort of "overconfidence in general" with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It's just very obvious that you can have the latter without the former, and it's the former that's the real problem here.

My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it is problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g., Will.

BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and "something nobody has thought of yet", but I feel like his credence for "something like utilitarianism" is too low. I'm curious to understand both why your credence for it is so high, and why his is so low.)

We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I'm much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.

I agree it'd be fun for us to explore the disagreement further sometime!

I don't necessarily disagree with most of that, but I think it is ultimately still plausible that people who endorse a theory that obviously says that, in principle, the ends can justify the means are somewhat (plausibly not very much) more likely to actually do bad things with an ends-justifies-the-means vibe. Note that this is an empirical claim about what sort of behaviour is actually more likely to co-occur with endorsing utilitarianism or consequentialism in actual human beings. So it's not refuted by "the correct understanding of consequentialism mostly bars things with an ends-justifies-the-means vibe in practice" or "actually, any sane view allows that sometimes it's permissible to do very harmful things to prevent a many orders of magnitude greater harm". And by "somewhat plausible" I mean just that: I wouldn't be THAT shocked to discover this was false; my credence is like 95% maybe? (1 in 20 things happen all the time.) And the claim is correlational, not causal (maybe endorsement of utilitarianism and ends-justifies-the-means type behaviour are both caused partly by prior intuitive endorsement of ends-justifies-the-means type behaviour, and adopting utilitarianism doesn't actually make any difference, although I doubt that is entirely true).

I don't necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.

I think it's fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.

I also think it's bad for people to engage in "moral profiling" (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative fears of this sort.

I just think it's very obvious that if you're worried about naive instrumentalism, the (morally and intellectually) correct response is to warn against naive instrumentalism, not other (intrinsically innocuous) views that you believe to be correlated with the mistake.

[See also: The Dangers of a Little Knowledge, esp. the "Should we lie?" section.]

Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse "in principle, the ends justify the means, just not in practice" over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time, I do worry a bit about bad effects from utilitarianism because I worry about bad effects from anything. I don't worry too much, but that's because I think those effects are small, and anyway there will be good effects of utilitarianism too. But I don't think utilitarians should be able to react with outrage when people say plausible things about the consequences of utilitarianism. And I think people who worry about this more than I do on this forum are generally acting in good faith. And yeah, I agree utilitarians shouldn't (in any normal context) lie about their opinions. 

Thanks for sharing; would be great if there was a transcript! 

Made a transcript with Microsoft Word.

Thanks for putting that together @Wei Dai! Out of curiosity, how did you make that transcript?

I followed the instructions here.

I've found Mac Whisper to be the most accurate (haven't tested many though), but it doesn't distinguish between speakers or do any formatting.
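For anyone who wants to roll their own transcript without a Mac, here is a minimal sketch using the open-source whisper Python package; this is an assumption-laden illustration (the filename episode.mp3 is hypothetical), not how either transcript mentioned above was made, and like Mac Whisper it won't distinguish speakers or do formatting:

```python
# Minimal transcription sketch with the open-source `whisper` package
# (not the Mac Whisper app, and not how the transcript linked above was made).
# Install with: pip install openai-whisper   (ffmpeg must also be available)
import whisper

model = whisper.load_model("base")        # larger models are slower but more accurate
result = model.transcribe("episode.mp3")  # hypothetical local copy of the episode audio
print(result["text"])                     # plain text only: no speaker labels or formatting
```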

Pro-tip, if you have an iPhone, Apple's podcast app now has transcriptions for most podcasts, including this one :) 

Thanks! Perhaps you or someone else with an iPhone could copy-paste it.

Ah yes, that would be handy. I can't see a way of doing that, unfortunately. 

Thanks very much to both of you for having this difficult conversation, and handling it with such professionalism.

Cards on the table, I agree with MacAskill about character vs structure/governance. So to me the 30 minutes of trying to get inside Bankman-Fried's head seemed a little fruitless. Though I guess there's something fascinating about trying to get into bad people's heads.

I would have liked more questions about due diligence. MacAskill says that he and Bankman-Fried chatted in early 2021 and then again, with Beckstead, after the FTX Foundation was set up. That's really useful information and context. But he didn't say more about, for example, what due diligence had been done by Beckstead, whether MacAskill did any further due diligence, what sort of questions he asked, or what the key considerations/evidence were.

For example, at some point MacAskill says "I heard they didn't even have a board" implying that he only found out after the FTX collapse. However, this seems like it should have been an obvious question to ask in late 2021 / early 2022. Indeed later he says the lack of a board in retrospect was a "very legible and predictable" signal. Similarly, MacAskill also says "if there'd been this discussion about Sam's character in 2021", which implies there wasn't much of a discussion. In general, I came away continuing to want to know a lot more about the questions and discussions that went into supporting the FTX Foundation in October-December 2021.

It seems very likely to me that Bankman-Fried, Nishad Singh and other FTX leaders would have lied to Beckstead and the rest of the FTX Foundation team, just as MacAskill was lied to and the employees/investors/media/regulators/etc. were lied to. They would have portrayed FTX as having strong incentives to be the 'good guys', as they intended to give away their money and wanted to be regulated. But some due diligence signals are harder to fake.
