
Meta

The following is a personal account of my (direct and indirect) interactions with Sam Bankman-Fried, which I wrote up in early/mid-November when news came out that FTX had apparently stolen billions of dollars from its customers.

I’d previously intended to post a version of this publicly, on account of how people were worried about who knew what when, but in the writing of it I realized how many of my observations were second-hand and shared with me in confidence. This ultimately led to me shelving it (after completing enough of it to extract what lessons I could from the whole affair).

I’m posting this now (with various details blurred out) because early last week Rob Bensinger suggested that I do so. Rob argued that accounts such as this one might be useful to the larger community, because they help strip away a layer of mystery and ambiguity from the situation by plainly stating what particular EAs knew or believed, and when they knew or believed it.

This post is structured as a chronological account of the facts as I recall them, followed by my own accounting of salient things I think I did right and wrong, followed by general takeaways.

Some caveats:

  1. I don’t speak for any of the people who shared their thoughts or experiences with me. Some info was shared with me in confidence, and I asked those people for feedback and gave them the opportunity to veto this post, and their feedback made this post better, but their lack of a veto does not constitute approval of the content. My impression is that they think I have some of the emphasis and framings wrong (but it’s not worth the time/attention it would take to correct).
  2. This post consists of some of my own processing of my mistakes. It's not a reaction to the whole FTX affair. (My high-level reaction at the time was one of surprise, anger, sadness, and disappointment, with tone and content not terribly dissimilar from Rob Wiblin’s reactions, as I understood them.)[1]
  3. The genre of this essay is "me accounting for how I failed my art, while comparing myself to an implausibly high standard". I'm against self-flagellation, and I don't recommend beating yourself up for failing to meet an implausibly high standard.

    I endorse comparing yourself to a high standard, if doing so helps you notice where your thinking processes could be improved, and if doing so does not cause you psychological distress.
  4. My original draft of this post started with a list of relatively raw observations. But the most salient raw observations were shared in confidence, and much of the remainder felt like airing personal details unnecessarily, which feels like an undue violation of others’ privacy. As such, I’ve kept the recounting somewhat vague.
  5. I am not particularly recommending that others in the community who had qualms about Sam write up a similarly thorough account. I was pretty tangential to the whole affair, which is why I can fit something this thorough into only ~7k words, and why it doesn’t seem to me like a huge invasion of privacy to post something like this (especially given what I’m keeping vague).

    Hopefully this helps people get a better sense of the degree to which at least one EA had at least some warning signs about Sam, and what sort of signs those were. Maybe it will even spark some candid conversation, as I expect might be healthy, if the discussion quality is good.
     

Short version

My firsthand interactions with Sam were largely pleasant. Several of my friends had bad experiences with him, though. Some of them gave me warnings.

In one case, a friend warned me about Sam and I (foolishly) misunderstood the friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing good ends, and wound up uncertain.

This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to try to form a unified picture of the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill”.

(And even if I hadn’t made this error, I don’t think I would’ve been able to change much, though I might have been able to change a little.)

 

Recounting

Mid 2015-17(?)

The very first time I met Sam was at the afterparty of an EA Global. I forget which one. If memory serves, somebody introduced him to me as a staunch causal decision theorist, someone who didn’t buy this logical decision theory stuff. We launched into an extended argument, and did not come to any consensus. This is the context in which I formed my first impressions.

 

Early 2018

Sam had moved into a group house a few blocks away from my house, while (co)founding Alameda Research. He employed a bunch of my friends, and (briefly) worked in the same office building as me.

One evening, a friend and I dropped by the group house and hung out. Sam and some other people were there, and a bunch of us stayed up late chatting about a wide range of topics. I found the conversation pleasant (and, in particular, didn’t get any bad vibes from Sam, and in fact enjoyed the spirit of candor reflected in his probing).

In early 2018, I heard secondhand from a bunch of my friends about a major conflict at Alameda Research resulting in a mass exodus from the org. A bunch of my friends said that they’d been burned in the conflict, and various people seemed bitter about their interactions with Sam in particular.

At the time, my only response was to file the observation away and offer sympathy. I didn’t pry for details. (In part because my default policy regarding community drama is to ignore it, on the theory that most drama is distracting and unimportant, and drama needs attention to breathe. And in part because it looked to me at a glance like Alameda was dead, which lowered my probability that a response was necessary.)

 

Late 2020

It wasn't until late 2020, when I was hanging out with one such friend, that I got a sense that the Alameda conflict had been much worse than I'd previously thought.

I was told some stories that gave me pause, though I continued to avoid prying about the details. Some of those details, plus bits and pieces of other accounts, gave me the overall impression that Sam is unfair, socially ruthless, and willing to betray handshake agreements.[2]

There were some stories that seemed to me to cross a “Not Cool” line, and I encouraged my friend to speak up publicly about what happened, and offered to signal-boost them and back them up. They declined, and noted that they'd already told a variety of other community-members (to no effect).

During that 2020 interaction, my friend asked whether I thought that Sam experiencing great success would be good or bad, and I said that my best guess was that it would be good.

At the time, I made the error of conflating my friend’s question with something more like "Do you think Sam is secretly in this business for personal glorification, and would reveal his true selfish colors upon attaining great wealth and power, or do you think that he is ultimately trying to do good?”

I answered this alternative question, and thereby gave mixed signals to a friend who was perhaps probing what sort of conviction I'd have in my support of them. Oops.

 

Late 2020 - early 2021

In the period between my aforementioned meeting with my friend and the point in early 2021 from which I have some chat logs, I heard more about Sam.

Unfortunately, I don’t quite know what I learned when, nor who I learned it from. (Although I remember at least one piece coming from a friend by way of song.) This was the period when Ben Delo was being charged with some sort of cryptocurrency regulation violation, and I heard a variety of rumors about people from a variety of places, some of which might have conflated Ben Delo with Sam; or I might have mixed up the two in my recollection later.

Things I vaguely recall hearing (or maybe mishearing) in this time period (including possibly at the end of my late-2020 visit; my memory is fuzzy here):

  1. Sam was now a decabillionaire.
  2. Alameda Research had survived, and moved to Hong Kong.
  3. Alameda Research had moved to Hong Kong because the US crypto regulations were too strict.
  4. Alameda Research had committed KYC regulation violations, and its executives were no longer welcome in the US (and might be apprehended if they attempted to re-enter).
  5. Alameda Research had changed its name to FTX.

There might’ve been others. I didn’t pay particularly close attention. Note that not all of these are true. I’m currently fairly confident that (1) and (2) are true (Wikipedia says Alameda Research moved in 2019), and (5) is false. My guess is that (3) is true and (4) was conflating Sam with Ben Delo? But I haven’t checked in detail.

I do recall some friends and family observing that my community seemed adjacent to the cryptocurrency community, and wanting to talk about it, sometime in this time period.

I recall saying (to a family member, using “hyperbolic/provocative” tone-markers) something along the lines of “Yeah, I have a friend who got into crypto trading and did everything by the books and wound up with a net worth of tens/hundreds of millions of dollars. And I have another friend who played fast and loose with the regulations, whose net worth is now ten billion dollars. From this, we learn that the cost of doing everything completely by the books is about ten billion dollars, because ~ten billion dollars minus ~a hundred million dollars is of course ~ten billion dollars.” (I also recall repeating this musing at least twice from cache, to at least two different friends, in mid 2021.)

 

Early 2021

I have some chat logs from early 2021 (not too long after I learned that Sam was very wealthy now) where a friend asked for my take on Sam (in the context of whether to engage with his altruistic endeavors), and I said I was (literal quote) "a little wary of him, on account of thinking he has fewer principles than most community members". I pointed my friend towards a mutual friend who'd had good interactions with Sam and a mutual friend who'd had bad interactions with Sam.

At about the same time, MIRI sold some MKR tokens (that had been donated to us) to Alameda Research because it was tricky to convert the MKR to USD on Coinbase Exchange, and Alameda had previously mentioned an interest in helping EAs with unwieldy crypto transactions. I interacted with Sam some at this time, to briefly get his take on some crypto questions while the channel was open.

 

Early 2022

Early in 2022, I swung by the FTX offices to briefly visit with some folks associated with the FTX Future Fund, while I was in the Bahamas for other business.

My next interaction with Sam was in a group setting, when we were both at an EA group-house in Washington, D.C. simultaneously. We hung out, it was a good time. I recall having some lingering discomfort around the “hey, I hear you were mean to my friends” thing, but not enough to bring it up out-of-the-blue in a group context (and it’s hard to say how much of a flinch there really was, on account of hindsight bias).

 

Shortly before November 8th, 2022

During the period where FTX was looking pretty shaky (so probably November 5th, 6th, or 7th?), I was coincidentally introduced to a cryptocurrency trader. He heard that I had some acquaintance with Sam, and said that something was up with FTX, and asked whether I thought Sam had stolen customers’ money. I said “I’ve heard that he’s often a prick, and that he’s skirted a variety of regulations, but I’d be preeeettty surprised if he didn’t have the customer money”.

(I’m glad that I was asked the question point-blank out-loud on Nov 5-7, because otherwise I think there’s a decent chance that hindsight bias today would cause me to inflate my memories of all the reasons I had for suspicion, and that I’d have forgotten how, on balance, I was surprised that Sam didn’t have the customer money, even in the wake of early suspicion.)

In the same conversation, I also vaguely recall reporting that I thought Sam was trying to legitimately do good with his money, when queried about whether the “EA” thing was legit.

(Embarrassingly, I don’t think it was until those conversations that I finally learned that Alameda Research had survived. My previous hypothesis was that it’d burned down, and FTX had risen from its ashes.)

 

Accounting

I’ll catalog some places where I’m either particularly pleased or displeased with my performance, in rough chronological order. Later in this post, I’ll record the general lessons I’ve managed to extract.[3]

 

I didn’t press for details

I had at least two opportunities (in early 2018 and in late 2020) to ask my friends for more details about their bad experiences, and I neither sought details then nor came back with questions later.

I was dissuaded from poking around in part by my impression that my friend was under some sort of non-disparagement agreement.

Reflecting now, my current guess is that it was an error for me not to pry merely because I thought non-disparagement agreements were involved.

I think that it would have been a good idea for me to explicitly encourage my friend to tell me more, insofar as my friend was willing to trust me to keep things confidential, and insofar as this was within the bounds of their idealized agreements (acknowledging that Earth is often pretty messed up about what the paper contracts literally say). Knowing more would have made it more likely that I could connect the dots and respond better (in ways that didn’t betray their confidence).

 

I failed to listen properly to my friends

When my friend asked me whether I thought Sam achieving great success would be good or bad, I was not consciously tracking the difference between the hypothesis "Sam is amoral and will intentionally use power for ill ends, if he acquires it" and the hypothesis "Sam is reckless and harmful in his pursuits, such that ill ends will result from him acquiring power, regardless of whether or not he ultimately has altruistic intent”. This is a foolish and basic mistake that I made multiple times. Oops.

I misheard my friend as arguing for the former, and weighed their arguments against my impression that Sam in fact had altruistic intent at heart, and wound up feeling uncertain (as evidenced by the later chat logs).

Commenting on an earlier draft of this post, my friend relayed to me the experience of trying to warn community members that Sam exhibited sketchy behavior, only to be rebuffed by claims to the effect of ~"if there are going to be sociopaths in power, they might as well be EA sociopaths".

I don't doubt my friend's claim. I didn’t see other people respond to their objections (and, if I understand correctly, I was only late and incidental to their overall experience). Separately, I can see how my own response fits into that overall impression.

My recollections don't support the hypothesis that I personally made the specific error of thinking that the sociopaths in power might as well be EA sociopaths (and I don’t know to what degree my friend read me as saying this), but human brains are not entirely trustworthy artifacts when it comes to memories that paint the rememberer in a bad light, so do with that what you will.

On my own recollections, what happened in my case is more like: I believe ~nobody is evil and ~everything is broken, and when I see humans accused of evil I get all defensive and argue that they're merely broken. (I have exhibited this pattern in a few other instances, which I'm now taking a second look at.)

In this case, in my defensiveness regarding Sam having altruistic intent, and my decision not to direct much attention to this topic, I entirely missed the point that broken people can also be dangerous.[4]

I think I was basically modeling the question of "is it good if Sam experiences great success?" as being a question of his ultimate ends, and thus turning on whether he was secretly evil (or suchlike). And I wasn't persuaded that he was secretly evil.

But that very breakdown considers only how Good things would be if Sam got to choose the ends by wishing on a genie, without taking into account the (real!) risk of shitty ends caused by unethical means!

My error here perhaps rhymes with the gloss “the sociopaths in power might as well be our sociopaths". But as far as I recall, I didn't explicitly make (and wouldn't have endorsed) any argument of the form "Sam is unlikely to cause massive collateral damage in his pursuit of wealth and power"; I was simply failing to notice that the answer to the given question depended on how much harm we should expect Sam to do along the way. (Oops. It feels obvious in retrospect. Sorry.)

(Extra context: if I recall correctly, I was not, at the beginning of that conversation in 2020, aware that Sam was wildly wealthy. I assumed that Alameda had died in 2018, and actually kinda thought we were discussing water under the bridge plus separate edgy thought experiments about whether the CEV of a self-professed Good-aligned ~sociopath is better or worse in expectation than the status quo (which question notably does not weigh harms that people would commit in their own recklessness). And I also didn’t reexamine the conversation at all upon learning that Sam had become a decabillionaire. I can be kinda clueless sometimes. Oops.)[5]

 

I pit my evidence against itself

I (foolishly) misunderstood my friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing good ends, and wound up uncertain.

This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to reveal the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill” that didn’t fit any of it.

I internally (implicitly) saw "strong evidence on both sides", and shrugged, and marked myself down as uncertain. But in real life, there's never strong evidence on both sides of a question about how the world is.

Falsehoods don't have strong evidence in favor of them, that happens to be barely outweighed by even stronger evidence for the truth! All the evidence points towards a single reality!

Example: If you have 15 bits of evidence that Mars is in the east, and 14 bits of evidence that Mars is in the west, you shouldn't be like, "Hmm, so that's one net bit of evidence for it being in the east" and call it a day. You should be like, "I wonder if Mars moves around?”

“Or if there are multiple Marses?”

“Or if I’m moving around without knowing it?"

Failing that, at least notice that you’re confused and that you don’t have a single coherent model that accounts for all the evidence.
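(To make the arithmetic explicit, here is a minimal sketch in log-odds form, using the made-up numbers above and assuming the two observations are conditionally independent. Likelihood ratios multiply, so bits of evidence add:

$$\log_2\frac{P(E_1 \mid \text{east})}{P(E_1 \mid \text{west})} = 15, \qquad \log_2\frac{P(E_2 \mid \text{east})}{P(E_2 \mid \text{west})} = -14, \qquad 15 - 14 = 1 \text{ net bit for east}.$$

One net bit shifts your east-vs-west odds by a mere factor of 2, and that’s the “call it a day” answer. But notice what the subtraction hides: the likelihood ratios force $P(E_2 \mid \text{east}) \le 2^{-14}$ and $P(E_1 \mid \text{west}) \le 2^{-15}$, so the joint data is wildly improbable (at most $\sim 2^{-14}$) under both hypotheses. When every hypothesis you’re considering is shocked by the data, that’s the cue to hunt for a new hypothesis, like “Mars moves”, that renders all the evidence unsurprising at once, rather than netting out the bits and moving on.)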

I was supposed to notice the tension, and seek some way that our apparently-contradictory evidence was not in fact in conflict.

Had I sought out a way to resolve the tension, I might have noticed that my friends were arguing not "Sam is pursuing Evil (despite your evidence to the contrary)" but rather "Sam is the sort of creature who does harm even in his pursuit of Good (and him succeeding is dangerous on those grounds)".

But I wasn't thinking about it clearly or carefully. I was just tossing various observations on different sides of an improperly-unified scale, and watching where it balanced.

And so when Sam did make a zillion dollars and start visibly putting it towards pandemic prevention and nuclear war prevention and etc. etc., that (subconsciously and implicitly) felt to me like it pulled down one side of the scales, and raised the other.

If I'd been thinking properly, I would have noticed that his shocking wealth was not in contradiction with the evidence on the other side of the scale, and was in fact easy to square with the hypothesis that his methods have been amoral. I might have even managed to explicitly form the hypothesis that his gains were ill-gotten.

But I wasn't thinking properly about the matter (or much at all, to be honest), and all the visible evidence of Goodness felt (at a glance) like it canceled out the competing evidence of amorality. A foolish mistake.

I pride myself on my ability to tease apart subtle tensions, and to avoid pitting the evidence against itself, in my areas of expertise. Clearly I have some work to do, to apply these skills more consistently or more broadly.

 

I think this was pretty cool of me.

Even though I was (foolishly) skeptical of (what I thought was) my friend’s “Sam is ill-intentioned” hypothesis, I nevertheless noticed that the stories my friend recounted sounded like Sam had crossed a line, and I encouraged them to speak up about it, and offered to back them up.

(My memories suggest that I offered to speak up about it myself at their behest, and take what flak I could, if they wanted me to, although that memory is significantly less clear and could easily be rose-tinted hindsight.)[6]

 

I failed to prod others to action

I did basically nothing in response to learning that my friend's concerns had theretofore fallen on deaf ears.

A cooler version of me would have taken that as more of a red flag, and made a list of deaf-eared people to pester, and then pestered them.

I didn't, and I regret that.

 

I failed to notice discomforts

When people make big and persistent mistakes, the usual cause (in my experience) is not something that comes labeled with giant mental “THIS IS A MISTAKE” warning signs when you reflect on it.

Instead, tracing mistakes back to their upstream causes, I think that the cause tends to look like a tiny note of discord that got repeatedly ignored—nothing that mentally feels important or action-relevant, just a nagging feeling that pops up sometimes.

To do better, then, I want to take stock of those subtler upstream causes, and think about the flinch reactions I exhibited on the five-second level and whether I should have responded to them differently.

Looking at the sort of things I said to friends and family in 2021, I was clearly aware that Sam is the sort of person who readily skirts regulations.

I wish I lived in a world where this was a damning condemnation, but alas, my current model is that regulations are often unduly stifling and generally harmful.[7]

I’d mentally binned various KYC-ish cryptocurrency regulations in the “well-intentioned but poorly-implemented” category, and did not in the slightest suspect FTX of mixing funds with customer assets. (I didn’t even yet have separate ‘FTX’ and ‘Alameda’ concepts; I just wasn’t paying that much attention.)

Looking back, I think that I remember experiencing little mental flinches when I referred to Sam (in passing) as a “friend” (although my brain might be exaggerating the memories in a self-serving / hindsight-biased way), on account of having unresolved grievances of the form “I’ve heard you were pretty shitty to people I care about”.

To be clear, though, the flinches were not of the form “maybe he’s stealing from clients”—I don’t recall the thought even occurring to me that Sam might be committing financial fraud or doing anything similarly bad. They were of the form “he seems to have hurt people I care about”.

And, for the avoidance of doubt, if not for hearing that he’d hurt my friends, I’d’ve unflinchingly called him “friend”—we shared a community, we’d had a few long involved philosophical arguments, we’d stayed up late talking at his house; that’s enough for me.[8]

I also recall similar little flinches at (e.g.) the group house in D.C., or on Nov 5-7 (when I struggled again for words for my relationship to Sam, and settled—if I recall correctly—on “not-very-close friend”, with some caveats about how I’d heard tell of shady behavior).

Reflecting a bit further, I think that the things I told people about Sam were colored somewhat by the tone of their inquiries. Once Sam started getting press for his donations, the tenor of some friend/family inquiries became more skeptical, and my responses changed to match: people would ask me questions like “is this Sam guy legit?”, and I would mentally substitute questions like “are these EA charities he’s donating to legitimate, and are they actually getting the money?”, which I felt much more readily able to answer. (Whereas in early 2021, I was more likely to mention the Hong Kong rumors, or the bad blood with my friends.)

But even then, I recall flashes of unease.

If I'd noticed them explicitly, perhaps I could have traced them back to the source. Perhaps that would have been the catalyst needed for me to stop pitting my "he hurt my friends" evidence against my "he's trying to do a lot of good" evidence. And if I'd found the (pretty basic!) way to reconcile all the evidence simultaneously, it might have led me straight to the truth.

 

Lessons

Pry more

I think there’s a way to pry into friends’ bad social experiences that is fueled partly by genuine curiosity (yes, even for juicy gossip) and partly by genuine compassion, and that makes it easier to support one’s friends.

Had I done more of this in the case of Sam and Alameda, I might have had more puzzle-pieces to work with, and I plan to do more prying into my friends' concerns going forward.

(This is a lesson that I’ve already been taught once before, by some bad actors in the rationality community. That said, I wasn’t taught that particular lesson until mid 2018, so by my own accounting, I get only one “learns slowly” strike from the late 2020 conversation.)

I also think that I should think of non-disparagement agreements as pertaining to public knowledge, not to knowledge shared in confidence between friends, and that I shouldn’t let their existence dissuade me from inquiring further.

I think I’m basically already better at this, having written this all out explicitly.

 

Don’t pit your evidence against itself

Fixing the “I pit my evidence against itself” problem is easy enough once I’ve recognized that I’m doing this (or so my visualizer suggests); the tricky part is recognizing that I’m doing it.

One obvious exercise for me to do here is to mull on the difference between uncertainty that feels like it comes from lack of knowledge, and uncertainty that feels like it comes from tension/conflict in the evidence. I think there’s a subjective difference, that I just missed in this case, and that I can perhaps become much better at detecting, in the wake of this harsh lesson.

The other obvious way to notice more readily when I’m doing this is to get better at noticing my own unease in general.

 

Notice more unease

This is a key rationalist skill that, in my experience so far, I’ve always had more room to improve on.

I think my main action-item here is to mull on the particular instances of unease that I felt at various times, and attend to their connection to recent painful events, on the model that this helps hone my unease-detectors in general.

 

Have more backbone

A thing I feel particularly bad about is not confronting Sam at any point about the ways he hurt people I care about.

I can come up with a variety of excuses: I basically only saw him in group contexts; once he apparently had tens of billions of dollars and was frantically running around trying to grow his wealth and put it towards good causes, it felt like a dick move to bring up years-old second-hand injuries out of the blue; I’d never heard his side of the story.

But also, failing to have the intent to grill him about it seems to have eroded my memory, and taken the edges off of my anger. I can still remember the sense of “yep, that crosses a line, fuck him, I’ll back you up if you need support” in that late 2020 conversation, and when I hold that memory fresh in my mind, I’m embarrassed by how broadly cordial I was at that group house in 2022.

There’s a virtue, I think, to hearing about something that was Not Cool, and then... being unwilling to let it slide. I don’t think that, on my ideal ethics, I’d be required to confront Sam; avoiding him might also be permitted. And, on my ideal ethics, I’d definitely entertain the hypothesis that things looked very different from his own point of view. But I do think that, according to my ethics, I’m not supposed to hear concerns and then just slip silently back into generic cordiality.

(A part of me does protest, here, that this is particularly tricky in cases where the information is shared in confidence, in which case I don’t necessarily have license to confront Sam directly without violating that confidence. But, like, that’s not an excuse to slip silently into generic cordiality; it’s an excuse to notice that I’m chafing under confidentiality and then try to work out some other solution. Which might have, e.g., driven me to prod others to action.)

I think it’s plausible that I’ve gotten better at this simply by noticing the error, writing all this out, and reflecting on the embarrassment.

 

On blame

In writing this post, I worry that my words will be seen as giving social license to EAs to self-flagellate. So it seems important to reiterate that I’m against self-flagellation.

Having an overly pessimistic model of yourself is no more virtuous than having an overly optimistic model. Nor is it virtuous to exaggerate your faults to others. I’m not here trying to take a bunch of other people’s blame onto myself.

I'm disappointed with myself for not reflexively fitting all my evidence together into a single whole, and for failing to explicitly notice my unease multiple times. These are places where I strive to excel in general, and I intend to do better next time. But if your takeaway is that there’s a bunch of blame at my feet—

—well, actually, that’s fine, I don’t really care. But I do care if you adopt that stance toward the friends of mine who criticized Sam. I feel preemptively protective of my friends here, against an imagined internet mob who will proceed from here to allege that they didn't do their part.

As far as I’m concerned, my friends who tried to warn people about Sam already well more than paid their dues, and the correct community response is "oops, we should have listened better" and not "why didn't you shout louder?"[9]

I’ll also caveat that I’m not trying to place any blame at the feet of others who turned an apparently-deaf ear, given my current knowledge state. I don’t know what Sam’s side of the story sounded like, or whether there was some coordination failure where nobody felt like they were the one who could do something about it, or what. I am familiar with how clues that look obvious in retrospect can be difficult to assemble in advance, and with the phenomenon where diffusion of responsibility makes it hard for a community to do anything in response to warnings.

My impression, looking both within my own communities and at the broader world (that contains various other issues that lurk unseen for ages before being declared obvious in hindsight), is that this stuff is tricky to properly notice and address in advance.

As for myself, and the degree to which I personally turned a deaf ear—well, you already have my accounting, above. I think I definitely could have done better. I had lots of the puzzle-pieces, and if I’d been thinking better, I could have put them together and avoided a bunch of surprise.

I might even have been able to catalyze a public account of a bunch of the sketchy behavior at early Alameda, which might maybe have caused the EA community to keep a bit more distance from Sam, which would plausibly have caused him to have a lower reputation, and somewhat fewer victims. Which would have been great!

But, also, that’s not quite what this document is about. The use of this sort of document, to me, is that I can improve my ability to think for next time. (And the use I’m imagining for the community is that I’m hoping to lead by example, when it comes to giving honest and relatively candid accounts that we can possibly learn from. Or something, I dunno, ask Rob Bensinger, he’s the one who exhumed this from my drafts.)

I’d be mining the situation for self-improvements even if Omega themself guaranteed that none of my actions could have averted any of the harm Sam did. I’m here taking harsh lessons and seeing what I can learn about how to think better and be cooler; I’m not here to weave a tale about how the outcome secretly depended deeply on my own actions.[10]

It’s not virtuous to pretend that outcomes depend on your efforts to a greater degree than they actually do.
 

A parting note

Oh, right, one more piece of accounting that I almost forgot:

Clearly,[11] my real opportunity to avert this whole catastrophe was to be more persuasive in that first CDT vs. LDT conversation. It seems likely to me that Sam had some deficit in modeling the consequences of being shady and untrustworthy in multi-agent decision problems, and if that deficit had been repaired back in ~2015, perhaps this whole mess could have been avoided. Mea culpa.


 

  1. ^

    I’m also not trying to document MIRI’s financial interactions with Sam, Alameda, FTX, or the FTX Foundation. Rob Bensinger collected that information here.

  2. ^

    I did not come away with the belief that Sam was defrauding his clients. I’m not aware of any fraud or theft having been part of the 2018-Alameda story. But I still don’t know all the details, and (e.g.) the details Naia Bouscal and others shared in November about Alameda’s early history go beyond what I recall learning.

  3. ^

    A notable absence in this list is that I did not pay much attention to Sam, or FTX, or Alameda. That definitely contributed to my failure to notice that bad stuff was happening, but I stand by the decision given what I knew at the time, because I have other stuff to do.

  4. ^

    The FTX debacle and other revelations have updated me a little toward "Sam's ultimate goals may not have been altruistic at all", but this is a pretty small update. Mostly my guess is that Sam's bad behavior came from issues orthogonal to his ultimate goals, or was even exacerbated by his altruism. (E.g., a low-integrity person with altruistic ends may find it easier to rationalize bad behavior because the stakes are so high, or because the feeling of moral purity leaks out and contaminates everything they do with a sense of Virtuousness.)

    Reflecting on this post as a whole, I have an overall concern that it isn't compassionate enough towards Sam. I worry about the social incentives to only speak up about this topic if you're willing to flatten your models into caricatures and strategically empathize with all and only the people who it's strategically savvy to empathize with.

    My best guess is still that Sam had a bunch of good intent, and tried to do lots of good, and really was serious about putting money towards good.

    I separately think there's a pretty solid chance that he was (reckless and negligent and foolish and) not noticing that it wasn't his money he was giving away; that he really did think that it was his own hard-earned money. Though I also entertain the hypothesis that he knew exactly what he was doing; and regardless, he’s at fault for the resultant destruction.

    I'm angry that (in effect, and by all appearances) an enormous number of people had their money stolen from them, after trusting Sam and FTX to do right by them. This has caused a great deal of hardship for a huge number of people, and through his (apparent) actions, Sam has in my eyes moved himself to the back of the compassion-line: any efforts we extend to help people should go to the victims first, long before they go to the perpetrators.

    I don't really know how to walk the line between "you dicked over my friends", and “you hurt and betrayed huge numbers of innocent people”, and "I nonetheless feel compassion for you as a fellow human being", and “I’ve enjoyed hanging out on occasion”, and "I don't in fact, in real life, think that you're a one-dimensional villain with no rare praiseworthy qualities", and “I deeply respect people dedicating their resources towards addressing deep and real issues that they see around them”, and “... but those were not your resources”, and an overarching “you were a deluded reckless harmful fool”.

    So for lack of knowing how to walk that line, I can at least comment on the problem in this footnote.

  5. ^

    To be clear, I don't think it's my friends' fault that I got little info in these early conversations. I was not acting curious, because (as I mentioned earlier) I was under the impression that they were bound by various privacy agreements and I felt it would be antisocial to pry.

  6. ^

    Evidence for the theory that my recollections are rose-tinted: in an earlier draft of this post, I asked my friend whether I had in fact encouraged them to speak up and offered backup, and they answered something to the effect of: yes, but also in the same conversation you argued that Sam having lots of power was probably good for the world, which undermined the message. I’d completely forgotten about that bit, before they jogged my memory.

    More evidence for the rose-tinting theory: before I jogged my memory, I had a vague recollection that I had started out the conversation not knowing that Sam was, at that time, rich and powerful, and then learned that this was the case, and doubled down. But, after more recollection, I now am pretty confident that I learned about Sam’s billionaire status a few months later, when a local country-music legend played a rendition of his song “My Girlfriend Left Me For A Billionaire” at a COVID-safe gathering. My clear memory of surprise from the song casts the fuzzy/distant memory of doubling-down into deep suspicion.

  7. ^

    For example, I think ridesharing apps have created a large amount of social value, despite how—if I understand correctly—they were technically illegal in various places when they started out. And, for another example, I would prefer that websites stop showing me the “accept cookies” prompt and just use cookies, regardless of how illegal that is.

  8. ^

    Another factor that I recall weighing in the split-second word-choice: Once somebody has wealth and power, I’m more hesitant to use the label “friend”, for fear of exaggerating the strength of a relationship that it would be cool to have.

    And another factor: I’d never heard Sam’s side of the early-Alameda blowup story, and felt weird about passing strong judgment before hearing it.

    Ultimately, the choice was probably decided by the fact that English doesn’t have a great word for our relationship—"acquaintance" is three syllables and isn’t quite right, “co-community-member” is closer but it’s just way too long.

  9. ^

    Recall that my friends who worked at Alameda until the 2018 breakup didn’t commit egregious financial fraud. FTX’s behaviors were bad, as were the behaviors of Alameda after the exodus, but if we aren’t endorsing the Copenhagen Interpretation of Ethics, being in the blast radius of other people’s bad behavior does not make you evil too.

    (Though if it were just me at risk of being Copenhagened, I’d cut most of this section and not worry about it. If I’m exposing other people to risk of being Copenhagened by writing a blog post that touches on something they did, then I feel more of a responsibility to add in disclaimers like this.)

    I think it genuinely would have been really cool if some people in the Alameda blast zone had loudly publicly aired concerns in advance, despite how early attempts to talk about it apparently fell on deaf ears. But doing so also would have involved going significantly above and beyond the call of duty. The general policy of demanding everyone routinely take on that much personal cost is asking far too much of individuals—doubly so given that nobody I talked to knew about Sam’s apparent financial crimes (as far as I can tell), as opposed to more general shadiness.

    If your takeaway from this is “the people in the blast radius should have spoken up louder”, rather than “how can the community improve its mechanisms for incentivizing and aggregating this sort of knowledge”, then I think you’re taking quite the wrong message (while worsening the incentives, to boot).

  10. ^

    That’s why this doc focuses on my own errors—like acting more cordial than I endorse given what I knew—rather than on bigger issues like mitigating the harm Sam did. Harm mitigation is important stuff, and there’s a place for it, but this document isn’t that place.

  11. ^

    This word is intended to be read with an intonation signifying that the following text is mostly joking (albeit perhaps with a small grain of truth).

Comments

I appreciate this post a lot, particularly how you did not take more responsibility than was merited and how you admitted thinking it wasn't a red flag that SBF skirted regulations bc the regulations were probably bad. I appreciated how you noticed hindsight bias and rewritten history creeping in, and I appreciate how you don't claim that more ideal actions from you would have changed the course of history but nonetheless care about your small failures here.

Do you think EA's self-reflection about this is at all productive, considering most people had even less information than you? My (very, very emotional) reaction to this has been that most of the angst about how we somehow should have known or had a different moral philosophy (or decision theory) is a delusional attempt to feel in control. I'm just curious to hear in your words if you think there's any value to the reaction of the broader community (people who knew as much as you, or less, about SBF before 11/22).

Do you think EA's self-reflection about this is at all productive, considering most people had even less information than you?

I don't have terribly organized thoughts about this. (And I am still not paying all that much attention—I have much more patience for picking apart my own reasoning processes looking for ways to improve them, than I have for reading other people's raw takes :-p)

But here's some unorganized and half-baked notes:


I appreciated various expressions of emotion. Especially when they came labeled as such.

I think there was also a bunch of other stuff going on in the undertones that I don't have a good handle on yet, and that I'm not sure about my take on. Stuff like... various people implicitly shopping around proposals about how to readjust various EA-internal political forces, in light of the turmoil? But that's not a great handle for it, and I'm not terribly articulate about it.


There's a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say "I'm such a fool; I should have bet 23".

More useful would be to say "I'm such a fool; I should have noticed that the EV of this gamble is negative." Now at least you aren't asking for magic lottery powers.
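(For concreteness, with the standard American-roulette numbers: a single-number bet pays 35:1 and wins with probability $\tfrac{1}{38}$, so the expected value per dollar staked is $\tfrac{1}{38} \cdot 35 - \tfrac{37}{38} \cdot 1 = -\tfrac{2}{38} \approx -5.3\%$, regardless of which number you pick. That computation is available before the wheel ever spins.)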

Even more useful would be to say "I'm such a fool; I had three chances to notice that this bet was bad: when my partner was trying to explain EV to me; when I snuck out of the house and ignored a sense of guilt; and when I suppressed a qualm right before placing the bet. I should have paid attention in at least one of those cases and internalized the arguments about negative EV, before gambling my money." Now at least you aren't asking for magic cognitive powers.

My impression is that various EAs respond to crises in a manner that kinda rhymes with saying "I wish I had bet 23", or at best "I wish I had noticed this bet was negative EV", and in particular does not rhyme with saying "my second-to-last chance to do better (as far as I currently recall) was the moment that I suppressed the guilt from sneaking out of the house".

(I think this is also true of the general population, to be clear. Perhaps even moreso.)

I have a vague impression that various EAs perform self-flagellation, while making no visible attempt to trace down where, in their own mind, they made a misstep. (Not where they made a good step that turned out in this instance to have a bitter consequence, but where they made a wrong step of the general variety that they could realistically avoid in the future.)

(Though I haven't gone digging up examples, and in lieu of examples, for all I know this impression is twisted by influence from the zeitgeist.)


My guess is that most EAs didn't make mental missteps of any import.

And, of course, most folk on this forum aren't rushing to self-flagellate. Lots of people who didn't make any mistake, aren't saying anything about their non-mistakes, as seems entirely reasonable.


I think the scrupulous might be quick to object that, like, they had some flicker of unease about EA being over-invested in crypto, that they should have expounded upon. And so surely they, too, erred.

And, sure, they'd've gotten more coolness points if they'd joined the ranks of people who aired that concern in advance.

And there is, I think, a healthy chain of thought from there to the hypothesis that the community needs better mechanisms for incentivizing and aggregating distributed knowledge.

(For instance: some people did air that particular concern in advance, and it didn't do much. There's perhaps something to be said for the power that a thousand voices would have had when ten didn't suffice, but an easier fix than finding 990 voices is probably finding some other way to successfully heed the 10, which requires distinguishing them from the background noise—and distinguishing them as something actionable—before it's too late, and then routing the requisite action to the people who can do something about it, etc.)

I hope that some version of this conversation is happening somewhere, and it seems vaguely plausible that there's a variant happening behind closed doors at CEA or something.

I think that maybe a healthier form of community reflection would have gotten to a public and collaborative version of that discussion by now. Maybe we'll still get there.

(I caveat, though, that it seems to me that many good things die from the weight of the policies they adopt in attempts to win the last war, with a particularly egregious example that springs to mind being the TSA. But that's getting too much into the object-level weeds.)

(I also caveat that I in fact know a pair of modestly-high-net-worth EA friends who agreed, years ago, that the community was overexposed to crypto, and that at most one of them should be exposed to crypto. The timing of this thought is such that the one who took the non-crypto fork is now significantly less comparatively wealthy. This stuff is hard to get right in real life.)

(And I also caveat that I'm not advocating design-by-community-committee when it comes to community coordination mechanisms. I think that design-by-committee often fails. I also think there's all sorts of reasons why public attempts to discuss such things can go off the rails. Trying to have smaller conversations, or in-person conversations, seems eminently reasonable to me.)


I think that another thing that's been going on is that there are various rumors around that "EA leaders" knew something about all this in advance, and this has caused a variety of people to feel (justly) perturbed and uneasy.

Insofar as someone's thinking is influenced by a person with status in their community, I think it's fair to ask what they knew and when, as is relevant to the question of whether and how to trust them in the future.

And insofar as other people are operating the de-facto community coordination mechanisms, I think it's also fair to ask what they knew and when, as is relevant to the question of how (as a community) to fix or change or add or replace some coordination mechanisms.


I don't particularly have a sense that the public EA discourse around FTX stuff was headed in a healthy and productive direction.

It's plausible to me that there are healthy and productive processes going on behind closed doors, among the people who operate the de-facto community coordination mechanisms.

Separately, it kinda feels to me like there's this weird veil draped over everything, where there's rumors that EA-leader-ish folk knew some stuff but nobody in that reference class is just, like, coming clean.

This post is, in part, an attempt to just pierce the damn veil (at least insofar as I personally can, as somebody who's at least EA-leader-adjacent).

I can at least show some degree to which the rumors were true (I run an EA org, and Alameda did start out in the offices downstairs from ours, and I was privy to a bunch more data than others) versus false (I know of no suspicion that Sam was defrauding customers, nor have I heard any hint of any coverup).

One hope I have is that this will spark some sort of productive conversation.

For instance, my current hypothesis is that we'd do well to look for better community mechanisms for aggregating hints and acting on them. (Where I'm having trouble visualizing ways of doing it that don't also get totally blindsided by the next crisis, when it turns out that the next war is not exactly the same as the last one. But this, again, is getting more into the object-level.)

Regardless of whether that theory is right, it's at least easier to discuss in light of a bunch of the raw facts. Whether or not everybody was completely blindsided, vs whether we had a bunch of hints that we failed to assemble, vs whether there was a fraudulent conspiracy we tried to cover up, matters quite a bit as to how we should react!

It's plausible to me that a big part of the reason why the discussion hasn't yet produced Nate!legible fruit, is because it just wasn't working with all that many details. This post is intended in part to be a contribution towards that end.

(Though I of course also entertain the hypotheses that there's all sorts of different forces pushing the conversation off the rails (such that this post won't help much), and the hypothesis that the conversation is happening just fine behind closed doors somewhere (such that this post isn't all that necessary).)

(And I note, again, that insofar as this post does help the convo, Rob Bensinger gets a share of the credit. I was happy to shelve this post indefinitely, and wouldn't have dug it out of my drafts folder if he hadn't argued that it had a chance of rerailing the conversation.)

Fwiw, for common knowledge (though I don't know everything happening at CEA), so that other people can pick up the slack and not assume things are covered, or so that people can push me to change my prioritization, here's what I see happening at CEA in regard to:

"finding some other way to successfully heed the 10, which requires distinguishing them from the background noise--and distinguishing them as something actionable--before it's too late, and then routing the requisite action to the people who can do something about it"

  • I've been thinking some about it, mostly in the context of thinking that every time something shifts a lot in power or funding, that should potentially be an active trigger for us as a team to investigate / figure out if anything's suss. We're not often going to be the relevant subject-matter experts, but we can find others, and ask a bunch of EAs what they personally know, if they're comfortable speaking.
    • It's also been more salient to me since reading your comment!
  • Maybe stronger due diligence than normal financial checks on major EA donors shouldn't actually be my team's responsibility, in which case we should figure out whose it is.
  • The community health team as a whole is doing some thinking about it, especially via the mechanism "how do we gather more of people's vague fuzzy concerns that wouldn't normally rise to the level of calling out / how do we make it easier to talk to us", but we're also at some point planning to do a reflection on what we missed / didn't make happen that we wish we had, given our particular remit
  • Nicole Ross, the normal manager of the team, who's been doing board work for months, has been thinking a lot about what should change generally in EA and plans to make that reflection and orienting a top priority as she comes back.
  • The org as a whole is definitely thinking about "what are the root causes that made this kind of failure happen and what do we do about that", and one of my colleagues says they're thinking about the particular mechanism you point to, but conversations I've been a part of have not emphasized it.
  • There's a plan to think about structural and governance reform, which I would strongly assume would engage with the question of better/alternate whistleblowing structures as well as other related things, and only end up not suggesting them if they seemed bad or other things were higher priority.

If my colleagues disagree, please say so! I think overall it's correct to say this particular thread isn't a top priority of any person or team right now. Perhaps it should be! But there are lots of threads, and I think this one is important but less urgent. I'd like to spend some time on it at some point, though. Happy to get on a call and chat about it.

FWIW, I would totally want to openly do a postmortem. Once the bankruptcy case is over, I'll be pretty happy to publicly say what I knew at various points in time. But I'm currently holding back for legal reasons, and instead discussing it (as you said) "behind closed doors". (Which is frustrating for everyone who would like to have a transparent public discussion, sorry about that. It is also really frustrating for me!)

I think the truth is closest to "we had a bunch of hints that we failed to assemble"


 

FWIW, I think such a postmortem should start w/ the manner in which Sam left JS. As far as I'm aware, that was the first sign of any sketchiness, several months before the 2018 Alameda walkout.

Some characteristics apparent at the time:

  • joining CEA as "director of development", which looks like it was a ruse to keep JS from learning his true intentions
  • hiring away young traders who were in JS's pipeline at the time

I believe these were perfectly legal, but to me they look like the first signs that SBF was inclined to:

  • choose the (naive) utilitarian path over the virtuous one
  • risk destroying a common resource (good will / symbiotic relationship between JS and EA) for the sake of a potential prize

These were also the first opportunities I'm aware of that the rest of us had to push back and draw a harder line in favor of virtuous / common-sense ethical behavior.

If we want to analyze what we as a community did wrong, this to me looks like the first place to start.

As a psych professor, I found this to be a real tour-de-force of self-analysis, analyzing the many ways in which the strengths, weaknesses, and biases of human social cognition, person perception, memory, and rationalization can play into our interactions and judgments of people.  

I've rarely seen rationalist epistemic humility applied so insightfully to one's own social experiences and impressions. Bravo.

I really really like this kind of accounting of people's thinking and mistakes; I think it's a kind of "cognitive apprenticeship" that makes what is usually invisible (the inside of people's heads) visible + does some great modelling of owning mistakes. It also has a great "speaker for the dead" quality, just telling things as they are (at least I hope! That's definitely the vibe).

I'm really interested in improving "mechanisms for incentivizing and aggregating this sort of knowledge", and have some thoughts in that direction; if people have more, I would like to hear them. 

(A lot of the time lately when I write a comment this nice right after reading something (maybe 2 of the last 3? 2 of the last 4 or 5 if you use different definitions), I wish later that I'd tempered it more, so I'll come back and edit if that's the case.)

So for lack of knowing how to walk that line, I can at least comment on the problem in this footnote.

Very important/good footnote imo.

This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to try to form a unified picture of the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill”.

There's a thing here which didn't make its way into Lessons, perhaps because it's not a lesson that Nate in particular needed, or perhaps because it's basically lumped into "don't pit your evidence against itself."

But, stating it more clearly for others:

There is a very common and very bad mistake that both individuals and groups tend to make a lot in my experience, whereby they compress (e.g.) "a 60% chance of total guilt and a 40% chance of total innocence" into something like "a 100% chance that the guy is 60% guilty, i.e. kinda sketchy/scummy."
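To make this concrete with a toy example (numbers made up purely for illustration): let $G \in \{0,1\}$ stand for "totally innocent" vs. "totally guilty," and suppose your evidence gives

$$\Pr(G=1) = 0.6, \qquad \Pr(G=0) = 0.4.$$

This is a 60/40 mixture over two extreme worlds, which is not the same belief as "$G = 0.6$ with certainty, i.e. he's moderately sketchy." The two have the same expectation, but they behave differently under new evidence: the mixture should snap toward one extreme as observations come in, while the collapsed version makes further evidence about guilt feel beside the point.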

I think something like DO NOT DO THIS or at the very least NOTICE THIS PATTERN maybe is important enough to be a Lesson for the median person here, although plausibly this is not among the important takeaways for Nate.

Isn't that the opposite of what Nate said?

Nate says that he had some evidence that Sam was good in some way (good intent) and some evidence that Sam was bad in some ways (bad means). The correct conclusion in this case (probably?) is that Sam was part good, part bad. But Nate mistakenly thought of this as some chance Sam is totally good and some chance totally bad.

I'm not denying that what you (Duncan) point to is a real mistake that some people make. But I don't see it in this case.

Note that I specifically wanted to hit the failure mode where there is, in reality, a clear-cut binary (e.g. totally innocent or totally guilty).

But yeah, you're correct that this is not what was going on with SBF or Nate's assessments. More of a "this made me think of that," I guess.

I think that "this made me think of that" is a valid reason for a comment. 

I'm currently not sure if my comment on your comment is stupid nitpicking or relevant clarification.

I think Nate is saying that the question was actually "will he cause harm on his way to doing good" or "will he do only good", and the correct conclusion was in fact "he will cause harm on his way to doing good", and that this was a binary fact about the universe.

But tangled in that is that at the time he thought that the question was "is he good" vs "is he bad" or something. And on this he did the false average and shrug thing. So Duncan's answer is quite relevant imo.

5. Alameda Research had changed its name to FTX.

I basically heard the same thing in late 2021, and I am upset that I stayed silent about it even though it really alarmed me at the time. 

Everywhere on the public internet, Alameda Research and FTX had painted themselves as clearly different companies. Since October 2021, they've ostensibly had disjoint sets of CEOs. By late 2021 I had watched several interviews with SBF and followed his output closely on Twitter, and saw people talking about Alameda and FTX in several crypto Discord servers. Nowhere did anyone say that Alameda had changed its name to FTX or otherwise act as if they were the same company (though everyone knew they were close).

And yet, one day in late 2021, I saw an EA who had worked at Alameda after FTX was founded casually conflating the two companies. They said something like, "Alameda, the company that is now called FTX, [rest of sentence]." They seemed to think there was nothing noteworthy about what they'd just said, but it really shocked me. How could they know less about FTX and Alameda than me, who had never worked at either company and was just watching everything from the sidelines? If it was possible for this person to think that FTX was merely the new name for Alameda, that almost certainly implied that the FTX/Alameda leadership was putting a lot of effort into consistently lying to the public.

That brief sentence kept coming back to my mind, and it really made me uncomfortable. But yeah, I stayed completely silent, other than bringing that up a few times to my partner. I'm low-status, so I would likely not have achieved anything by speaking up, but perhaps I should have done so anyway.

I'm extremely opposed to the culture of silence in EA/rat spaces. It is very extreme.

This has the ring of deeply reflective and effortful honesty to me. It feels like a rare level of seeing what is there to be seen, undiverted by whatever conclusions may have been written in your subconscious before you started, and with appropriate caution given toward the traps of fallible memory and motivated narrativizing along the way. I also appreciated seeing your process of updating heuristics with the garage door up. "Don't pit your evidence against itself" and the ignorance uncertainty vs tension uncertainty distinction feel particularly like things I want to reflect on.

Thanks for showing your part of the map so clearly.

Thanks for posting this. I think giving detailed reflections and "lessons learned" like this can be really helpful in these sorts of situations, but I also recognize it can be tough to do in public. Positive reinforcement for this openness and frank discussion!

[Full disclosure: I had two small grants (EDIT: regrants) from FTXFF, and did contract work for several orgs FTX definitely or plausibly funded]

A few times in the year before FTX broke, a friend (waiting on a thumbs-up to use their name) asked what I thought about funding altruistic work with crypto: was crypto maybe inherently bad, did this give us incentives to overlook bad things...? FTX was obviously the largest part of this, but he also questioned the low-level crypto millionaires. Both times I remember, I countered with "is crypto worse than Facebook?"

That was a sincere question, and I'm still not sure the answer for the field as a whole is "yes". But the follow-up conversation focused on my slightly edgy claim, which it was obvious we weren't going to act on (refuse crypto and openphil money? preposterous!), and away from the hard questions whose answers might change my actions. I wasn't consciously trying to deflect, but if I had been, I'm not sure I could have picked a better method.

There was information that was knowable in early 2022 but was not known to me (e.g. the early exodus from Alameda and the resulting bad blood, or FTX's aggressive recruitment of naive users), and I think if I'd known about it I would have reacted differently on the margin. 

When people make big and persistent mistakes, the usual cause (in my experience) is not something that comes labeled with giant mental “THIS IS A MISTAKE” warning signs when you reflect on it.

Instead, tracing mistakes back to their upstream causes, I think that the cause tends to look like a tiny note of discord that got repeatedly ignored—nothing that mentally feels important or action-relevant, just a nagging feeling that pops up sometimes.

To do better, then, I want to take stock of those subtler upstream causes, and think about the flinch reactions I exhibited on the five-second level and whether I should have responded to them differently.

I don't see anything in the lessons on the question of whether or not your stance on drama has changed, which feels like the most important bit?

That is, suppose I have enough evidence to not-be-surprised-in-retrospect if one of my friends is abusing their partner, and also I have a deliberate stance of leaving other people's home lives alone. The former means that if I thought carefully about all of my friends, I would raise that hypothesis to attention; the latter means that even if I had the hypothesis, I would probably not do anything about it. In this hypothetical, I only become a force against abuse if I decide to become a meddler (which introduces other costs and considerations).

Good point! Currently, I think the "pry more" lesson is supposed to account for a bunch of this.

Since making this update, I have in fact pried more into friends' lives. In at least one instance I found some stuff that worried me, at which point I was naturally like "hey, this worries me; it pattern-matches to some bad situations I've seen; I feel wary and protective; I request an opportunity to share and/or put you in touch with people who've been through putatively-analogous situations (though I can also stfu if you're sick of hearing people's triggered takes about your life situation.)" And, as far as I can tell, that was a useful/helpful thing to have done in that situation (and didn't involve any changes to my drama policy).

That said, that situation wasn't one where the right move involved causing ripples in the community (e.g. by publicly airing concerns about what went down at Alameda). If we fight the last war again, my hope is that I'd be doing stuff more like "prod others to action", or perhaps "plainly state aloud what I think" (as I'm doing now with this post).

There is something about this post that feels very "neutral tone" to me, and that makes it feel at home within my "don't give drama the attention it needs to breathe" policy. I think my ability to have good effects by prying more  & prodding more & having more backbone doesn't require changes to my drama policy. (And perhaps the policy has shifted around somewhat, without me noticing it? Doesn't feel like it, though.)

I am of course open to arguments that I'm failing to learn a lesson about my drama policy.

"if there are going to be sociopaths in power, they might as well be EA sociopaths".

Just noting that my shoulder Nate is fairly calibrated/detailed thanks to over a decade of interaction with Nate, and is incapable of endorsing the sentence "if there are going to be sociopaths in power, they might as well be EA sociopaths." 

My shoulder Nate insists quite adamantly that this is Not A Good Thing To Boil Down To A Binary Yes-No, but that if someone's forcing him to say yes or no, then he will say no.  And before someone argues that picking no sounds dumb, [insert coherent explanation of why the frame is badwrong].

Fixing the “I pit my evidence against itself” problem is easy enough once I’ve recognized that I’m doing this (or so my visualizer suggests); the tricky part is recognizing that I’m doing it.

One obvious exercise for me to do here is to mull on the difference between uncertainty that feels like it comes from lack of knowledge, and uncertainty that feels like it comes from tension/conflict in the evidence. I think there’s a subjective difference, that I just missed in this case, and that I can perhaps become much better at detecting, in the wake of this harsh lesson.

Something that helps me with problems like this is to verbalise the hypotheses I'm weighing up. Observing them seems to help me notice gaps.

I relate to your write-up on a personal level, as I can easily see myself having the same behavioral preferences as well as modes of imperfection as you if I was in a similar situation.

And with that in mind, there's only one thing that I'm confused about:

A thing I feel particularly bad about is not confronting Sam at any point about the ways he hurt people I care about.

What would that confrontation have looked like? How would you have approached it, even taking into account hindsight wisdom but without being a time-travelling mind-reader?

In that confrontation, what would you be asking for from Sam? (E.g., explanation? reassurance? apology? listening to your concerns?)
