
Evan_Gaensbauer

2342 karma · Joined · Working (6-15 years) · Pursuing other degree/diploma

Participation
3

  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Posts
75


Sequences
3

Setting the Record Straight on Effective Altruism as a Paradigm
Effective Altruism, Religion and Spirituality
Wild Animal Welfare Literature Library

Comments
865

An anonymous individual in a private message group--which included several others, some effective altruists and some not--requested that this be submitted to the EA Forum, as they didn't want to submit the post themselves. While that person could technically have submitted it under an anonymous EA Forum account, as a matter of personal policy they have other reasons not to submit the post regardless. As I was privy to that conversation, I volunteered to submit it myself.

Other than submitting the link post to Dr. Thorstad's post, my only contribution was the summary above. I didn't check with David beforehand to verify that the summary was accurate, though I know he's aware these link posts are up, and he hasn't disputed the summary's accuracy since.

I also didn't mean the tagging of Scott Alexander in the link post above as a call-out. Having talked to the author, David, beforehand, I learned from him that Scott was already aware that this post had been written and published. Scott wouldn't have known beforehand, though, that I was submitting it as a link post after it had been published on Dr. Thorstad's blog, Reflective Altruism. I tagged Scott so he would receive a notification and be aware of this post, which is largely about him, whenever he next logged on to the EA Forum (and to LessWrong, where the link post was also cross-posted). As to why this post was downvoted, other than the obvious reasons, I suspect, based on the link post itself or the summary I provided, that the downvotes came from:

  • Those who'd otherwise be inclined to agree with David's criticism(s) but consider them not harsh enough, or who'd prefer they not be discussed on the EA Forum at all, so as not to bring further attention to the perceived association between EA and the subject matter in question, since they'd prefer there be even less of an association between the two.
  • Those who'd want to avoid a post like this being present on the EA Forum, so as not to risk further association between EA and the subject matter in question--not out of earnest disagreement, but only out of optics/PR concerns.
  • Those who disagree with the characterization of the subject matter as "so-called" race science, since they may consider it as genuine a field of science as any other of the life or social sciences.
  • Those who disagree with the characterization of the individuals referenced as "prominent thinkers" associated with the EA and/or rationality communities, either because they disagree that those thinkers are significantly 'prominent' at all, or because they consider the association between those thinkers and the EA or rationality communities to be manufactured and exaggerated as part of past smear campaigns, and thus something that shouldn't be validated at all.

I'd consider all of those to be worse reasons to downvote this post, based on reactive conclusions about either optics or semantics. Especially as to optics, countering one Streisand effect with massive downvoting can be an over-correction that causes another Streisand effect. I'm only making this clarifying comment today, when I didn't bother to do so before, because I was reminded of the post when I received a notification that it has received multiple downvotes since yesterday. That may be because others were reminded of it too: David made another, largely unrelated post on the EA Forum a few days ago, and this link post was the most recent one referring to any of his criticisms of EA. Either way, with over 20 comments in the last several weeks, downvoting this post didn't obscure or bury it. While I doubt that was a significant motivation for most EA Forum members who downvoted it, it seems to me that anyone who downvoted mainly to ensure it didn't receive attention was in error. If anyone has evidence to the contrary, I'd request you please present it, as I'd be happy to learn I'm wrong about that. What I'd consider better reasons to downvote this post include:

  • The criticism may not do enough to acknowledge that the vast majority of Scott's readership within the EA and rationality communities seem likely to oppose the viewpoints criticized, regardless of the extent to which Scott holds them himself, in contrast to the vocally persistent but much smaller minority of his readership who seem to hold those views most strongly. That's the gist of David Mathers' comment here, the most upvoted one on this post. The points raised are ones I expect it'd be appropriate for David Thorstad to acknowledge or address before he continues writing this series, or at least if he hopes for future posts like this to be well received on the EA Forum. That could serve as a show of good faith to the EA community, recognizing a need to clarify that it isn't as much of a monolith as his criticisms might lead some to conclude.
  • The concern that it was unethical for Dr. Thorstad to bring more attention to how Scott was previously doxxed, or to his privately leaked emails. Dr. Thorstad informed me that Scott was aware of those details before the criticism was published, so Scott could have objected privately if he was utterly opposed to those past controversies being publicly revisited, but that wouldn't have been known to the EA Forum or LessWrong users who saw or read the criticism for the first time through either of my link posts. (I took Dr. Thorstad at his word about how he'd interacted with Scott before the criticism was published, though I can't corroborate that further at this time for those who'd want more evidence; only Dr. Thorstad and/or Scott may be able to do so.)
  • While I don't consider the inclusion of some pieces of evidence for problems with some of Scott's previously expressed views to be without merit, how representative they are of Scott's true convictions is exaggerated. That includes a Tumblr post of Scott's from several years ago that was taken out of context and was clearly made mostly in jest, though Dr. Thorstad writes about it as though that might be entirely lost on him. I don't know whether he was being obtuse or simply wasn't diligent in checking the context, but either way it's an oversight that scarcely strengthens the case he made.
  • The astute reason pointed out in this comment: that this post, regardless of how agreeable or not one may find its contents, is poorly presented by not focusing on the most critical cruxes of disagreement:

The author spends no time discussing the object level, he just points at examples where Scott says things which are outside the Overton window, but he doesn't give factual counterarguments where what Scott says is supposed to be false.

I share this commenter's point of contention with Dr. Thorstad's article. While I of course sympathize with what the criticism is hinting at, I'd consider it better if that had been prioritized as the main focus of the article rather than left as subtext or a tangent.

Dr. Thorstad's post multiple times describes the views in question as 'unsavoury', as though they're like an overcooked pizza. Bad optics for EA--being politically inconvenient via association with pseudoscience, or even bigotry--are a significant concern, and one often underrated in EA. Yet PR concerns might as well be insignificant to me compared to the possibility of excessive credulity among some effective altruists towards popular pseudo-intellectuals leading them to embrace dehumanizing beliefs about whole classes of people based on junk science. The latter betrays what could be a dire blind spot among a non-trivial portion of effective altruists, in a way that glaringly contradicts the principles of an effectiveness-based mindset or of altruism. If criticisms like these treat that as less of a concern than what some other, often poorly informed leftists on the internet believe about EA, their worth will be much lower than it could or should be.

I've been mulling over submitting a response of my own to Dr. Thorstad's criticism of ACX, clarifying where I agree or disagree with its contents, or with how they were presented. I appreciate and respect what Dr. Thorstad has generally been trying to do with his criticisms of EA (though I consider some of his other series, beyond the one in question about human biodiversity, to be more important), though I also believe that, at least in this case, he could've done better. Given that I could summarize my constructive criticism(s) to Dr. Thorstad as a follow-up to my previous correspondence with him, I may do that so as not to take up more of his time, given how very busy he seems to be. I wouldn't want to disrupt or delay too much the overall thrust of his effort, including his focus on other series that addressing concerns about these controversies might derail or distract him from. Much of what I'd want to say in a post of my own I've now presented in this comment. If anyone else would be interested in reading a fuller response from me to this post from last month that I linked, please let me know, as that'd help inform my decision about how much more effort to invest in this dialogue.

This comment, which I cross-posted to LessWrong, quickly accrued negative karma there. As I originally wrote it, it's easy to misunderstand, so I get the confusion. I'll explain here what I explained in an edit to my comment on LW, so as to avoid incurring the same confusion on the EA Forum.

I wrote this comment off the cuff, so I didn't put as much effort into writing it as clearly or succinctly as I could, or maybe should, have. So I understand how it might read: as a long, meandering nitpick of a few statements near the beginning of the podcast episode, without my having listened to the whole episode yet. On that reading, I call a bunch of ex-EAs naive idiots, the way Elizabeth referred to herself as at least formerly being a naive idiot, then say even future effective altruists will be proven to be idiots, and that those still propagating EA after so long, like Scott Alexander, might be the most naive and idiotic of all. To be clear, I also included myself, so this reading would also imply that I'm calling myself a naive idiot.

That's not what I meant to say. I would downvote that comment too. What I'm saying is:

  1. If what Elizabeth is saying about having been a naive idiot is true, then it would seem to follow that a lot of current and former effective altruists, including many rationalists, were also naive idiots for similar reasons.
  2. If that were the case, then it'd be consistent with greater truth-seeking--and with criticizing others for not putting enough effort into truth-seeking with integrity with regard to EA--to point out to those hundreds of other people that they either once were, or maybe still are, naive idiots.
  3. If Elizabeth or whoever wouldn't do that, not only because they consider it mean, but moreover because they wouldn't think it true, then they should apply the same standard to themselves and reconsider whether they really were just naive idiots.
  4. I'm disputing the "naive idiocy" hypothesis here as spurious, as it comes down to the question of whether someone like Tim--and, by extension, someone like me in the same position, who has also mulled over quitting EA--is still being a naive idiot, on account of not yet having updated to the conclusion Elizabeth has already reached.
  5. That matters because it'd seem to be one of the major cruxes of whether someone like Tim, or me, would update and choose to quit EA entirely, which is the point of this dialogue. If that's not a true crux of disagreement here, speculating about whether hundreds of current and former effective altruists have been naive idiots is a waste of time.

Comment cross-posted on LessWrong

I've begun listening to this podcast episode. Only a few minutes in, I feel a need to clarify a point of contention over some of what Elizabeth said:

Yeah. I do want to say part of that is because I was a naive idiot and there's things I should never have taken at face value. But also I think if people are making excuses for a movement that I shouldn't have been that naive That's pretty bad for the movement.

She also mentioned that she considers herself to have caused harm by propagating EA. It seems like she might be being too hard on herself. While she might consider being that hard on herself appropriate, the problem could be what her conviction implies. There are clearly some individual, long-time effective altruists she still respects, like Tim, even if she's done engaging with the EA community as a whole. If that weren't true, I doubt this podcast would've been launched in the first place. Having been so heavily involved in the EA community for so long, and still being so involved in the rationality community, she may know hundreds of people, friends, who either are still effective altruists now, or used to be but no longer are. Regarding the sort of harm caused by EA propagating itself as a movement, she provides this as a main example.

The fact that EA recruits so heavily and dogmatically among college students really bothers me.

Hearing that made me think about a criticism of the organization of EA groups for university students made last year by Dave Banerjee, former president of the student EA club at Columbia University. His was one of the most upvoted criticisms of such groups, and of how they're managed, ever posted to the EA Forum. While Dave apparently reached some of the same conclusions as Elizabeth about the problems with evangelical university EA groups, he did so with a much quicker turnaround: he made that major update while still a university student, while it took her several years. I don't mention that to imply that she was necessarily more naive and/or idiotic than he was. From another angle, given that he was propagating a much bigger EA club than Elizabeth ever did, at a time when EA was being driven to grow much faster than when Elizabeth was more involved in EA movement/community building, Dave could easily have been responsible for causing more harm. By that logic, perhaps he has been an even more naive idiot than she ever was.

I've known other university students, formerly effective altruists helping build student EA clubs, who quit because they also felt betrayed by EA as a community. Given that EA won't be changing overnight, in spite of whoever considers it imperative that some of its movement-building activities stop, there will be teenagers in the coming months who come through EA with a similar experience. They're teenagers who may be chewed up and spit out, feeling ashamed of their complicity in causing harm through propagating EA as well. They may not have even graduated high school yet, and within a year or two they may also be(come) those effective altruists, then former effective altruists, whom Elizabeth anticipates and predicts she would call naive idiots. Yet those are the very young people Elizabeth would seek to prevent from coming to harm by joining EA in the first place. It's not evident that there's any discrete point at which they cease being people who should heed her warning and instead become naive idiots to chastise.

Elizabeth also mentions how she became introduced to EA in the first place.

I'd read Scott Alexander's blog for a long time, so I vaguely knew the term effective altruist. Then I met one of the two co founders of Seattle EA on OkCupid and he invited me to the in person meetings that were just getting started and I got very invested.

A year ago, Scott Alexander wrote a post entitled In Continued Defense of Effective Altruism. While I'm aware he made some later posts responding to criticisms of it, I'm guessing he hasn't abandoned its thesis in its entirety. Meanwhile, as the author of one of, if not the, most popular blogs associated with either the rationality or EA communities, Scott may still be drawing more people into the EA community than almost any other writer. If that means he may be causing more harm by propagating EA than almost any other rationalist still supportive of EA, then, at least in the particular way Elizabeth has in mind, Scott may right now continue to be one of the most naive idiots in the rationality community. The same may be true of many of the effective altruists Elizabeth got to know in Seattle.

A popular refrain among rationalists, as I understand it, is: speak truth, even if your voice trembles. Never mind the internet; Elizabeth could literally go meet hundreds of effective altruists or rationalists she has known in the Bay Area and Seattle, and tell them that for years they, too, were naive idiots, or that they still are. Doing so could be how Elizabeth prevents them from causing harm. In not being willing to say so, she may counterfactually be causing much more harm by saying and doing so much less to stop EA from propagating than she knows she can.

Whether it's Scott Alexander, or the many of her friends who have been or still are in EA, or those who've helped propagate university student groups like Dave Banerjee, or the young adults who will come and go through EA university groups by the year 2026, there are hundreds of people Elizabeth should be willing to call, to their faces, naive idiots. It's not a matter of whether she, or anyone, expects that to work as some sort of convincing argument; that's the sort of perhaps cynical and dishonest calculation she, and others, rightly criticize in EA. She should tell all of them that, if she believes it, even if her voice trembles. If she doesn't believe it, that merits an explanation of how she considers herself to have been a naive idiot but so many of them not to have been. If she can't convincingly justify, not just to herself but to others, why she was exceptional in her naive idiocy, then perhaps she should reconsider her belief that even she was a naive idiot.

In my opinion, neither she nor so many other former effective altruists were just naive idiots. Whatever mistakes they made, epistemically or practically, I doubt the explanation is that simple. The operationalization of "naive idiocy" here doesn't seem like a decently measurable function of, say, how long it took someone to realize just how much harm they were causing by propagating EA, and how much harm they did cause in that period. "Naive idiocy" doesn't seem to be all that coherent an explanation for why so many effective altruists got so much so wrong for so long.

I suspect there's a deeper crux of disagreement here, one that hasn't yet been pinpointed by Elizabeth or Tim. It's one I might be able to discern if I put in the effort, though I don't yet have a sense of what it might be either. I could try, given that I still consider myself an effective altruist, though I too ceased being an EA group organizer last year, on account of not being confident in helping grow the EA movement further, even as I've continued participating in it for what I consider its redeeming qualities.

If someone doesn't want to keep trying to change EA for the better, and instead opts to criticize it to steer others away from it, it may not be true that they were just naive idiots before. If they can't substantiate their former naive idiocy, then referring to themselves as having only been naive idiots--and by extension implying that so many others they've known still are, or were, naive idiots too--is neither true nor useful. In that case, if Elizabeth would still consider herself to have been a naive idiot, that isn't helpful, and maybe it really is a matter of her being too hard on herself. If you're someone who has felt similarly, but you couldn't bring yourself to call so many friends you made in EA a bunch of naive idiots to their faces because you'd consider that false or too hard on them, maybe you're being too hard on yourself too. Whatever you want to see happen with EA, being that hard on ourselves isn't helpful to anyone.

I'm tentatively interested in participating in some of these debates. That'd depend on details of how the debates would work or be structured.

This is a section of an EAF post I've begun drafting about the community and culture of EA in the Bay Area and its impact on the rest of EA worldwide. That post isn't intended to be only about longtermism as it relates to EA, as an overlapping philosophy/movement often originally attributed to the Bay Area. I still felt my viewpoint here, in its rough form, is worth sharing as a quick take post.

@JWS 🔸 self-describes as "anti-Bay Area EA." I get where anyone is coming from with that, though the issue is that, pro- or anti-, this particular subculture in EA isn't limited to the Bay Area. It's bigger than that, and pointing to the Bay Area as the source of either greatness or setbacks in EA is, to me, a wrongheaded sort of provincialism. To clarify, "Bay Area EA" culture specifically entails the stereotypes--both accurate and misguided--of the rationality community and longtermism, as well as the trappings of startup culture and other overlapping subcultures in Silicon Valley.

Prior even to the advent of EA, a sort of ‘proto-longtermism’ was collaboratively conceived on online forums like LessWrong in the 2000s. Back then, like now, a plurality of the userbase of those forums might have lived in California. Yet it wasn't only rationalists in the Bay Area who took up the mantle to consecrate those futurist memeplexes into what longtermism is today. It was academic research institutes and think tanks in England. It wasn't @EliezerYudkowsky, nor anyone else at the Machine Intelligence Research Institute or the Center for Applied Rationality, who coined the term ‘longtermism’ and wrote entire books about it. That was @Toby_Ord and @William_MacAskill. It wasn't anyone in the Bay Area who spent a decade trying to politically and academically legitimize longtermism as a prestigious intellectual movement in Europe. That was the Future of Humanity Institute (FHI), spearheaded by the likes of Nick Bostrom and @Anders Sandberg, and the Global Priorities Institute (GPI).

In short, if it's going to be made about culture like that, EA is an Anglo-American movement and philosophy (notwithstanding other features introduced by Germany via Schopenhauer). It takes two to tango. This is why I think calling oneself "pro-" or "anti-" Bay Area EA is pointless.

I'm working on some such resources myself. Here's a link to the first one: a complete-to-date list of posts in the still ongoing series on the blog Reflective Altruism.

https://docs.google.com/document/d/1JoZAD2wCymIAYY1BV0Xy75DDR2fklXVqc5bP5glbtPg/edit?usp=drivesdk

To everyone on the team making this happen:

This seems like it could potentially one day become the greatest thing to which Open Philanthropy, Good Ventures and--by extension--EA ever contribute. Thank you!

To others in EA who may understandably be inquisitive about such a bold claim:

Before anyone asks, "What if EA is one day responsible for ending factory farming or unambiguously reducing existential risk to some historic degree? Wouldn't that be even greater?"

Yes, those or some of the other highest ambitions among effective altruists might be greater. Yet there's much less reason to be confident EA can be the fulcrum for ending those worst of problems. Ending lead exposure in every country on Earth could be the most straightforward grand slam ever.

When I say it could be the greatest, though, I don't mean a comparison between focus areas in EA. That question--which focus area has the greatest potential to do good--is so meta and complicated that it has still never really been resolved. It's sufficient to clarify that this endeavour could be the greatest outcome ever accomplished within the single EA focus area of global health and development. It could exceed the value of all the money that has ever flowed through EA to any charity GiveWell has ever recommended.

I'll also clarify that I don't mean "could" in some euphemistic sense, making a confident-sounding but vague claim to avoid accountability for a forecast. I just mean "could" in the sense that it's a premise worth considering. The fact that there's even a remote chance this could exceed everything achieved through EA to treat neglected tropical diseases is remarkable enough.
