This is in response to Sarah Constantin's recent post about intellectual dishonesty within the EA community.
I roughly agree with Sarah's main object-level points, but I don't think this essay sufficiently embodies the spirit of cooperative discourse it's trying to promote. I have a lot of thoughts here, but they build off a few existing essays. (There's been a recent revival over on Less Wrong attempting to make it a better locus for high-quality discussion. I don't know if it's especially succeeded, but I think the concepts behind that intended revival are very important.)
- Why Our Kind Can't Cooperate (Eliezer Yudkowsky)
- A Return to Discussion (Sarah Constantin)
- The Importance of [Less Wrong, OR another Single Conversational Locus] (Emphasis mine) (Anna Salamon)
- The Four Layers of Intellectual Conversation (Eliezer Yudkowsky)
I think it's important to have all four concepts in context before delving into:
- EA has a lying problem (Sarah Constantin)
I recommend reading all of those. But here's a rough summary of what I consider the important bits. (If you want to actually argue with these bits, please read the actual essays before doing so, so you're engaging with the full substance of the idea)
- Intellectuals and contrarians love to argue and nitpick. This is valuable - it produces novel insights, and keeps us honest. BUT it makes it harder to actually work together to achieve things. We need to understand how working-together works on a deep enough level that we can do so without turning into another random institution that's lost its purpose. (See Why Our Kind... for more)
- Lately, people have tended to talk on social media (Facebook, Tumblr, etc) rather than in formal blogs or forums that encourage longform discussion. This has a few effects. (See A Return to Discussion for more)
- FB discussion is fragmented - it's hard to find everything that's been said on a topic. (And tumblr is even worse)
- It's hard to know whether OTHER people have read a given thing on a topic.
- A related point (not necessarily in "A Return to Discussion") is that social media incentivizes some of the worst kinds of discussion. People share things quickly, without reflection. People read and respond to things in 5-10 minute bursts, without having time to fully digest them.
- Having a single, long-form discussion area that you can expect everyone in an intellectual community to have read makes it much easier to build knowledge. (And most of human progress is due, not to humans being smart, but to their being able to stand on the shoulders of giants.) Anna Salamon's "Importance of a Single Conversational Locus" is framed around x-risk, but I think it applies to all aspects of EA: the problems the world faces are so huge that solving them requires a higher caliber of thinking and knowledge-building than we currently have.
- In order to make true intellectual progress, you need people to be able to make critiques. You also need those critics to expect their criticism to in turn be criticized, so that the criticism is high quality. If a critique turns out to be poorly thought out, we need shared, common knowledge of that so that people don't end up rehashing the same debates.
- And finally, (one of) Sarah's points in "EA has a lying problem" is that, in order to be different from other movements and succeed where they failed, EA needs to hold itself to a higher standard than usual. There's been much criticism of, say, Intentional Insights for doing sketchy, truth-bendy things to gain prestige and power. But plenty of "high status" people within the EA community do things that are similar, even if to a different degree. We need to be aware of that.
I would not argue as strongly as Sarah does that we shouldn't do it at all, but it's worth periodically calling each other out on it.
Cooperative Epistemology
So my biggest point here, is that we need to be more proactive and mindful about how discussion and knowledge is built upon within the EA community.
To succeed at our goals:
- EA needs to hold itself to a very high intellectual standard (probably higher than we currently have, in some sense)
- Factions within EA need to be able to cooperate and share knowledge. Both object-level knowledge (i.e. how cost-effective is AMF?) and meta/epistemic knowledge like:
- How do we evaluate messy studies?
- How do we discuss things online so that people actually put effort into reading and contributing to the discussion?
- What kinds of conversational/debate norms lead people to be more transparent?
- We need to be able to apply all this knowledge to go out and accomplish things, which will probably involve messy political stuff.
I have specific concerns about Sarah's post, which I'll post in a comment when I have a bit more time.
I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.
There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.
I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or require so much of my time to write carefully that it wouldn't be worthwhile.
While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think the right response is to set a high bar that criticism must clear.
(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)
This is completely true.
There are at least a dozen people for whom this is true.
Issue 1:
The title and tone of this post are playing with fire, i.e., courting controversy, in a way that (I think, but am not sure) undermines its goals.
A: there's the fact that describing these things as "lying" seems approximately as true as the first two claims, which others have mentioned. In a post about holding ourselves to high standards, this is kind of a big deal.
B: Personal integrity/honesty is only one element you need to have a good epistemic culture. Other elements you need include trust, and respect for people's time, attention, and emotions.
Just as every decision to bend the truth has consequences, every decision to inflame emotions has consequences, and these can be just as damaging.
I assume (hope) it was a deliberate choice to use a provocative title that'd grab attention. I think part of the goal was to punish the EA Establishment for not responding well to criticism and attempting to control said criticism.
That may not have been a bad choice. Maybe it was even necessary. But it's a questionable one.
The default world (see: modern politics, and news) is a race to the bottom of outrage and manufactured controversy. People love controversy. I lo... (read more)
Hi everyone! I’m here to formally respond to Sarah’s article, on behalf of ACE. It’s difficult to determine where the response should go, as it seems there are many discussions, and reposting appears to be discouraged. I’ve decided to post here on the EA forum (as it tends to be the central meeting place for EAs), and will try to direct people from other places to this longer response.
Firstly, I’d like to clarify why we have not inserted ourselves into the discussion happening in multiple Facebook groups and fora. We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member). We are aware that this might come across as “radio silence” or lack of concern for the criticism at hand—but that is not the case. Whenever there are legitimate critiques about our work, we take it very seriously. When there are accusations of intent to deceive, we do not take them lightly. The last thing we want to do is respond in haste only to realize that we had not given the criticism enough consideration. We also want to allow the... (read more)
When there are debates about how readers are interpreting text, or potentially being misled by it, empirical testing (e.g. having Mechanical Turk readers view a page and then answer questions about the topic where they might be misled) is a powerful tool (and also avoids reliance on staff intuitions that might be affected by a curse of knowledge). See here for a recent successful example.
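For concreteness, here's a minimal sketch of what such a test could look like, using boto3's Mechanical Turk client in Python. Every specific in it (the page URL, the question wording, the reward, the number of readers) is a hypothetical placeholder, not anything from the actual example linked above:

```python
# A minimal sketch of a reader-comprehension test on Mechanical Turk.
# All specifics (URL, question, reward, sample size) are hypothetical.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# QuestionForm XML: point readers at the page, then ask what they took away.
question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Overview>
    <Text>Please read the page at https://example.org/some-charity-page before answering.</Text>
  </Overview>
  <Question>
    <QuestionIdentifier>takeaway</QuestionIdentifier>
    <IsRequired>true</IsRequired>
    <QuestionContent>
      <Text>Based only on that page, how effective would you say the intervention is, and why?</Text>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Read a short web page and answer one question",
    Description="Comprehension check: what does this page actually claim?",
    Reward="0.50",
    MaxAssignments=30,              # ~30 readers gives a rough signal
    LifetimeInSeconds=86400,        # HIT available for one day
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

The collected answers would then be compared against what the page actually says; systematic divergence between the two is evidence that readers are being misled, independent of anyone's intuitions about the text.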
Issue 2: Running critical pieces by the people you're criticizing is necessary if you want a good epistemic culture. (That said, waiting indefinitely for them to respond is not required. I think "wait a week" is probably a reasonable norm.)
Reasons and considerations:
a) they may have already seen and engaged with a similar form of criticism before. If that's the case, it should be the critic's responsibility to read up on it, and make sure their criticism says something new, or that it addresses the latest, best thoughts of the person being criticized. (See Eliezer's 4 layers of criticism)
b) you may not understand their reasons well. Especially with something off-the-cuff on facebook. The principle of charity is crucial because our natural tendency is to engage with weaker versions of ideas.
c) you may be wrong about things. Precisely because our kind have trouble cooperating, and because we tend to criticize a lot, it's important for criticism of Things We Are Currently Trying to Coordinate On to be made as accurate as possible through private channels before unleashing the storm.
Controversial things are intrinsically "public facing" (see: Scott Alexander'... (read more)
I note Constantin's post, first, was extraordinarily uncharitable and inflammatory (e.g. the title for the section discussing Wiblin's remark, "Keeping promises as a symptom of Autism", among many others); second, these errors were part of a deliberate strategy to 'inflame people against EA'; third, this strategy is hypocritical given the author's (professed) objections to any hint of 'exploitative communication'. Any of these in isolation is regrettable. In concert they are contemptible.
{ETA: in a followup post, Constantin states that her previous comments suggestive of bad faith were an "emotional outburst" that did not reflect her actual intentions either at the time of writing or subsequently.}
My view is that, akin to Hofstadter's law, virtues of integrity are undervalued even when people try to account for undervaluing them: for this reason I advocate all-but-lexical priority for candour, integrity, etc. over immediate benefits. The degree of priority these things should be accorded seems a topic on which reasonable people can disagree: I recommend Elmore's remarks as a persuasive defence of according these virtues a lower weight.
'Lower', however, s... (read more)
The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.
In brief:
The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity eva... (read more)
One bit of progress on this front is Open Phil and GiveWell starting to make public and private predictions related to grants to improve their forecasting about outcomes, and create track records around that.
There is significant room for other EA organizations to adopt this practice in their own areas (and apply it more broadly, e.g. regarding future evaluations of their strategy, etc).
This is part of my thinking behind promoting donor lotteries: by increasing the effective size of donors, it lets them more carefully evaluate organizations and opportunities, providing better incentives and resistance to exploit... (read more)
Prediction-making in my Open Phil work does feel like progress to me, because I find making predictions and writing them down difficult and scary, indicating that I wasn't doing that mental work as seriously before :) I'm quite excited to see what comes of it.
This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that's indistinguishable from how a dedicated EA might act—but it's not a part of my identity anymore.
I've also met plenty of great EAs, and it's a shame that the poor interactions I've had overshadow the many good ones.
Part of what disturbs me about Sarah's post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromise on honesty and act non-cooperatively more in person than online. I'm sure that others have had better experiences, so if this isn't as prevalent in your experience, I'm glad! It's just that I could have used stronger examples if I had written the post, instead of Sarah.
I'm not comfortable sharing examples that might make people identifiable. I'm too scared of social backlash to even think about whether outing specific people and organizations would even be a utilitarian thing for me to do right now. But being laughed at for being an "Effective Kantian" because you're the only one in your friend group who wasn't willing to do something illegal? That isn't fun. Listening to hardcore EAs approvingly talk about how other EAs have manipulated non-EAs for their own gain, because doing so might conceivably lead them to donate more if they had more resources at their disposal? That isn't inspiring.
I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.
One very object-level thing which could be done to make longform, persistent, not hit-and-run discussion in this particular venue easier: Email notifications of comments to articles you've commented in.
There doesn't seem to be a preference setting for that, and it doesn't seem to be the default, so it's only because I remember to come back and check here repeatedly that I can reply to things. Nothing is going to be as good at reaching me as Facebook/other app notifications on my phone, but email would do something.
My thoughts on this are too long for a comment, but I've written them up here - posting a link in the spirit of making this forum post a comprehensive roundup: http://benjaminrosshoffman.com/honesty-and-perjury/
I have very mixed feelings about Sarah's post; the title seems inaccurate to me, and I'm not sure about how the quotes were interpreted, but it's raised some interesting and useful-seeming discussion. Two brief points:
Copying my post from the Facebook thread:
Some of the stuff in the original post I disagree on, but the ACE stuff was pretty awful. Animal advocacy in general has had severe problems with falling prey to the temptation to exaggerate or outright lie for a quick win today, especially about health, and it's disturbing that apparently the main evaluator for the animal rights wing of the EA movement has already decided to join it and throw out actually having discourse on effectiveness in favour of plundering their reputation for more donations today. A mistake ... (read more)
ACE's primary output is its charity recommendations, and I would guess that its "top charities" page is viewed ~100x more than the leafleting page Sarah links to.
ACE does not give the "top charity" designation to any organization which focuses primarily on leafleting, and e.g. the page for Vegan Outreach explicitly states that VO is not considered a top charity because of its focus on leafleting and the lack of robust research on that:
You are proposing that ACE says negative things about leafleting on its most prominent pages, but has left some text buried on a back page saying good things about leafleting, as part of a dastardly plot to increase donations to organizations they don't even recommend.
This seems unlikely to me, to put it... (read more)
Good piece
While I think it's good to expect people to have read the same central set of works, I do think we lose out by not being able to synthesise discussions. Why isn't there a single community post with the state of the art on this discussion and where the key disagreements are? It's understandable that I should have to find the various articles, but why not make it easier for me?
Since there are so many separate discussions surrounding this blog post, I'll copy my response from the original discussion:
I’m grateful for this post. Honesty seems undervalued in EA.
An act-utilitarian justification for honesty in EA could run along the lines of answering the question, "how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?" It's easy to be biased towards dishonesty, since it's ... (read more)
In the interest of completeness: Sarah posted a follow-up on her post. Reply to Criticism on my EA Post.
I was definitely disappointed to see that post by Sarah. It seemed to defect from good community norms, such as attempting to generously interpret people, in favour of quoting people out of context. She seems to be applying such rigorous standards to other people, yet applying rather loose standards to herself.
I was overall a bit negative on Sarah's post, because it demanded a bit too much attention (e.g. the title) and seemed somewhat polemic. It was definitely interesting, though, and I learned some things.
I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.
Overall, though, I think it's just plain wrong to argue for an unexamined idea of hones... (read more)
As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:
1) Would it be a good thing to amass power/wealth/etc.. (perhaps deceptively) and then use those to do good?
2) Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do X" where X is a good thing.
2') Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do good".
3) Is it a good idea to support (or not object to) others ... (read more)
Note to casual viewers that the content of this is not what the title makes it sound like. He's not saying that rationalists are doomed to ultimately lie and cheat each other. Just that here are some reasons why it's been hard.
From the recent Sarah Constantin post ... (read more)
Note that the proposed norm within EA of following laws, at least in the US, is very demanding (see this article). A 14th very common violation I would add is not fully reporting income to the government, e.g. "under the table" babysitting or other shadow-economy work. A 15th would be pirated software/music. Interestingly, lying is not illegal in the US, though lying under oath is. So perhaps what we mean is: be as law-abiding as would be socially acceptable to most people? And then for areas that are more directly related to running organizations (n... (read more)
It seems to me that a great deal of this supposed 'problem' is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally, or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on Facebook engages in crazy justificatory dances when their people are threatened.
It's a nice ideal that we should all nod and say 'yes, that's a valid criticism' when our baby is attacked, but it's not going to happen. There is nothing we can do about this aspect... (read more)