Note: if you've come here because you would like to give your first impression of effective altruism, then introductions are here and here.

Note 2: Robin Hanson has outlined some problems with exposing misalignment between others' actions and professed beliefs about charity.

---

Today, Robin Hanson wrote a blog post that explains the importance of outside criticism.

Friendly local criticism isn't usually directed at trying to show a wider audience flaws in your arguments. If your audience won’t notice a flaw, your friendly local critics have little incentive to point it out. If your audience cared about flaws in your arguments, they’d prefer to hear you in a context where they can expect to hear motivated capable outside critics point out flaws.

...

If you are the one presenting arguments, and if you didn’t try to ensure available critics, then others can reasonably conclude that you don’t care much about persuading your audience that your argument lacks hidden flaws.

This raises the question: who are the best critics of effective altruism?

Ben Kuhn has given some criticism but he's an insider. (Since countered by Katja.) Geuss has written some helpful criticism but he's also involved with effective giving. Giles has passed on some thoughts from a friend. These critics have been heroic but they are few in number. It figures, as most of us aren't incentivised to say bad things about a movement with which we affiliate, and if we were forced to, we might still pull some punches.

So what about outsiders? Well, 80,000 Hours have received some criticism on earning to give. They also debated some socialists. But these discussions were brief and narrowly focussed clashes between entrenched political ideologies. Others, such as William Schambra, Ken Berger and Robert Penna, and the always sarcastic RationalWiki, have targeted us with criticism so vitriolic that it was hard to find the constructive parts. Edit: also some criticism by Scott Walter.

So several years into our movement, that's all we have to show for criticism: a few insiders and a few fanatics. That's not to say we can't harvest some insights from these - by god, we should try - but one would hope we have more.

If we cast the net wider, Warren Buffett's son Peter Buffett has debated William MacAskill on the effectiveness of charity, which is kind of cool. There are more general aid critics: William Easterly, who is a fairly thoughtful economist, and Dambisa Moyo, who I know less about. But they don't really get to the heart of what we care about - if most aid is ineffective, then it would just become more important to research it even harder.

Alternatively, we can look at more narrowly focused critics. LessWrong is often mentioned as a useful source for criticism, and it has usefully challenged philosophical positions held by some effective altruists. Its founder, Eliezer Yudkowsky, has challenged hedonistic utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide. But these mostly address utilitarians and spare other effective altruists. Of course, Eliezer is no outsider to effective altruism - he played some part in founding it. The most upvoted post on LessWrong of all time was in fact feedback from Holden Karnofsky about its sponsor-organisation MIRI. Again, the relevance of this to most EAs is a stretch.

In turn, Holden Karnofsky has received suggestions for how GiveWell might react to philosophical considerations from LessWrong veterans like Paul Christiano, Carl Shulman, Eliezer and Nick Beckstead. Again, all insiders.

So here's how I sum up our problem. Almost all of our critics are insiders. Barring a couple of heroic attempts at self-criticism, we've primarily attracted criticism about donating and earning to give. We've also offended a couple of fanatics, and I don't have a strong view on whether we've learned from those. This is unsurprising. Taking self-criticism is hard and endorsing it or writing it is harder. Eliezer would say it feels like shooting one of your own men. Scott Siskind says, "Criticizing the in-group is a really difficult project I’ve barely begun to build the mental skills necessary to even consider. I can think of criticisms of my own tribe. Important criticisms, true ones. But the thought of writing them makes my blood boil."

But criticism seems especially important now: effective altruism is growing fast, our culture is starting to consolidate on the Facebook group and here, and we model it in the popular talks and introductory materials that we give to new community members.

To develop the effective altruist movement, it's essential that we ask people how we've failed, or how our ideas are inadequate.

So an important challenge for all of us is to find better critics.

Let me know if there's any big criticism that I've missed, or if you know someone who can engage with and poke holes in our ideas.

Related: The perspectives on Effective Altruism we Don't Hear by Jess Whittlestone; The Evaporative Cooling of Group Beliefs.

Comments (72)

Scott Alexander writes about the motte and bailey doctrine: http://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/

Basically, people will retreat to obvious platitudes (the motte) when defending their position, when in fact they're actually trying to promote more controversial ideas (the bailey). The motte for EA is "doing the most good" and the bailey is, well, everything else we promote. Ideally the place to launch criticism is the bailey. Unfortunately, a lot of the criticism has been directed to the motte, which leads to bizarre statements like "well maybe suffering isn't bad, we don't want everyone to be happy all the time" or "it's impossible to know which things are better than others". This may be part of the reason much of the criticism has fallen flat so far.

We have been criticised by people who have encountered us briefly for things like:

  • Being excessively critical of others, especially too quickly and without establishing a relationship first
  • Not being friendly, focussing too much on trying to prove how smart we are
  • Not being diverse enough (especially on gender, racial or religious lines, and to a lesser extent socioeconomic, political, cultural and academic discipline)
    • As a result, being unwelcoming to people who are either superficially or substantively different to existing participants
    • As a result,
... (read more)
2
Evan_Gaensbauer
One explanation for this I've encountered is just that effective altruism has been packaged by people of one sort in a way that's intuitive to people of the same sort. For example, the emphasis on quantification of social goods appeals to students of economics, and the idea of using new methods of reasoning and calculation to uncover greater effectiveness appeals to students of computer science, mathematics, and philosophy. Those groups tend to be largely dominated by young, white men to begin with. Additionally, effective altruism originated with an academic and secular approach to ethics, a discipline that tends to be followed by less religious people. Further, it doesn't seem a coincidence that effective altruism, what with its focus on philanthropy, has primarily gained traction in wealthier countries in the Anglosphere. In essence, the communities which first fed the numerical growth of effective altruism aren't, and haven't been, very diverse to begin with. So, effective altruism may not be so much at fault by this point for failing to be diverse. Going forward, it's the responsibility of the movement to reach out, and broaden its horizons, with nuance and respect, to other communities. When pointing out the lack of diversity, do critics point out:

* A) effective altruism doesn't appear diverse, so this image problem leads to miscommunication wherein a more diverse crowd never joins the movement, because they fear feeling awkward, or out of place? or
* B) because of its lack of diversity, whether explicitly or implicitly, effective altruism seems to be offensive or ignorant of the needs and perspectives of individuals from more diverse or differentiated backgrounds?

An early impression I had of effective altruism, in getting to know it, was that one might find it elitist, based on its community origins in elite universities, such as the Oxbridge school in Britain, or the Ivy League universities in the United States. Historically, I'd figure racial minorities, some religious
-5
Dale
2
Bernadette_Young
One way we can invite constructive criticism is by our response to criticisms we receive. That's difficult for an amorphous 'group', but perhaps some invited posts on the forum trying to come to grips with these? Perhaps CEA could publish their current thinking on areas where they have encountered criticism (both from 'inside' and 'outside' the organisation)?

Sanctimony.

I have not publicized my support of Effective Altruism at this point due to a fear of appearing arrogant.

One could argue that this applies as well to any altruistic or charitable movement, but that isn't true: with EA there is also the tacit and easily verbalized assumption that my method of charitable giving is more effective than and thus superior to other people's, and that I'm therefore not only more generous but also more edified and generally intelligent than proponents of Ineffective Altruism, of whom there are legion.

An example: I was co... (read more)

2
Evan_Gaensbauer
Full disclosure: below, I endorse the actions of the foundation called Charity Science, which is run by personal friends of mine; that is an issue aside from what would be my otherwise detached admiration of their work.

My support of effective altruism isn't very publicized yet either, for this reason. Actually, I'm not only afraid of appearing arrogant, but I also don't want to push ideas on others that I'm afraid really are arrogant. That is, if my friends pointed out the arrogance of effective altruism, I wouldn't be too surprised if they were right about that. On the other hand, this fear may be more due to shame than humility. I might be afraid of looking too weird and arrogant, but if I learn sooner rather than later that I'm pursuing wrong ideas, I'll waste less time on them. Temporary embarrassment may be a small price to pay for learning a hard and proper lesson not to waste my time trying to do good through the wrong sort of lifestyle or activism.

Peter Hurford has achieved relative success in direct research efforts, movement coordination, earning to give, and career decisions, among supporters of effective altruism. As far as I can tell, he is quite public with his support of effective altruism. However, he's engaged in other intellectual endeavors, sometimes criticizes effective altruism, and may court outside criticism from his social network as well. Additionally, in the past he's expressed an aversion to public-facing activism (at least off the Internet), e.g., leafleting for animal rights or veganism, and other causes. This is despite the fact that he supports animal welfare, and other causes embraced by effective altruism. I'm not aware whether his aversion to activism directed toward strangers is for the same reasons that you and I don't publicize our support of effective altruism generally.

I publicize my support of effective altruism by sharing links on the subject I like on social media, by bringing up object-level causes
2
Larks
If your friends would be that opposed, just don't like the page! Page likes just aren't valuable enough to cause you distress. Their main value is broadcasting to your friends anyway.

Another criticism: the movement isn't as transparent as you might expect. (Remember, GiveWell was originally the Clear Fund - started up not necessarily because existing charitable foundations were doing the wrong thing, but because they were too secretive).

When compiling this table of orgs' budgets, I found that even simple financial information was difficult to obtain from organizations' websites. I realise I can just ask them - and I will - but I'm thinking about the underlying attitude. (As always, I may be being unfair).

Also, what Leverage Research are... (read more)

2
Benjamin_Todd
You can find all the data on 80k in our latest financial report and summary business plan: https://80000hours.org/about/credibility/evaluations/ Some even more current updates also here: https://groups.google.com/forum/#!forum/80k_updates
0
Evan_Gaensbauer
Having met Geoff Anders, the executive director of Leverage Research, and its other employees multiple times, and taking it upon myself specifically to ask pointed questions attempting to clarify their work, I can informally relay the following information[1]:

* Leverage Research has raised, and continues to successfully raise, funds for its own financial needs without doing broad-based outreach to the effective altruism community at large. Leverage Research seems confident enough about its funding for the future that it won't be sourcing funds from the effective altruism community at large anytime soon.
* Given that Leverage Research considers itself an early-stage, non-profit research organization, whose research goals pivot rapidly as its researchers update their minds on what is the best work they can do in the face of new evidence and developments, Leverage Research perceives it as difficult to portray their research at any given time in granular detail. That is, Leverage Research is so dynamic an organization at this point that for it to maximally disclose the details of its current research would be an exhaustive and constant effort.
* Because of the difficulty Leverage Research has in expressing its research agenda accurately and precisely at any point in time, and because they've sourced their funding needs from private donors who were provided information to their own satisfaction, Leverage Research doesn't perceive it as absolutely crucial that they make specific financial or organizational information easily accessible, e.g., on its website. Personally, I haven't ever privately contacted Leverage Research seeking a disclosure of, or access to, such information. I have no knowledge of how such interactions may or may not have gone between other third parties and Leverage Research.
* The information available under the 'Our Team' heading on Leverage Research's website seems to overview only its employees who head its executive functioning, a
1
Giles
Thanks - I knew they were involved in the EA Summit but I didn't know they were the sole organizers. I also knew they weren't soliciting donations. I partially retract my earlier statement about them! (Also I hope I didn't cause anyone any offense - I've met them and they're super super nice and hardworking too)
0
Peter Wildeford
If you ever find .impact or Charity Science insufficiently transparent, let me know. I think the reason why you might have trouble finding income and expenses for those two orgs is that their official income and expenses are both essentially $0. You can see some of the financial flow into both orgs here.

The Boston Review held a Forum on Effective Altruism with some excellent criticism by academic non-EAs.

I've recently been considering the analogy between Effective Altruism and the movement towards Evidence Based Medicine. The strongest similarity is that they both seek to use the conscientious application of high quality evidence to guide decision making. EBM has been criticised extensively, and many of its critics care deeply about the 'project' of medicine. It strikes me that these critiques could provide useful points to consider for EA. Maybe gathering and considering these would be a useful project for a CEA intern or similar.

There's also the feedback we get in talks, and the comments on all the articles and media attention we've gotten, which is very extensive. I've also presented on these topics in an academic setting.

And I asked for feedback here: https://www.facebook.com/wdcrouch/posts/10100610793427240?stream_ref=10

From this, I feel I know the most common criticisms of EA (as practiced, rather than in theory) pretty well.

  • doesn't appreciate the importance of systemic change
  • too focused on the measurable rather than unquantifiable benefits
  • smuggles in utilitarian assumptions
... (read more)
5
AlasdairGives
I agree with these but they also reinforce the fact that "Effective altruism" as a category is quite unwieldy. "Too focused on the measurable rather than unquantifiable benefits" - well, we have a huge chunk of people calling themselves EAs who mostly care about totally unquantifiable GCR research. Similarly for utilitarianism and comparing animal and human suffering or other such notions. The "4 causes" commonly identified with EA have quite distinct weaknesses and it would be good (in my view) if people started assessing them on their own merits and not lumping them under one banner.
0
RyanCarey
Nitpick: You can't count the global catastrophes (yep, still zero for this decade) but you might be able to tell if it's working in other ways... Maybe. But yeah, I agree that that's the big weakness of GCR research.
0
Denkenberger🔸
Asteroid/comet impact, supervolcanic eruptions, and even nuclear war risks are quantifiable within an order of magnitude or two: link. There are additional uncertainties in the cost and efficacy of interventions such as storing food or alternate foods. However, if you value future generations, one to three orders of magnitude of uncertainty is not a significant barrier to making a quantified case.
3
RyanCarey
I agree that we get heaps of feedback from talks and media, so I imagine you personally encounter as much criticism of EA as any other single person, and since it's often brief spots with a big audience, a lot of it feels like it's not well thought through. Mightn't there be value in these criticisms? The systemic change criticism seems valid for EA five years ago. Now, GiveWell have started seriously analysing advocacy, Good Ventures have started funding it and FHI/CEA have started engaging policymakers, so we've decided that these activities are crucial. Next time we could listen sooner, right? Regarding smuggling in utilitarianism - well, there are related objections about moralising, demandingness and self-sacrifice, which we've started to address in the last year or two, and which seem important. When we write in research articles or books, it seems like we are starting to get more careful about stating ethical assumptions, which seems good. So, as non-smart or poorly-considered as this criticism may be, we've reasons to expect gold there, and any discussion should help the movement's self-awareness, psychological health and resiliency to further criticism.
0
Giles
Would they do it if we paid them?

I originally posted this on the Facebook thread that linked to this discussion, but that thread was deleted, so I'm reposting it here.

The strongest counterargument against EA that I know of is an attack on its underlying methodological individualism. By "individualism" here I mean analysing our actions as those of individuals deciding and acting in isolation. That is, looking at what we ought to do regardless of how this correlates with the behaviour of others.

To see why this could be a problem, take Downs' paradox of voting, as illustrated here. ... (read more)
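The comment is truncated here, but for readers who haven't met the Downs paradox before, the following is a minimal sketch of the standard expected-value form of the argument; all of the numbers are hypothetical and chosen purely for illustration, not taken from the comment.

```python
# Minimal illustration of the Downs paradox of voting (hypothetical numbers only):
# reasoning as an isolated individual, the expected value of voting looks negative,
# even though the outcome matters a great deal if many people reason the same way.

p_decisive = 1e-7          # hypothetical chance that one vote changes the outcome
benefit_if_decisive = 1e6  # hypothetical value, to the voter, of the preferred outcome
cost_of_voting = 10.0      # hypothetical cost of voting in time and effort

expected_value = p_decisive * benefit_if_decisive - cost_of_voting
print(f"Expected value of voting: {expected_value:.2f}")  # negative => "don't bother voting"
```

The analogous worry for EA is that an individually framed expected-value calculation can recommend against actions whose value depends on how they correlate with the choices of others.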

This past summer I was introduced to the Effective Altruism movement via The Center for Applied Rationality (CFAR). I love the CFAR crew and found a few kindred spirits who are also EAs.

I became interested in EA because I'm constantly running into charitable or grassroots organizations that are incredibly ineffective with fighting poverty and misinformation within minority communities, specifically in urban spaces like the Southside of Chicago. I believe that I've found some of the root causes and was hoping to glean some information or techniques for im... (read more)

1
Austen_Forrester
Great to hear from you, St. Claire! I sympathize with you in not understanding or being able to relate to the culture of the EA community. I feel the same way (i.e. I'm religious, do industrial work, etc.) and at first I was turned off of the community for that reason, until I realized that the community will not grow and become more mainstream – and therefore its ideas won't receive widespread acceptance – unless more diverse people join it. I also had a hard time understanding what people were talking about on this forum, but after a while you learn the terminology/unorthodox views and it becomes comprehensible. Actually, I've noticed the writing on the forum gradually becoming better and easier to read. Sounds like the NGOs you deal with aren't adequately measuring and evaluating their impact, and need technical assistance in that department. Unfortunately, I don't know where to get this information, but hopefully someone on the forum can point you in the right direction.
0
RyanCarey
Hey AstClaire. Thanks for your thoughts. If you're in San Francisco, with CFAR, then there are definitely events there, which will be announced on Facebook or here. If you're in Chicago, there are people there, and I'm not sure whether they meet. For what to contribute, here is one collection of activities. For foundational articles, if you click More on Effective Altruism in the sidebar, you will see a bunch. To pick 5: Efficiency Measures Miss the Point, Efficient Charity - Do Unto Others, To Save the World... Go Work on Wall Street, Your Dollar Goes Further Overseas, Preventing Human Extinction. It'd be good to know what vocabulary, culture and causes are distant, to figure out whether there's some divide that's fundamental, or it's just the way we talk about things. EAs have usually thought about the causes a lot, so those views there are fairly stable, but people often aren't very careful about culture and vocabulary, so that could have a lot of room to change.

There is also this post by Scott Walter which I thought had some pretty good points.

1
Evan_Gaensbauer
My whole response to this essay was going to be here in a single comment. However, it was too long for a single comment, so I've decided I may share it on this forum as a post in its own right at a later date. I'm not sure it's really worth the effort, as it's of mild concern. Also, frankly, I'm afraid Mr. Walter might return to this forum, take what's written here out of context and yet again cast effective altruism as a slippery slope to Nazi-level eugenics because of Peter Singer's association with effective altruism[1]. I agree Mr. Walter raises some serious points of concern, and legitimate criticism, such as effective altruism perhaps being too demanding in the personal lifestyle choices its adherents may feel some peer pressure to make.

[1] The full argument doesn't seem much less bizarre.

Criticisms:

Manageable, with further work:

  • Sorting out the ethics of animal suffering and catastrophic risk.
  • Weights marginal benefit heavily over systematic change. This may be inappropriate for very wealthy philanthropists, or a group of pooled funders that may achieve the same effect.
  • Doesn't give appropriate weight to stopping problems before they become a crisis, especially for inter-generational effects. E.g., there hasn't been a lot of rigor in how EAs assess family planning.

Difficult to overcome:

  • devaluing of systematic change, and ignorin
... (read more)
2
Robert_Wiblin
I always find it quite strange to find people asserting that there are strong limits to growth when: i) most technologies that are possible haven't been invented yet; and ii) humans only occupy a tiny speck of the universe. It's more accurate to say there are limits to the rate of growth we can achieve - limits set by our ingenuity at any point in time.
0
MatthewDahlhausen
I don't think there is enough information to rule out the strong sustainability hypothesis. (This is not to say it is true, just that there isn't enough information to go either way.) It's not just about what technologies we have to discover, it's about how fast they can be discovered, developed, and implemented to overcome problems. Technology is value-neutral; sometimes it solves problems, sometimes it makes new ones, sometimes it does both. There are good reasons to think that we are much more robust to pressures that collapsed a lot of earlier civilizations, but the scale of the problems we face is also unprecedented. Biocapacity and energy throughput concerns have proved impressively stubborn to technical solutions in the last several decades. And we don't have an infinite amount of time to figure them out before they become serious collapse pressures.
1
Denkenberger🔸
I have a background in energy and I have studied these issues extensively, so I could write many pages, but I will try to be brief. We actually already have the technology to support 10 billion people at the US standard of living sustainably.

It is good to think about the dynamics and embodied energy. But because typical renewable energy pays back the energy investment in about three years, if we just took the energy output of renewable energy and reinvested it, the amount of renewable energy production would grow at about 30% per year. Therefore, if we just reinvested our current renewable energy production, we would be at 100% renewable in a couple of decades. The energy payback time of nuclear power plants (not mining) is more like one third of a year, so this is even more favorable.

The HANDY paper does not consider technological improvement, which is probably appropriate for the timescale of past collapses (but note that in the longer term, our carrying capacity has gone from millions as hunter-gatherers to billions now, even with higher consumption per capita, so technological change is key). However, now that we have markets and R&D, we don't need the government to intervene to get to a sustainable solution quickly. The book "Limits to Growth" does consider technological improvement. But for some reason it estimates the carrying capacity of the earth is much below current consumption, perhaps because it does not recognize we can make nitrogen fertilizer with renewable hydrogen. I think the carrying capacity issue is why "Limits to Growth" nearly always predicts collapse.

It is conceivable that we will overreact to these slow problems much more so than we did in 2008, and this could turn into a catastrophe. But more likely these resource constraints could reduce our resilience slightly to actual catastrophes. From a food perspective, there is around a 10% chance of nuclear winter this century, and when you include lesser catastrophes like regional nuclear w
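As a rough illustration of the reinvestment argument above, here is a minimal sketch; the three-year payback figure comes from the comment, while the starting renewable share is a hypothetical placeholder chosen only to show how quickly roughly 30% annual growth compounds.

```python
# Minimal sketch of the reinvestment argument, not a definitive energy model.
# The payback time is taken from the comment above; the starting share is a
# hypothetical placeholder used only to illustrate the compounding.

payback_years = 3.0
growth_rate = 1.0 / payback_years  # reinvesting all output => ~33%/yr, i.e. "about 30%"

share = 0.02  # hypothetical: renewables supplying 2% of total demand today
years = 0
while share < 1.0:
    share *= 1 + growth_rate
    years += 1

print(f"Growth rate: {growth_rate:.0%} per year")
print(f"Years to reach 100% at that rate: {years}")  # roughly a decade or two, depending on the start
```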
0
DisposableUsername
How many scarce materials would be needed? How much land area? How much toxic waste would be produced, e.g. from solar electronic components? Energy investment is not the only input needed for renewables. (If you have a link that answers these and similar questions, that would be good.)
1
Denkenberger🔸
Thanks for the good questions. Wind power can use scarce materials, like rare earth permanent magnet generators. But it is possible just to use copper. Some photovoltaic technologies use scarce materials, but silicon is abundant.

US per person primary energy use is ~10 kW: Energy Information Administration, “Annual Energy Review 2007.” If we start with renewable electricity, we need less primary energy, 4-8 kW, so say 6 kW. So 10 billion people require 60 trillion watts (TW). Current wind technology could provide 72 TW: Archer, C. and M. Jacobson, “Evaluation of global wind power,” Journal of Geophysical Research, Vol. 110, D12110, doi:10.1029/2004JD005462, 2005. Solar maximum on land is ~6,000 TW, but practical ~600 TW: Lewis, N.S., “Powering the Planet,” California Institute of Technology presentation. If solar is 10% efficient and average solar radiation is 200 W/square meter, this requires ~0.1 acre/person: 5% of ecological footprint quota, but could be in desert or on rooftops.

Of course we need to be careful with toxic waste, but landfills take up a negligible amount of land.
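To make the arithmetic above easier to check, here is a minimal sketch that reproduces it; the figures (10 billion people, ~6 kW of primary energy per person, 10% solar efficiency, 200 W per square meter of average solar radiation) are taken from the comment, and only the acre conversion is added.

```python
# Minimal sketch reproducing the back-of-the-envelope arithmetic in the comment above.
# All input figures come from the comment; only the unit conversion is added.

people = 10e9             # assumed future population
power_per_person_w = 6e3  # ~6 kW primary energy per person, starting from renewable electricity

total_demand_w = people * power_per_person_w
print(f"Total demand: {total_demand_w / 1e12:.0f} TW")  # ~60 TW, vs ~72 TW wind and ~600 TW practical solar

solar_efficiency = 0.10      # assumed panel efficiency
insolation_w_per_m2 = 200.0  # average solar radiation on land

usable_w_per_m2 = solar_efficiency * insolation_w_per_m2   # 20 W per square meter
area_per_person_m2 = power_per_person_w / usable_w_per_m2  # ~300 square meters

SQ_M_PER_ACRE = 4046.86
print(f"Land per person: {area_per_person_m2 / SQ_M_PER_ACRE:.2f} acres")  # ~0.07, i.e. roughly 0.1 acre
```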

It seems worthwhile to differentiate between

  • criticisms of EA as an idea
  • criticisms of EAs, as individuals and as a movement

For example, EA individuals tend to have a left-wing bias (and when the survey's results are released hopefully we'll have data on this) but this isn't inherent to EA ideals - many EA ideals are quite right wing.

1
RyanCarey
You make criticism of EAs as individuals sound like it's not interesting. But if you called it "EA in practice", then it would seem like something that can also be usefully criticised.
1
Larks
Sorry, that wasn't my intention. I think both can be valuable.

Here's the link to the Facebook group post in case people add criticisms there.

Glad you linked to Holden Karnofsky's MIRI post. Other possibly relevant posts from the GiveWell blog:

There are more on a similar philosophical slant (search for "explicit expected value") but the above seem the most criticismy.

Great topic!

I think you missed this one from Rhys Southan which is lukewarm about EA: Art is a waste of time says EA

I don't see the Schambra piece as particularly vitriolic.

I don't know where to find good outside critics, but I think there's still value in internal criticism, as well as doing a good job processing the criticism we have. (I was thinking of creating a wiki page for it, but haven't got around to it yet).

Some self-centered internal criticism; I don't know how much this resonates with other people:

  • I posted some things on LW back in 2011 which
... (read more)

I think that you dismiss the critiques of Moyo and Easterly too quickly. They are critiques of top-down aid, which EA is a champion of. Easterly in particular is critical of organizations which plan without understanding or asking the needs of the communities. Yes, this means that more research is needed, but of a drastically different kind. The problem with allowing organizations to assess themselves is that they will not look for faults that they know exist. AMF, for example, stops testing communities for malaria after the lifespan of the donated ne... (read more)

This post seems pretty good to me, although I don't agree with all of it.

Patri recently linked to a post that was basically written directly to EAs:

On Saving the World and Other Delusions

I'm new to EA (and so effectively an outsider), and here are a few critiques that immediately come to mind, and which I have not seen mentioned elsewhere. The first two are simply aspects of EA that might render it unpalatable or too counter-intuitive for the masses:

  1. It would seem to follow that robbing from the rich and giving to the poor is ethically required. Imagine a man eating a feast with two dozen turkeys, and right next to him is a family full of starving children. If you could steal a turkey and give it to the family without anyone noticing, sho

... (read more)
2
Larks
No, because stealing is morally wrong in itself. Being an EA does not mean you have to endorse utilitarianism! (though some people do neglect the distinction). There are other aspects of morality, and respecting people's rights is one of them.
0
mhpage
EA is about (in part) extrapolating from what you would do for someone near you to what you should do for someone far away. The classic introduction hypothetical is the person drowning right in front of you. Most people's moral instincts are that they should suffer some costs to save the person's life. Ergo, they should suffer some costs to save starving people across the world. If you were to poll the world about whether people think it's right or wrong to steal one of the two dozen turkeys from the rich man and give it to the starving family, I suspect a sizable percentage would say it's right - or at least not wrong. You might not, but I hardly think that would be a rare response. My point is that extrapolating from that moral premise leads you to very counter-intuitive places.
1
Vincent_deB
It's better to say they were behaving suboptimally.
1
mhpage
Vincent, your comment goes to the point I was trying to make. If a rich person has two options: (a) give money to charity; or (b) buy a yacht, and chooses (b), we (or at least I) don't say he is behaving sub-optimally but that he is behaving unethically. Putting aside whether the Ivy League grad would enjoy working for a charity more than working in finance, how is her choice any different from the rich person's choice? If she takes a job at a charity (assuming one for which she is entirely replaceable), rather than taking a job in finance and giving away half of her salary, she is effectively throwing away the money she could have made in finance rather than donating it to charity. How is that different from taking the job and buying a yacht? It seems intuitively different because her motives are different, but that's irrelevant if you're a consequentialist (which seems like part of EA's fabric). From a marketing perspective, I see why we don't want to encourage stealing (which I still think could be done in a utility-maximizing manner) or claims that charity-minded Ivy League grads are as bad as yacht-buying millionaires, but if the only reason we don't go there is for marketing reasons, that seems like a problem.

As to why this is a critique: I worry that the marketing strategy for EA whitewashes how radical its underlying premise truly is: that we owe the same duty to someone across the world as we do to someone right in front of us. Fully embracing that premise can lead us to extraordinarily counterintuitive (and unpalatable for many) places.
0
Vincent_deB
That I agree with. Obscuring/whitewashing it may be tactically wise however, and I think there have been some posts here about whether EA really is consequentialist.
1
RyanCarey
Hey mhpage. I think these are reasonable sorts of questions that lots of people are likely to suggest, so it's good to tackle them straight away. My responses would be:

1. Do you think that stealing from the rich is likely to be effective? It seems to me that it would probably lead you to get arrested and muck up your chances of helping for decades to come. At any rate, the idea that it would be compulsory would arise if you believed in 'utilitarianism' or had a related view that there are no 'supererogatory acts'. So that issue is central to those philosophies, rather than to effective altruism.
2. Effective altruists would be committed to the idea that it's a good way of helping people, and they promote it. Whether there's any 'ethical compulsion' is something that people will vary on depending on their philosophies.
3. There are still reasons to focus on students, even for the trivial reason that some of them will be wealthy later. There are also other ways of helping than donating funds. And effective altruists are pretty interested in meeting high-net-worth individuals anyhow.
1
mhpage
Thanks, Ryan. The distinction between EA and utilitarianism is not one I've sufficiently focused on, and it's a useful one to bear in mind. (With that said, I do think there are effective ways certain people could steal from the rich and give to the poor -- e.g., hackers.)
1
Vincent_deB
If there were, I'd expect them to be well-researched and discussed by non-altruists. I haven't heard of any, and would expect to have.

I was googling "effective altruism arrogant" and it turned up a few links which I'm posting here so I don't lose them:

A thought: It seems like the EA community has a pretty strong focus on criticism, whether it's internal or external. Is it possible that this can itself be counterproductive? If the EA community is a fun place to be, that's good for both recruiting and retention, right?

Or to steelman Robin Hanson's recent post, if the EA community is ever to expand beyond high-scrupulosity, taking-abstract-moral-arguments-seriously, relentlessly-self-criticizing folks, it may need to find a way to help people achieve conventional self-interested goals like making friends... (read more)

0
Giles
I don't know if this is relevant to the criticism theme, but I found it was necessary for me to take some of Hanson's ideas seriously before becoming involved in EA, though his insistence on calling everything hypocrisy was a turn-off for me. Are there any resources on how we evolved to be such-and-such a way (interested in self + immediate family, signalling etc.) but that that's actually a good thing, because once we know that we can do better?
0
Bitton
Off the top of my head:

* The Selfish Gene by Richard Dawkins
* The Origins of Virtue by Matt Ridley
* Moral Tribes by Joshua Greene
* Darwin's Dangerous Idea by Dennett
* Freedom Evolves by Dennett
* The Expanding Circle by Peter Singer

They might mean that our evolved morality is "good" in a different sense than you're looking for. I haven't read them yet but The Ant and the Peacock, Moral Minds, Evolution of the Social Contract, Nonzero, Unto Others, and The Moral Animal are probably good picks on the subject.
0
Giles
Thanks - most of those names ring a bell but the Selfish Gene is the only one I've read. I guess some of the value of reading them is gone for me now that my mind is already changed? But I'll keep them in mind :-)

Eliezer Yudkowsky has challenged utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide.

I'm confused. AFAIK Yudkowsky's position is utilitarian, and none of the linked posts and sequences challenge utilitarianism. 3WC is an obvious example where only one specific branch - average preference utilitarianism - is argued to be wrong. The sequences are attempts to specify parts of the utility function and its behavior - even ... (read more)

2
RyanCarey
I've added the word 'hedonistic' and fixed a duplicate link. Maybe he's an atypical utilitarian, depending on our definitions. He's consequentialist and I think he endorses following a utility function but he certainly opposes simple hedonistic utilitarianism, or the maximisation of any simple good. Yes, I found Eliezer's Metaethics sequence difficult but so did lots of people. Eliezer agrees:

To develop the effective altruist movement, it's essential that we ask people how we've failed, or how our ideas are inadequate. [...] So an important challenge for all of us is to find better critics. [...] Let me know if there's any big criticism that I've missed, or if you know someone who can engage with and poke holes in our ideas.

On the important challenge of finding better critics, my personal strategy is going to be to seek a greater quantity of critics. My rationale for this is that we won't know which criticism(s) is or are the best until they'... (read more)

What sort of criticism is the effective altruism community seeking? I notice much of the prior criticism cited is medium- or high-profile media criticism of effective altruism, in the form of a response to, e.g., William MacAskill's articles published on Medium, or Peter Singer's TED talk. However, from the perspective of effective altruism itself, there isn't an incentive for criticism to be popular, or widely read. The important thing for effective altruism is that the criticism of its ideas is noted, and that its critics are engaged.

I ask because I have friend... (read more)

0
Giles
"Giles has passed on some thoughts from a friend" is one of the things cited, so if a particular criticism isn't listed we can assume it's because Ryan doesn't know about it, not that it's inherently too low status or something. I definitely want to hear what your friends have to say!

I wonder what you would get if you offered a cash prize to whoever wrote the "best" criticism of EA, according to some criteria such as the opinion of a panel of specific EAs, or online voting on a forum. Obviously, this has a large potential for selection effects, but it might produce something interesting (either in the winner, or in other submissions that don't get selected because they are too good).

0
RyanCarey
Might be better to put up a cash prize for a suggested improvement rather than a critique then, but maybe that's me being weak-spirited.
1
Peter McIntyre
I think one of my concerns with this would be the consistency and commitment effect created by incentivising a criticism, leading to someone seeing herself as an EA critic, or opposed to these ideas. It's similar to companies offering rewards for customers who write about why it's their favourite company or product in the world. See also American prisoners of war held by China in the Korean War (I think), who were given small incentives to write criticisms of America or capitalism. If it were being seriously considered, it'd be good to see some more work done to figure out whether this would be a real consequence. Source: Influence, Cialdini.

I suppose I could be counted among those "outside critics" the topic mentioned. What surprised me, however, was that I expected to find an article eschewing the role of criticism and suggesting ways of removing critics, inside and outside the ranks of its members. This is what one often encounters in organizations that feel threatened by anything but the most complimentary remarks on what they are doing. In addition, I stopped by this site for one, and only one, purpose: to briefly describe my thoughts about a world where "altruism" w... (read more)

Here is someone's initial exploration of a potential criticism: https://www.facebook.com/groups/effective.altruists/805005126222513/?notif_t=group_activity

(A poll about whether a nonprofit with a charismatic and intelligent leader and an unfalsifiable premise of how their charity does good would succeed in getting funding from the EA community.)

This piece by George Monbiot represents one strand of potential deep criticism, which is that many goods are incommensurable in value: http://www.monbiot.com/2014/07/24/the-pricing-of-everything/

This is a pretty common view in philosophy, and it would make the EA project much more limited in what it could achieve.

2
Davidmanheim
Running across this post quite a few years later, but our paper on the upper limit of value addresses this incommensurability a bit, and cites Chang's "Incomparability and practical reason", which (we feel) addresses this fairly completely. Secondarily, Monbiot's claim isn't really incommensurability; it's that those with power (mostly economic power) value things he cares about too little, that the environment, which is a public good, is underprotected by markets, and that humanity isn't cautious enough of the environmental risks. All reasonable points, but not really incommensurability.
0
Robert_Wiblin
If many things are incommensurable, at least we wouldn't be doing harm - our actions would often be merely neutral.
2
Davidmanheim
But you can still have a partial ordering for incommensurable goods - if the natural world is incommensurable with money, you can have states that are strictly worse/better, as well as incommensurable ones. And that still isn't neutral - it's better and worse on different dimensions.

Is there any particular topic, or set of ideas, from effective altruism on which criticism is being sought? Alternatively, is there a particular format for the criticism that's preferred? In particular, if I know some folks who might criticize effective altruism, I could ask them to publish their perspective on this forum. On the other hand, the threat of receiving downvotes, and being in an intellectual opponent's element, seems to me a (fair) reason one might not want to publish criticism(s) of effective altruism on this forum.

I believe if it's exp... (read more)

0
RyanCarey
I think that it's useful to hear what people think of the whole idea. For reference, there's Singer's TED talk, Will's 'What is Effective Altruism?', and lots more introductory essays. Apart from that, my suggestions for new critics would be:

* It's probably better not to read a lot of existing critiques at this stage because it might make you less imaginative.
* To keep it constructive, it often helps if you can suggest what would count as an improvement.
* You can email criticism to me or post it here (as a comment or as a new thread).
* If you're writing something more substantial, you can get people to give feedback by sending it around as a Google Doc.