
If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to (potentially) share them with him & in turn share his answers here.

Edit: As noted in this comment, I'm just a student at Stanford & Leif's kindly agreed to chat with me about EA.

Thank you, M, for sharing this with me & encouraging me to connect.


I am a GiveWell donor because I want to spend money to improve the world. Should I do something else with that money instead? If so, what?

After reading his article, I found the answer to this not at all obvious. Super interested to hear what he says about it - great question!

  1. What do you donate to?
  2. What is your take on GiveDirectly?
  3. Do you think Mariam is not a "real, flesh-and-blood human", since you never met her?
  4. Do you think that spending money surfing and travelling the world while millions are starving could be considered by some a suboptimal use of capital?

I don't think the third question is a good-faith question.

This is the context for how Wenar used the phrase: "And he's accountable to the people there—in the way all of us are accountable to the real, flesh-and-blood humans we love."

I interpret this as "direct interaction with individuals you are helping ensures accountability, i.e., they have a mechanism to object to and stop what you are doing". This contrasts with aid programs delivered by hierarchical organisations where locals cannot interact with decision makers, so cannot effectively oppose programs they do not want, e.g. the deworming incident where parents were angry.

Your article concludes with an anecdote about your surfer friend Aaron, who befriended a village and helped upgrade their water supply. Is this meant to be an alternative model of philanthropy? Would you really encourage people to do this on a large scale? How would you avoid this turning into voluntourism, where poor people in the third world have to pretend to befriend wannabe white saviours in exchange for money?

And on a slightly more snarky note: for all the limitations of EA analysis, is it really worse at quantifying the positives and negatives than his pronouncements about projects' potential, based on conversations with white Westerners in Bali? (That means both his well-intentioned surfer friend and the jaded twentysomethings enjoying drunken post-voluntourism R&R, whose cynicism apparently transformed his entire view of the value of aid to people he never spent a moment talking to!)

  1. You mentioned that one harm of insecticide-treated bed nets is that if people use them as fishing nets, that could cause harm to fish stocks. You say that GiveWell didn't take that into account in its cost-effectiveness calculations. But according to e.g. https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/, they did take that into account; they just concluded that the harm was very small in comparison to the benefits. Can you clarify what you meant when you said GiveWell didn't take that into account? (See the toy calculation after this list.)
  2. If you're so concerned about harm to fish stocks, do you think it would make more sense to focus your efforts on supporting charities focused on fish-related issues directly?
  3. GiveWell seems, by your admission, to spend a lot of time thinking about second-order effects and possible harms of their preferred charities' interventions, and your criticism seems to be that even the amount they do is not sufficient. Okay, that seems fair enough. Do you think there are any charities or philanthropic efforts that do pay sufficient attention to harms and second-order effects? Or do you think that all philanthropy is like this?
  4. In particular, you talk about your friend Aaron, whose intervention you seem to like. Do you think Aaron thought about the second-order effects and harms of what he was doing? Do you think he's come up with a way of helping others that has less risk of causing harm, and if so, is there a way to scale that up?
  5. If GiveWell were to take your advice and focus more on possible harms, is there a risk of overcorrecting, and spending lots of time and resources studying harms that are too small or unlikely to be worth the effort? (Some people think this has already happened in other contexts; e.g., some argue that excessive safety regulation makes nuclear power plants very expensive to build, even though nuclear power is actually safer than other forms of power.)
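
Re: question 1, here is the toy calculation mentioned above. Every number is hypothetical and mine, not GiveWell's; the point is just that a harm can be explicitly "taken into account" and still barely move the headline figure.

```python
# Toy cost-effectiveness adjustment (all numbers hypothetical, not GiveWell's).
# The point: a second-order harm can be explicitly modeled and still change
# the bottom line by only a few percent.

cost_per_net = 5.00                  # dollars per net (hypothetical)
lives_saved_per_1000_nets = 2.0      # base estimate (hypothetical)

# Suppose modeling the fishing-net diversion knocks 2% off total benefit
# (a hypothetical stand-in for the small harm discussed in the linked post).
harm_as_fraction_of_benefit = 0.02

adjusted_lives = lives_saved_per_1000_nets * (1 - harm_as_fraction_of_benefit)

unadjusted_cost = 1000 * cost_per_net / lives_saved_per_1000_nets
adjusted_cost = 1000 * cost_per_net / adjusted_lives

print(f"Unadjusted: ${unadjusted_cost:,.0f} per life saved")   # $2,500
print(f"Adjusted:   ${adjusted_cost:,.0f} per life saved")     # ~$2,551
```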
  1. What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?

  2. Are you saying that no competent philosopher would use their own definition of altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse - stipulating your own definitions for terms is very common.

  3. Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?

WHILE SBF’S MONEY was still coming in, EA greatly expanded its recruitment of college students. GiveWell’s Karnofsky moved to an EA philanthropy that gives out hundreds of millions of dollars a year and staffed up institutes with portentous names like Global Priorities and The Future of Humanity. Effective altruism started to synergize with adjacent subcultures, like the transhumanists (wannabe cyborgs) and the rationalists (think “Mensa with orgies”). EAs filled the board of one of the Big Tech companies.

  1. Does this mean you think prediction markets don’t end up working in practice to hold people to their track records of mid-probability predictions?

Even if the thing you gave a 57 percent chance of happening never happens, you can still claim you were right.
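
Some context for this question: below is a minimal sketch (my own illustration with made-up track records, not anything from Wenar's article or from any real prediction market) of how a proper scoring rule like the Brier score holds a forecaster accountable for 57 percent predictions in aggregate, even though no single outcome can falsify one.

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Across many forecasts, "57%" claims are checkable: they should come true
# about 57% of the time, even though any single miss proves nothing.

def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it did not."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical track records (illustrative only):
well_calibrated = [(0.57, 1)] * 57 + [(0.57, 0)] * 43  # true ~57% of the time
overconfident   = [(0.57, 1)] * 20 + [(0.57, 0)] * 80  # true only 20% of the time

print(brier_score(well_calibrated))  # ~0.245
print(brier_score(overconfident))    # ~0.297 (worse, despite identical claims)
```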

Ask him how many people he has killed.

(partly but not entirely a joke)

With him seemingly convincing people not to give to GiveWell, I find this interesting (if perhaps rude and on the nose). I doubt he genuinely thinks GiveWell charities kill more people than they save.

Is there a process he thinks a charity evaluator could feasibly follow that would leave the evaluator in a position to recommend people donate to an aid organisation, without thereby doing anything morally problematic?

Relatedly, is there any feasible research process by which he thinks you could come to know a particular aid organisation's work is net beneficial?

Probably slightly breaking my own rule about avoiding gotcha questions. But these also seem potentially useful.

I was surprised by your "dearest" and "mirror" tests.

Call the first the “dearest test.” When you have some big call to make, sit down with a person very dear to you—a parent, partner, child, or friend—and look them in the eyes. Say that you’re making a decision that will affect the lives of many people, to the point that some strangers might be hurt. Say that you believe that the lives of these strangers are just as valuable as anyone else’s. Then tell your dearest, “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”

Or you can do the “mirror test.” Look into the mirror and describe what you’re doing that will affect the lives of other people. See whether you can tell yourself, with conviction, that you’re willing to be one of the people who is hurt or dies because of what you’re now deciding. Be accountable, at least, to yourself.

These tests seem to rule out many actions that seem desirable to undertake. For example:

  1. Creating and distributing a COVID vaccine: there is some small risk of serious side effects and it is likely that a small number of people will die even though many, many more will be saved. So this may not pass the "dearest" and "mirror" tests. Should we not create and distribute vaccines?
  2. Launching a military operation to stop genocide: A leader may need to order military action to halt an ongoing genocide, knowing that some innocent civilians and their own soldiers will likely die even though many more will be saved. This may not pass the "dearest" and "mirror" tests. Should we just allow the genocide?

Do you bite the bullet here? Or accept that these tests may be flawed?

In your article, you write:

If we decide to intervene in poor people's lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions.

Echoing other users' comments, what do you think about EA global health and development (GHD) orgs' attempts to empower beneficiaries of aid? I think that empowerment has come up in the EA context in two ways:

  • Letting beneficiaries make their own decisions: GiveDirectly is a longstanding charity recommendation in the GHD space, and it empowers people by giving them cash and the freedom to spend it however they see fit.
  • Using beneficiaries' moral preferences to guide donors' decisions: IDinsight has published a series of research papers (in 2018 and 2021) that ask potential GiveDirectly beneficiaries about their preferences between saving lives and having more money, and the beneficiaries say they care more about saving lives. Based on this, GiveWell and Open Phil have prioritized healthcare interventions over GiveDirectly-style cash transfer programs (a toy sketch below shows how such survey results can feed a prioritization calculation).
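
As an aside, here is the toy sketch mentioned above of how survey-derived preferences can become "moral weights" in a prioritization calculation. Every number below is hypothetical and mine, not IDinsight's, GiveWell's, or Open Phil's.

```python
# Toy moral-weights comparison (hypothetical numbers throughout).
# Suppose surveys imply beneficiaries value averting one death about as
# much as 100 doublings of someone's consumption; that ratio then lets a
# funder compare a health program against cash transfers in common units.

value_of_death_averted = 100.0       # in "consumption doublings" (hypothetical)

# Health program (hypothetical cost-effectiveness):
cost_per_death_averted = 5000.0      # dollars
health_value_per_dollar = value_of_death_averted / cost_per_death_averted

# Cash transfers (hypothetical):
cost_per_consumption_doubling = 300.0   # dollars
cash_value_per_dollar = 1.0 / cost_per_consumption_doubling

print(f"Health: {health_value_per_dollar:.4f} doublings-equivalent per dollar")  # 0.0200
print(f"Cash:   {cash_value_per_dollar:.4f} doublings-equivalent per dollar")    # 0.0033
# With these made-up numbers the health program wins, which mirrors how a
# high surveyed value on life-saving can push funders toward health programs.
```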

Also, who do you understand to be the primary "beneficiaries" here -- toddlers, families, communities, nations, all of the above?

IIRC, and based on GiveWell's rationales, the bulk of the benefit from its recommended charities comes from saving the lives of under-5s. If one thinks the beneficiaries are toddlers, how does one "shift[] our power to them"? Does your answer have implications for domestic aid programs, under which we pay, e.g., thousands per kid per year for healthcare with no real power-shifting option?

My understanding is you are unsupportive of earning-to-give. I agree the trappings of expensive personal luxuries are both substantively bad (often) and poor optics. But the core idea that some people are very lucky and have the opportunity to earn huge amounts of money which they can (and should) then donate, and that this can be very morally valuable, seems right to me. My guess is that regardless of your critiques of specific charities (bednets, deworming, CATF) you still think there are morally important things to do with money. So what do you think of ETG - why is the central idea wrong (if you indeed think that)?

I thought he spelled out his ETG criticism quite clearly in the article, so I'll paraphrase what I took from it here.

I think he would argue that, for the same person in the same job, donating X% of their money is a better thing. However, the ETG ethos that has hung around in the community promotes seeking out extremely high-paying jobs in order to donate even more money. These jobs often bring about more harms in turn (both in an absolute sense, and possibly even to the point that ETG is net-negative, as in the case of SBF), especially if we live in an economic system that rewards behaviour that profits off negative externalities.

Jason
Then perhaps one question could be whether he thinks ETG as an idea is per se problematic, or whether the main point is roughly that it needs to be better channeled / have more robust guardrails. For instance, in conventional morality, being a neurosurgeon is seen as a prosocial, beneficial activity (even though they make some serious coin, at least in the US). One might think that encouraging people to become neurosurgeons is benign at worst. In contrast, even apart from SBF, one might think crypto is net-negative for the world, and that at least certain pathways to getting rich in crypto rely on behavior that is morally problematic. (I am not expressing an opinion on that perspective beyond noting that it is plausible on its face.) Furthermore, one might believe that ETG creates fertile ground for motivated reasoning, such as dismissing the harms of crypto because it produces a lot of money to further EA aims. That seems much less of a concern for neurosurgery, even though I actually do have some criticisms of that field too!
huw
I think this is a good reframing that would reveal something more interesting!
David T
There's also an implicit criticism of the idea that merely funding something makes you as responsible for outcomes, and as altruistic, as the people using your money to deliver those outcomes (even if it's a small fraction of your enormous disposable income), and of the related (but considerably less fashionable in EA than it used to be) idea that ETG delivers more value than direct work.
Jason
One related question might be what he would recommend to [1] individuals whose talents, interests, opportunities, resources, and so forth don't line up well with direct work, and [2] those who try to break into direct work, but are unable to do so due to resource constraints.

Wenar's critique of GiveWell raises two sets of questions for me -- one on materiality, and one on deference.

Materiality

One strand of criticism relates to GiveWell possibly withholding information from donors to whom it is recommending charitable opportunities -- e.g., the number of lives lost as a result of the intervention. What standard would Wenar apply in determining whether a downside is sufficiently material that it needs to be disclosed to prospective donors? Also, does he think GiveWell et al. perform worse at disclosing potential downsides than other charities / charitable recommenders, or is this critique broadly applicable to charities writ large?

Another strand suggests that GiveWell hasn't done enough due diligence in investigating potential harms. (Arguably, similar logic would apply to indirect benefits as well.) For GiveWell to do more research/analysis would presumably mean more money going to privileged Western insiders and less money going to work in developing countries. How should GiveWell decide when enough has been done?

My initial response to both strands would be that disclosure is material when there is a reasonable probability that disclosing information about downsides (plus any previously-undisclosed upsides of comparable magnitude) would change a significant number of donor decisions on where to donate. Likewise, further research would generally be material if there were a reasonable probability of changing GiveWell's recommendation or significant donor action. But one challenge here is: which donors? Do we look to donors who are roughly in alignment with GiveWell's basic moral framework? All potential donors in the country in which it is fundraising?

Deference

It seems almost inevitable that donors have to defer to someone. For a small-time donor like myself, travelling to research possibilities would consume my available budget. And there's no ex ante reason to think that people with more to donate are somehow more likely to intuit the correct donation opportunities themselves.

Wenar discusses the downsides that he perceives to deferring to GiveWell. One could instead defer to some version of Wenar's friend Aaron. But there are many different Aarons in the world; some would recommend good work, some would recommend poor work -- and Wenar seems to think a lot of foreign aid projects are problematic. I only know a few Aarons. If I went down this path, how worried should I be that none of my Aarons actually have good ideas, or that I am not very skilled at picking one of the better Aarons to defer to?

Likewise, there seems to be a suggestion that deferring to the people who live in the country is often a good idea. In the country where I live (the United States), a lot of local ideas about how to improve things are pretty bad. A glance at our political spectrum confirms that a significant majority of Americans agree that many ideas of at least 40% of other voters are pretty horrid (even though they disagree about which ideas those are). I'd worry that a foreign donor deferring to some subset, or even some majority, of the American people could easily make things considerably worse. How would Wenar suggest Western donors decide which local groups to defer to?

Comments

Questions designed to trip him up or teach him a lesson are emotionally tempting, but don't seem very useful to me. Better to ask him how he thinks practical stuff can be improved, or what he thinks particularly big mistakes of GiveWell or other EA orgs were in terms of funding decisions, not broad philosophy (we've all heard standard objections to consequentialism before). I suspect he won't have any good suggestions on the former (edit: I originally said "latter" by mistake), but you never know.

Hi Arden, did you get answers to the questions in the comments yet?

Why did he link to a $20 book for Famine, Affluence, and Morality when the PDF is easily available online for free? 
