
LGS

67 karma · Joined Jan 2023 · 22 comments

Oh, I should definitely clarify: I find effective altruism the philosophy, as well as most effective altruists and their actions, to be very good and admirable. My gripe is with what I view as the "EA community" -- primarily places like this forum, organizations such as CEA, and participants in EA Global. The more central something is to EA-the-community, the less I like its ideas.

In my view, what happens is that there are a lot of EA-ish people donating to GiveWell charities, and that's amazing. And then the EA movement comes along and says "but actually, you should really give the money to [something ineffective that's also sometimes in the personal interest of the person speaking]," and some people get duped. So forums like this one serve to take money that would go to malaria nets and try as hard as they can to redirect it to less effective charities.

So, to your questions: how many people are working towards bee welfare? Not many. But on this forum it's a common topic of discussion (often with things like nematodes instead of bees). I haven't been to EA Global, but I know where I'd place my bets for what receives attention there. Though honestly, both HLI and the animal welfare stuff are probably small potatoes compared to AI risk and meta-EA, two areas in which these dynamics play an even bigger role (and in which there are even more broken thermometers and conflicts of interest).

I disagree with you on several points.

The most important thing to note here is that, if you dig through the various long reports, the tradeoff is:

  1. With $7800 you can save the life of a child, or
  2. If you grant HLI's assumptions regarding costs (and I'm a bit skeptical even there), you can provide a multi-week course of group therapy (I think 12 sessions of 90 minutes each) to 60 people for that same cost.

Which is better? Well, right off the bat, if you think mothers would value their children at 60x what they value the therapy sessions, you've already lost.

Of course, the child's life also matters, not just the mother's happiness. But HLI has a range of "assumptions" regarding how good a life is, and under many of these assumptions the life of the child is indeed fairly valueless compared to improvements in the mother's welfare (because life is suffering and death is OK, basically).

All this is obfuscated under various levels of analysis. Moreover, in HLI's median assumption, not only is the therapy more effective, it is 5x more effective. They are saying: the number of group therapies that equal the averted death of a child is not 60, but rather, 12.
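To spell out the arithmetic behind that claim (a rough back-of-the-envelope on my part, using the figures quoted above; the exact numbers are only as good as my reading of the reports):

```python
# Rough back-of-the-envelope using the figures quoted above (approximate).
cost_to_save_child = 7800            # ~cost to avert one child's death (USD)
therapy_courses_per_same_cost = 60   # HLI's cost assumption: courses of group therapy per $7800

cost_per_course = cost_to_save_child / therapy_courses_per_same_cost
print(f"~${cost_per_course:.0f} per course of group therapy")          # ~$130

# HLI's median result: ~12 courses of therapy produce as much well-being
# as averting one child's death.
courses_equal_to_one_death_averted = 12
cost_to_match_death_via_therapy = courses_equal_to_one_death_averted * cost_per_course
print(f"~${cost_to_match_death_via_therapy:.0f} to 'match' one averted death")  # ~$1560

# Implied cost-effectiveness multiple of therapy over saving the child:
print(cost_to_save_child / cost_to_match_death_via_therapy)            # ~5x
```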

To me that's broken-thermometer level.

I know the EA community is full of broken thermometers, and it's actually one of the reasons I do not like the community. One of my main criticisms of EA is, indeed, "you're taking absurd numbers (generated by authors motivated to push their own charities/goals) at face value". This also happens with animal welfare: there's this long report and 10-part forum series evaluating animals' welfare ranges, and it concludes that 1 human has the welfare range of (checks notes) 14 bees. Then others take that at face value and act as if a couple of beehives or shrimp farms are as important as a human city.

I am skeptical of any argument that would significantly incentivize organizations to keep their analyses close to the chest.

This is not the first time I've had this argument made to me when I criticize an EA charity. It seems almost like the default fallback. I think EA has the opposite problem, however: nobody ever dares to say the emperor has no clothes, and everyone goes around pretending 1 human is worth 14 bees and a group therapy session increases welfare by more than the death of your child decreases it.

Yes. There is a large range of such numbers, and I am not sure of the right tradeoff. I would intuitively expect a billion therapy sessions to be an overestimate (i.e. clearly more valuable than the life of a child), and a thousand to be an underestimate, but I didn't do any calculations, so who knows. HLI is claiming (checks notes) ~12.

To flip the question: Do you think there's a number you would reject for how many people treated with psychotherapy would be worth the death of one child, even if some seemingly-fancy analysis based on survey data backed it up? Do you ever look at the results of an analysis and go "this must be wrong," or is that just something the community refuses to do on principle?

Sorry for confusing you for Joel!

Personally -- I am skeptical that the positive effect of therapy exceeds the negative effect of losing one's young child on a parent's own well-being.

It's good to hear you say this.

In any event -- given that SM can deliver many courses of therapy with the resources AMF needs to save one child, the two figures don't need to be close.

Definitely true. But if a source (like a specific person or survey) gives me absurd numbers, it is a reason to dismiss it entirely. For example, if my thermometer tells me it's 1000 degrees in my house, I'm going to throw it out. I'm not going to say "even if you merely believe it's 90 degrees we should turn on the AC". The exaggerated claim is disqualifying; it decreases the evidentiary value of the thermometer's reading to zero.

When someone tells me that group therapy is more beneficial to the mother's happiness than saving her child from death, I don't need to listen to that person anymore. And if it's a survey that tells me this, throw out the survey. If it's some fancy academic methods and RCTs, the interesting question is where they went wrong, and someone should definitely investigate that, but at no point should people take it seriously.

By all means, let's investigate how the thermometer possibly gave a reading of 1000 degrees. But until we diagnose the issue, it is NOT a good idea to use "1000 degrees in the house" in any decision-making process. Anyone who uses "it's 1000 degrees in this room" as a placeholder value for making EA decisions is, in my view, someone who should never be trusted with any levers of power, as they cannot spot obvious errors that are staring them in the face.

Thanks for your response.

If the mother would rather have her child alive, then under what definition of happiness/utility do you conclude she would be happier with her child dead (but getting therapy)? I understand you're trying to factor out the utility loss of the child; so am I. But just from the mother's perspective alone: she prefers scenario X to scenario Y, and you're saying it doesn't count for some reason? I don't get it.

I think you're double-subtracting the utility of the child: you're saying, let's factor it out by not asking the child his preference, and ALSO let's factor it out a second time by not letting the mother be sad about the child not getting his preference. But the latter is a fact about the mother's happiness, not the child's.

Second, the hypothetical mother would have to live with the guilt of knowing she could have saved her child but chose something for herself.

Let's add memory loss to the scenario, so she doesn't remember making the decision.

Finally, GiveWell-type recommendations often would fail the same sort of test. Many beneficiaries would choose receiving $8X (where X = bednet cost) over receiving a bednet, even where GiveWell thinks they would be better off with the latter.

Yes, and GiveWell is very clear about this, and most donors bite the bullet (people make irrational decisions with regard to small risks of death, and bednets also have positive externalities for the rest of the community). Do you bite the bullet that says "the mother doesn't know enough about her own happiness; she'd be happier with therapy than with a living child"?

Finally, I do hope you'll answer regarding whether you have children. Thanks again.

I appreciate your candid response. To clarify further: suppose you give a mother a choice between "your child dies now (age 5), but you get group therapy" and "your child dies in 60 years (age 65), but no group therapy". Which do you think she will choose?

Also, if you don't mind answering: do you have children? (I have a hypothesis that EA values are distorted by the lack of parents in the community; I don't know how to test this hypothesis. I hope my question does not come off as rude.)

Zooming out a little: is it your view that group therapy increases happiness by more than the death of your child decreases it? (GiveWell is saying that this is what your analysis implies.)

Oh, I didn't mean for you to make the decision in the middle of pain!

The scenario is: first, you experience 5 minutes of pain. Then take a 1-hour break. Then decide: 1 hour of pain, or a dead child. No changing your mind once you've decided.

The possibility that pain may twist your brain into taking actions you do not endorse when not under duress is interesting, but not particularly morally relevant. We usually care about informed decisions not made under duress.

First of all, I doubt it. People don't even commit suicide to avoid 1 hour of pain (usually the suicide-due-to-pain cases are people who don't anticipate ever getting better).

Second, even assuming you're right, what happens in that world is that the emotional pain still trumps the physical pain. Like, if people prefer their own pain to their child's death, then the death of a child is worse than the pain of a hermit (someone with no family). It's not necessarily worse than the pain of a child... but only if that child has parents. Is that your model? It has important implications.
