The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB), and perhaps even updated us towards its potential value as a measure. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism levelled at them, the hammering they took felt rough at best, and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing - more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or they receive surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (out of charity, I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI's funding is less than that of many orgs that have not been scrutinised as closely.
* (Low certainty) The degree of scrutiny and analysis applied to development orgs like HLI seems to exceed that applied to AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense - more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well. Very few EA orgs (at least among those that have posted on the forum) produce full papers with publishable-level deep statistical analysis, as HLI have at least attempted to do. Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs got more scrutiny than development orgs got less.
Other orgs might see the hammering in threads like the HLI funding thread, compare it with threads where criticised orgs don't engage at all, and conclude that staying silent is the safer strategy.
I was put off that 80,000 Hours advises "if you find you aren’t interested in [The Precipice: Existential Risk], we probably aren’t the best people for you to get advice from". I had hoped there was more general advising beyond just those interested in existential risk.
Why doesn't EA focus on equity, human rights, and opposing discrimination (as cause areas)?
KJonEA asks:
'How focused do you think EA is on topics of race and gender equity/justice, human rights, and anti-discrimination? What do you think are factors that shape the community's focus?'
In response, I ended up writing a lot of words, so I thought it was worth editing them a bit and putting them in a shortform. I've also added some 'counterpoints' that weren't in the original comment.
To lay my cards on the table: I'm a social progressive and leftist, and I think it would be cool if more EAs thought about equity, justice, human rights and discrimination - as cause areas to work in, rather than just within the EA community. (I'll call this cluster just 'equity' going forward). I also think it would be cool if left/progressive organisations had a more EA mindset sometimes. At the same time, as I hope my answers below show, I do think there are some good reasons that EAs don't prioritize equity, as well as some bad reasons.
So, why don't EAs prioritize gender and racial equity as cause areas?
1. Other groups are already doing good work on equity (i.e. equity is less neglected)
The social justice/progressive movement has got feminism and anti-racism pretty well covered. On the other hand, the central EA causes - global health, AI safety, existential risk, animal welfare - are comparatively neglected by other groups. So it kinda makes sense for EAs to say 'we'll let these other movements keep doing their good work on these issues, and we'll focus on these other issues that not many people care about'.
Counter-point: are other groups using the most (cost-)effective methods to achieve their goals? EAs should, of course, be epistemically modest; but it seems that (e.g.) someone steeped in both EA and feminism might have some great suggestions for how to effectively improve gender equality and women's experiences.
2. Equity work isn't cost-effective
EAs care a lot about cost-effectiveness, and it's hard to show that equity interventions are cost-competitive with the top interventions in global health or animal welfare.
Someone pinged me a message on here asking about how to donate to tackle child sexual abuse. I'm copying my thoughts here.
I haven't done a careful review on this, but here's a few quick comments:
* Overall, I don't know of any charity which does interventions tackling child sexual abuse, and which I know to have a robust evidence-and-impact mindset.
* Overall, I have the impression that people who have suffered from child sexual abuse (hereafter CSA) can suffer greatly, and that tackling this suffering after the fact is intractable. My confidence in this is medium -- I've spoken with enough people to be confident that it's true at least some of the time, but I'm not clear on the academic evidence.
* This seems to point in the direction of prevention instead.
* There are interventions which aim to support children to avoid being abused. I haven't seen the evidence on this (and suspect that high quality evidence doesn't exist). If I were to guess, I would guess that the best interventions probably do have some impact, but that impact is limited.
* To expand on this: my intuition says that the less able the child is to protect themselves, the more damage the CSA does. I.e. we could probably help a confident 15-year-old avoid being abused, but that child would likely suffer different -- and, I suspect, on average less bad -- consequences than a 5-year-old would; helping the 5-year-old might be very intractable.
* This suggests that work to support the abuser may be more effective.
* It's likely also more neglected, since donors are typically more attracted to helping a victim than a perpetrator.
* For at least some paedophiles, although they have sexual urges toward children, they also have a strong desire to avoid acting on them, so operating cooperatively with them could be somewhat more tractable.
* Unfortunately, I don't know of any org which does work in this area, and which has a strong evidence culture. Here are some examples:
* I considered volunteering with Circles many years ago.
I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.
Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the sets of interventions that are promising from neartermist and longtermist perspectives overlap a lot, we tend to assume they don't overlap at all, even though it would arguably be more surprising if the top longtermist causes were all different from the top neartermist ones. If the cost-effectiveness of different causes according to neartermism and longtermism is independent (or at least somewhat positively correlated), I'd expect at least some causes to be valuable according to both ethical frameworks.
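To make that last claim concrete, here's a minimal simulation sketch. All the numbers are hypothetical (1,000 candidate causes, log-normal cost-effectiveness drawn independently under each framework); the point is just that even with zero correlation, roughly 1% of causes should land in the top decile under both frameworks:

```python
import numpy as np

rng = np.random.default_rng(0)
n_causes = 1000  # hypothetical number of candidate causes

# Independent draws of cost-effectiveness under each framework
near = rng.lognormal(mean=0, sigma=2, size=n_causes)
longterm = rng.lognormal(mean=0, sigma=2, size=n_causes)

# A cause "makes the cut" if it's in the top decile under a framework
top_near = near >= np.quantile(near, 0.9)
top_long = longterm >= np.quantile(longterm, 0.9)

# Under independence we expect ~ n * 0.1 * 0.1 = 10 causes in both top deciles
print((top_near & top_long).sum())
```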
I've noticed this in my own thinking, and I suspect that this is a common pattern among EA decision makers; for example, Open Phil's "Longtermism" and "Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.
Consider global health and poverty. These are usually considered "neartermist" causes, but we can also tell a just-so story about how global development interventions such as cash transfers might also be valuable from the perspective of longtermism:
* People in extreme poverty who receive cash transfers often spend the money on investments as well as consumption. For example, a study by GiveDirectly found that people who received cash transfers owned 40% more durable goods (assets) than the control group. Also, anecdotes show that cash transfer recipients often spend their funds on education for their kids (a type of human capital investment), starting new businesses, and building infrastructure for their communities.
Assuming that interventions have log-normally distributed impact, compromising on interventions for the sake of public perception is not worth it unless it brings in exponentially more people.
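For intuition, here's a toy sketch of the arithmetic (not a claim about real impact distributions): under a log-normal, moving down the percentiles loses a multiplicative factor of impact per person, so a compromise intervention has to multiply the number of people it brings in just to break even. This assumes a hypothetical spread of sigma = 2 and a hypothetical compromise from the 99th- to the 90th-percentile intervention:

```python
from scipy.stats import lognorm

sigma = 2.0  # hypothetical spread of log-impact across interventions
dist = lognorm(s=sigma)

top = dist.ppf(0.99)         # per-person impact of a top-1% intervention
compromise = dist.ppf(0.90)  # per-person impact of a top-10% intervention

# Multiplier on people reached that the compromise needs to break even:
# exp(sigma * (z_0.99 - z_0.90)) ~= exp(2 * 1.04) ~= 8x
print(top / compromise)
```

Under these (made-up) parameters, a seemingly modest compromise of nine percentiles requires roughly eight times as many people to be impact-neutral, and the multiplier grows rapidly with sigma.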
I wonder if anyone has moved from longtermist cause areas to neartermist cause areas. I was prompted by reading the recent Carlsmith piece and Julia Wise's Messy personal stuff that affected my cause prioritization.
Sanity check? Help please, thank you.
My mind is blown by what's going on right now and we definitely need to change.
At the same time, I can't help but think, there are only like, roughly guessing 70-90 people in CEA? And there's like 10k-15k in the community? Correct me if I'm wrong.
*(Added this part to clarify that I am not saying how we should ask CEA to function at all. This shortform is to ask around so I can sort out my own rationality)
In my head, there are questions like:
1. How are CEA going to implement all of the changes we need?
2. Even if they did, will they have the time and experience to do it right?
3. At this point, do we really want CEA to do this?
4. Should CEA even do this?
5. Would we prefer a separate org/service that prevents these problems and works closely with CEA instead, to minimize conflicts of interest?
6. Should someone make a more thorough case for community health as a cause area, to help the longevity of the community/movement?
7. Am I just super biased, and is that why I think this?
8. Should I post this as a question instead? (I'm afraid I'll be taking attention away or looking like I'm collecting karma.)
I feel like I don't want to put so much of what we want to change on CEA, or even on the leaders/seniors/core, whatever you call it. I feel like it's better if more experienced experts/orgs/services/consultants lead the changes we need instead (I don't know if there's a suitable one, though). At the same time, I feel like I'm just biased. I don't want to keep feeling like I'm gaslighting myself from both sides, so I would highly appreciate it if you gave more concrete reasons for/against/something else entirely. Thank you.