The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB) and perhaps even update towards its potential value as a metric. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism of them, the hammering they took felt rough at best and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing and more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or receive surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity, I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass, especially when HLI's funding is less than that of many orgs which have not been scrutinised as much?
* (Low certainty) The degree of scrutiny and analysis applied to development orgs like HLI seems to exceed that applied to AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart HLI's analysis. Expert-level scrutiny is fantastic; I just wish it were applied to other orgs as well. Very few EA orgs (at least among those that post on the forum) produce full papers with publishable-level deep statistical analysis, as HLI have at least attempted to do. Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs got more scrutiny than that development orgs got less.
Other orgs might see threads like the HLI funding thread hammering and compare them with other threads where orgs are criticised and don't engage
Someone pinged me a message on here asking about how to donate to tackle child sexual abuse. I'm copying my thoughts here.
I haven't done a careful review on this, but here's a few quick comments:
* Overall, I don't know of any charity which does interventions tackling child sexual abuse, and which I know to have a robust evidence-and-impact mindset.
* Overall, I have the impression that people who have suffered from child sexual abuse (hereafter CSA) can suffer greatly, and that alleviating this suffering after the fact is intractable. My confidence in this is medium -- I've spoken with enough people to be confident that it's true at least some of the time, but I'm not clear on the academic evidence.
* This seems to point in the direction of prevention instead.
* There are interventions which aim to support children to avoid being abused. I haven't seen the evidence on this (and suspect that high quality evidence doesn't exist). If I were to guess, I would guess that the best interventions probably do have some impact, but that impact is limited.
* To expand on this: my intuition says that the less able a child is to protect themselves, the more damage the CSA does. I.e. we could probably help a confident 15-year-old avoid being abused, and that teenager would likely suffer different -- and, I suspect, on average less bad -- consequences than a 5-year-old; but helping the 5-year-old might be very intractable.
* This suggests that work to support the abuser may be more effective.
* It's likely also more neglected, since donors are typically more attracted to helping a victim than a perpetrator.
* For at least some paedophiles, although they have sexual urges toward children, they also have a strong desire to avoid acting on them, so operating cooperatively with them could be somewhat more tractable.
* Unfortunately, I don't know of any org which does work in this area, and which has a strong evidence culture. Here are some examples:
* I considered volunteering with Circles many years ago
A small exercise to inspire empathy/gratitude for people who grew up with access to healthcare:
If you'd lived 150 years ago, what might you have died of as a child?
I got pneumonia when I was four and it probably would have killed me without modern medicine.
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a ‘how much does it cost to save a life’ quiz and calculator.
This could be adjustable/customizable (in my country, around the world, for an infant/child/adult, counting ‘value-added life years’, etc.), and we could try to make it go viral (or at least bacterial), as the ‘how rich am I’ calculator did.
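A minimal sketch of what the calculator's core logic might look like. All numbers here are illustrative placeholders: the $5,000 figure is a rough stand-in in the range GiveWell has published for its top charities, and the life-years figure is a hypothetical assumption; a real tool would pull vetted, regularly updated estimates.

```python
# Sketch of a 'cost to save a life' calculator. Placeholder numbers only.

COST_PER_LIFE_USD = 5_000   # illustrative placeholder, not an official GiveWell figure
AVG_LIFE_YEARS_SAVED = 50   # hypothetical: young children gain many life-years

def lives_saved(donation_usd: float) -> float:
    """Expected lives saved by a donation, under the placeholder estimate."""
    return donation_usd / COST_PER_LIFE_USD

def life_years_saved(donation_usd: float) -> float:
    """Rough 'value-added life years' equivalent of the same donation."""
    return lives_saved(donation_usd) * AVG_LIFE_YEARS_SAVED

print(lives_saved(1_000))       # 0.2 lives
print(life_years_saved(1_000))  # roughly 10 life-years
```

The interactive version would simply let users adjust the donation amount, country, and per-life cost estimate, and render the result compellingly.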
The case
1. People might really be interested in this… it’s super-compelling (a bit click-baity, maybe, but the payoff is not click bait)!
2. It may make some news headlines too (it’s an “easy story” for media people, asks a question people can engage with, etc.: ‘How much does it cost to save a life? Find out after the break!’)
3. If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this conception, to help us build a reality-based, impact-based, evidence-based community and society of donors
4. similarly, it could get people thinking about ‘how to really measure impact’ --> consider EA-aligned evaluations more seriously
GiveWell has a page with a lot of technical detail, but it’s not compelling or interactive in the way I suggest above, and I doubt they market it heavily.
GWWC probably doesn't have the design/engineering time for this (not to mention refining this for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork I think they might be very happy to host it.
It could also mesh well with academic-linked research, so I may have some ‘meta academic support ads’ funds that could go towards this.
Tags/backlinks (~testing out this new feature)
@GiveWell @Giving What We Can
Projects I'd like to see
EA Projects I'd Like to See
Idea: Curated database of quick-win tangible, attributable projects
Hi everyone, I am Jia, co-founder of Shamiri Health, an affordable mental health start-up in Kenya. I am thinking of writing up something on the DALY cost-effectiveness of investing in our company. I am very new to the community, and I wonder if I can solicit some suggestions on a good framework for evaluating the cost-effectiveness of impact investment in healthcare companies.
I think there could be two ways to go about this: 1) take an investment amount and, using some cash-flow modeling, figure out how many users we can reach with that investment, then base the calculation on that user base; or 2) do a comparative analysis with a more mature company in a different country, use its % of population reached as our "terminal impact reach", and then use that terminal user base as the base of the calculation.
The first approach is no doubt more conservative, but the latter, in my opinion, is the true impact counterfactual. Without the investment, we will likely not be able to raise enough funding, since our TAM is not particularly attractive to non-impact investors. The challenge with the latter is estimating the "likelihood of success" of us carrying out the plan to reach our terminal user base. How would you estimate this likelihood? I would think it varies case by case, and one should factor in the team, the business model, the user goal, and the market, which is closer to a venture capital model of evaluating companies. What is the average success rate for impact ventures?
TLDR:
1. What is the counterfactual of impact investing: the immediate DALYs that could be averted, or the terminal DALYs that could be averted?
2. What is the average success rate of impact healthcare ventures at reaching their impact goals?
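To make the two framings concrete, here is a toy comparison. Every number below (cost per user, comparator reach, success probability, DALYs per user) is a hypothetical placeholder, not a Shamiri Health figure; the point is only the structure of the two calculations.

```python
# Toy comparison of the two counterfactual framings. Hypothetical numbers only.

def dalys_averted(users: float, dalys_per_user: float) -> float:
    """Total DALYs averted given a user base and a per-user effect."""
    return users * dalys_per_user

# Approach 1: immediate reach bought directly by the investment.
investment_usd = 1_000_000
cost_per_user = 20                 # hypothetical cost to serve one user
immediate_users = investment_usd / cost_per_user

# Approach 2: 'terminal' reach benchmarked on a mature comparator company,
# discounted by an estimated probability of executing the plan.
population = 10_000_000
comparator_reach_pct = 0.05        # hypothetical mature-company % reach
p_success = 0.15                   # hypothetical venture success rate
terminal_users = population * comparator_reach_pct * p_success

dalys_per_user = 0.1               # hypothetical DALYs averted per user served

print(dalys_averted(immediate_users, dalys_per_user))  # ~5,000 DALYs
print(dalys_averted(terminal_users, dalys_per_user))   # ~7,500 DALYs
```

Note that in this structure the whole comparison hinges on `p_success`, which is exactly the "likelihood number" asked about above: with these placeholders, a success probability below ~0.1 would make the terminal framing smaller than the immediate one.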
I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.
Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence: instead of assuming (or rationalizing) that the spaces of interventions that are promising from neartermist and longtermist perspectives overlap a lot, we tend to assume they don't overlap at all, because it's more surprising if the top longtermist causes all differ from the top neartermist ones. If the cost-effectiveness of a cause under neartermism and its cost-effectiveness under longtermism are independent (or at least somewhat positively correlated), I'd expect at least some causes to be valuable according to both ethical frameworks.
I've noticed this in my own thinking, and I suspect that this is a common pattern among EA decision makers; for example, Open Phil's "Longtermism" and "Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.
Consider global health and poverty. These are usually considered "neartermist" causes, but we can also tell a just-so story about how global development interventions such as cash transfers might also be valuable from the perspective of longtermism:
* People in extreme poverty who receive cash transfers often spend the money on investments as well as consumption. For example, a study of GiveDirectly's cash transfers found that recipients owned 40% more durable goods (assets) than the control group. Also, anecdotes show that cash transfer recipients often spend their funds on education for their kids (a type of human capital investment), starting new businesses, building infrastructure for their communities, and h
Someone accidentally donated $15,000 instead of $150 to their neighbour's charity in Bangladesh. Before they could get a refund they were inundated with pictures and videos from the grateful recipients.
In addition to then donating $1,500 rather than the $150 originally planned, they also told the story of their blunder on Reddit, which went viral and led ~3,000 people to donate ~$100,000.
Warm fuzzies galore
Humans have the right to freedom of speech, movement, and association. Children are humans. In almost all countries on Earth, children are required by law to go to school for several years. In some countries (e.g. the US and European countries), children must stay in class for specific periods of time, move only in ways their teachers approve of, and not talk with others for certain periods; if they don't answer tests with the correct answers, they receive lower grades, which can affect their future lives. Sometimes, if they refuse to go to school for long enough, they can be sent to a juvenile detention center or a psychiatric institution, or be given medications.
I think that compulsory schooling violates children's human rights to freedom of speech, movement and/or association. I think that children need to have the right to not go to schools and opportunities for play and exploration that are not compulsory.
What do you think of this? Am I right, or wrong on some points? Even if you think that this isn't a pressing issue, what ideas do you have for ameliorating this issue?