EA storytelling
Research That Can Help Us Improve, Values and Reflective Processes, Effective Altruism
The stronger the stories EA tells, the more people will be convinced to act on EA ideas in their own lives. We're interested in funding people with a proven track record in storytelling, including generating viral content, to create EA stories that could reach millions of people.
(Potentially extends existing Project Ideas 'A fund for movies and documentaries' and 'Critiquing our approach'.)
Project ideas from this page that are relevant to this idea:
EA-themed Superhero Graphic Novel / Shounen Anime / K Drama (jknowak)
Values and Reflective Processes
Thousands of institutions have the potential to do more good but are hampered by dysfunctions such as excess bureaucracy, internal politics, and misalignment between the values they and their employees hold and their actions. These dysfunctions are often well known to their employees, yet they persist.
We're excited to fund proposals to study institutional dysfunction and investigate solutions, as well as tools to monitor dysfunctions that lead to poor outcomes (from an EA perspective) and to empower employees to solve them.
--
Project ideas from this page that are relevant to this idea:
Longtermist democracy / institutional quality index (evelynciara)
Longtermism Policy Lab (JBPDavies)
A global observatory for institutional improvement opportunities (IanDavidMoss)
Platform Democracy Institutions (aviv)
Scaling successful policies (SjirH)
Representation of future generations within major institutions (SjirH)
--
Existing example of work in this space: Joe Edelman's 'Values-Based Social Design'.
(This idea is potentially related to existing Project Idea ‘Institutional experimentation’.)
Research into why people don't like EA
Research That Can Help Us Improve
Many people have heard of EA but weren't convinced. We want to understand why, so that we can find approaches that will convince them. If we can win more people over to EA, we can directly increase the impact that EA has in the world.
We’re excited to fund proposals to research why people do and don’t like EA, and the approaches that are most effective in winning people over to EA.
(Potentially extends existing Project Idea 'Critiquing our approach'.)
If the info is useless, then the opportunity cost is certainly too high. There's an information problem here: establishing the usefulness of the info before paying for it. This might have to involve some 'weak' breach of the NDA, e.g. sharing the info, or some meta-info, with trusted experts (ironically, perhaps under yet another NDA for protection!).
In this case, for instance, I think it would be useful if Bengio, Hinton, and Russell (endorsers of the letter) were informally told about the info, at least in broad strokes, and could then make a pitch to someone to put up funds for full disclosure. Or they could say 'Nah, it's not worth it', but either way, there would be a system in place.
I'd also hope a person would sacrifice personal wealth, but I don't think hope is a good strategy, and I can see how more disclosure would happen if more funds were available to protect whistleblowers before they decide to blow the whistle. Banking on financial support after the fact is, I'd think, too scary for most people.
I agree re: the incentive misalignment, and that's a problem. I wonder whether some kind of contract between the whistleblower and the funder could help there.