I decided to write a list of posts I’d like to write, on the hypothesis that perhaps I can crowdsource interest or preemptively get people’s takes on how good each idea is, to a) prioritize my writing better and b) develop better intuitions for which systems/processes can determine in advance what research/writing is valuable. Note that I’m currently quite unlikely to write >2 of these posts unless I get substantive feedback to the contrary.
Unless explicitly stated otherwise, names/links/quotes of other people are referenced for partial attribution. They should not be construed as endorsements by those people, and on base rates it should be reasonable to assume that in this post I misrepresented someone at least once.
This post is written in my own capacity, and does not represent the position or output of my employer (RP) or past employers.
After the Apocalypse: Why Personal Survival in GCR Scenarios Should Be Unusually High Priority For Altruists/Consequentialists
- I think if your ensemble of beliefs includes substantial credence in both urgent and patient longtermism, this should lead to a fairly high credence in the importance of the survival and proliferation of certain key ideas
- One way to ensure the survival of those ideas is through the survival of individuals with those ideas
- This is especially relevant if you have high credence in the probability of large-scale non-existential GCRs, particularly ones with a population fatality rate of closer to 99.99% than 50%.
- An alternative way to frame this is to consider the analogy to the Hinge of History hypothesis.
- All else equal, an individual is more likely to live at the hinge of history if there are 6.5 million other humans rather than 6.5 billion (see the toy calculation below).
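To gesture at the arithmetic behind that last bullet (this is my own toy framing, not an argument spelled out in the post): suppose some small, roughly fixed number $k$ of people in a generation turn out to be pivotal for the long-run future. Then a randomly chosen individual’s chance of being among them is

$$P(\text{pivotal}) = \frac{k}{N},$$

which is about 1,000 times higher when $N = 6.5 \times 10^6$ than when $N = 6.5 \times 10^9$.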
Shelters MVP: A Concrete Proposal to Robustly Mitigate Global Catastrophes
- I haven’t yet seen shelter designs with all the desiderata that I’d like to see.
- I’ve seen preliminary discussions that analyzed shelter intervention viability/cost-effectiveness in the abstract, but nothing that looked at a specific, well-defined target to aim for (or decided it’s not worth doing).
- I think that while reducing GC risks is of utmost importance, for longtermist goals a potentially important part of the web-of-protection/defense-in-depth story should involve substantial work on catastrophe mitigation.
- I claim that for a number of GCRs (with the notable exception of AI and other agent-heavy risks), certain shelter designs should robustly reduce the overall harm.
- I suspect (without yet having done the numbers) that this may not end up being worthwhile to implement at the current margin; however, it is still worthwhile to have a blueprint ready as a robust baseline for GCR mitigation, so we have a direct comparison class/bar that marginal Open Phil/EA longtermist dollars must beat.
- (I’m currently more excited about this as a baseline for “last longtermist dollar,” akin to GiveDirectly for global poverty, than the clean energy funding that others in EA propose).
Moral Circle Expansion: Is it Highly Overrated?
- Many EAs (myself somewhat included) believe in some form of moral circle expansion as something that a) descriptively happened/is happening, b) is worthwhile to have, and c) is plausibly worth EA effort to ensure it happens.
- I think I (used to?) believe in some version of this, at the risk of oversimplifying:
- The story of moral progress is in large part a story of the expansion of who we choose to care about: from the individual outwards to family, tribe, nation, and race, and then to people of other races, locations, sexualities, and mental architectures. Future moral progress may come from us caring about more entities worthy of moral consideration (“moral circle expansion”). However, this expansion of concern is not automatic, and may require dedicated effort from EAs.
- However, there are a number of pertinent critiques of this from a number of different angles, which I think are a) AFAICT not collected in one place and b) underexplored:
- Critique from history of moral psychology:
- https://www.gwern.net/The-Narrowing-Circle
- Essentially, many things that used to be in our moral circle (ancestors, plants, spirits, even animals) are no longer in modern WEIRD moral circles.
- Thus, to the extent that the shifting moral circle/sphere of concern is good, this is less due to an overall expansion of concern, and more due to us having more precise and accurate understandings of whom/what to care about.
- Critique from political history of expanding rights:
- https://forum.effectivealtruism.org/posts/LzjqLWN2vBmJwH38W/short-version-what-helped-the-voiceless-historical-case
- Essentially, the vast majority of the time, inclusivity shifts come from
- This being in the self-interest of political elites
- Increasing political power of the formerly voiceless
- This contrasts with MCE stories where the voiceless benefit from the beneficence/MCE of elites
- Critique from empirical moral psychology: Do people descriptively actually have a real moral circle?
- For example, who or what people evince concern for may be a highly variable, contextually specific, unstable thing, rather than a single circle that we can generally expand
- It’s plausible that broad moral circles matter a lot for people who are (intuitive) consequentialists, but not for many others, suggesting a ceiling on the value of MCE
- Evaluative critique from empirical moral psychology: Is MCE (in the relevant dimensions) even a net good?
- This is skipped in lots of conversations about MCE.
- There are at least two reasons to think otherwise:
- Moral purity frequently linked to bad outcomes
- Moral outrage, etc, not known to be unusually truth-tracking
- Morality-based reasoning often leads to black/white thinking, large-scale harms, etc.
- Increasing the moral circle of concern in practice may also lead to increasing the moral circle of judgment
- Intuitively (to consequentialist/EA types), moral patients are not necessarily moral agents
- However, most people don’t believe this in practice.
- Moral judgment may lead us to not just be more concerned along the “care” dimension, but also more willing to punish along the “retributive” dimension.
- (I got a lot of ideas about this from discussions with my coworker David Moss, particularly the empirical moral psychology stuff. The ideas are mostly not original to me).
- A version of moral circle expansion can plausibly be rescued from all these critiques, but it may end up looking very different
- Even so, it might still end up being fake/not worthwhile, for the above or other reasons.
How to Get Good At Forecasting: Lessons from Interviews With Over 100 Top Forecasters
- There appears to be broad interest within EA, particularly in the EA ∩ forecasting subcluster, in getting much better at forecasting.
- I’m interested in writing a list of ideas on how people can get much better at forecasting, but there are two major problems:
- 1. I’m not the best forecaster in the world
- 2. My own style of reasoning/forecasting might be sufficiently idiosyncratic that generalizing from it is less useful than averaging across lots of ideas.
- My initial solution to this was to interview a lot of forecasters.
- But then I realized that I know people who have interviewed far more top forecasters than I have. So a potentially better process is to interview them, to leverage more aggregated wisdom by aggregating aggregators, rather than doing the aggregating myself.
- I’m also conceptually interested in this idea because, for various reasons including research amplification and AI safety, solid ways to do meta-aggregation in various domains seem underexplored and valuable (see the toy sketch after this list).
- The structure of the post would look like a list of ideas sourced from interviews, maybe including anecdotes or worked-out processes for easier digestion.
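As a very loose illustration of what numerical meta-aggregation could look like (nothing in this post commits to a particular method; the pooling rule, function name, and example numbers below are all my own hypothetical choices), here is a minimal Python sketch that treats each aggregator’s already-pooled forecast as a single input and re-pools them via a weighted geometric mean of odds:

```python
import math

def pool_geometric_mean_odds(probs, weights=None):
    """Pool probability forecasts via a weighted geometric mean of odds.

    This is just one common pooling rule, used here to illustrate
    "aggregating aggregators": each input is itself an aggregator's
    already-aggregated forecast.
    """
    if weights is None:
        weights = [1.0] * len(probs)
    total_weight = sum(weights)
    # Convert each probability to log-odds, take the weighted average,
    # then map the pooled log-odds back to a probability.
    pooled_log_odds = sum(
        w * math.log(p / (1.0 - p)) for p, w in zip(probs, weights)
    ) / total_weight
    return 1.0 / (1.0 + math.exp(-pooled_log_odds))

# Hypothetical example: three aggregators (each already a crowd average)
# report probabilities for the same event.
print(pool_geometric_mean_odds([0.62, 0.70, 0.55]))  # ≈ 0.63
```

Geometric mean of odds is only one defensible rule; the underexplored question is which pooling rules behave well when the inputs are themselves aggregates rather than individual forecasts.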
What Are Good Humanities Research Ideas for Longtermism?
- As a continuation/extension of this post on history research ideas by my coworker MichaelA, I’m interested in tabulating a list of humanities research ideas that are potentially very important.
- Some of these grew out of conversations with MichaelA, Howie Lempel, Daniel Filan and others.
- Some things I’m particularly interested in:
- Humanities that are adjacent to fields that we already care about
- E.g., anthropology of people doing work in scientific labs/critical institutions
- Comparative literature studies of whether ambitious science fiction (this might not be easy to operationalize) is correlated with ambitious science fact.
- General question of utopianism/definite optimism/futuristic inclinations of cultures and microcultures.
- Can be studied from various social science and humanities angles
- If we have specific tales we’d like to see (e.g. something that makes longtermism or consequentialism S1-visceral), what insights can we learn from past work to scope this out in advance?
- I’m also interested in starting the seed of/encouraging someone better positioned than me to develop a broader framework/ontology for assessing which humanities research funding/work in general, or which specific humanities research projects in particular, are worth devoting marginal $s or researcher-hours to.
Acknowledgements
Thanks to Jake Mckinnon, Adam Gleave, Amanda Ngo, David Moss, Michael Aird, Peter Hurford, Dave Bernard, and I’m sure many others for conversations that helped inspire or crystallize some of these ideas. Thanks also to Saulius Simcikas, David Moss, Janique Belman, and especially Michael Aird for many constructive comments on an earlier draft of this post.
All mistakes and inaccuracies are, naturally, the fault of a) the boundary conditions of the universe and b) the Big Bang. Please do comment if/when you identify mistakes, so I can sigh resignedly at the predetermined nature of such mistakes.
Thanks for this. I'd rate the ideas on Moral Circle Expansion & Good Humanities Research first, because I'm quite uncertain about them.
I liked the idea about Forecasting, too - I'd like to see what comes from this.
Though I would like to see some ITN assessment of After the Apocalypse & Shelters MVP, my priors are that these are not very cost-effective - at least for individuals. It seems usually better to invest my resources in my health and sanity than in acquiring survivalist skills or equipment; maybe some "cheap survivalist tips" are cost-effective ("you can't have too much canned food"), but even so it'd likely be more cost-effective to invest in a group that can survive a catastrophe and restart civilization (and then have contact with EA ideas) than in myself - after all, this is a commons problem.
This is such a great idea!
I have a laundry list of blog posts I'd like to write as well and I imagine many others do too. Would it maybe make sense to make a monthly mega thread where people can share their blog post ideas?
Wrt your ideas, I would be super excited to read "How to get good at forecasting"!
It would be great to have more people posting lists of blog posts ideas - people could coordinate, maybe even collaborate.
Just my two cents, but in my view, here is how valuable these forum posts would be:
I think I roughly agree with your ranking, Brian!
Speaking for myself here, I'd be very interested in reading a more in-depth critique of Moral Circle Expansion, and I'm open to changing my mind on that topic. Although I'm perhaps most interested in predictions of specific questions, like whether our descendants will care about the welfare of invertebrates and other wild animals, and (relatedly) whether sentience is likely to be the main determinant of moral concern in the future.
(Thanks Linch for a great post!)
Update: #5 has since been researched and explored by my colleague Lizka.
I think this is interesting in and of itself, but also related to something I haven't seen explored much in general: How important is it that EA ideas exist for a long time? How important is it that they are widely held? How would we package an idea to propagate through time? How could we learn from religions?
More directly to the topic: is this a point in favor of EAs forming a hub in New Zealand?
I've seen some discussion around this topic but I feel like it hasn't been satisfyingly motivated. For personal reasons I'd like to hear more about this.
I love the idea of this post! I'd be extremely excited to read the forecasting post and I think making that would be highly valuable. I'm not that interested in the others.
Btw, a way you could get more feedback on which of these posts readers would like you to write is to post each of these topics as a comment on this post, and let people upvote or strong-upvote the ones they are interested in reading or think are more valuable.
You can also have a separate comment that people could downvote, so they could offset the added karma you gain through using comments as a poll. I say this because some forum users think that using comments as a poll is an unfair way to gain karma, which is somewhat true.
The ideas that most interest me here are shelter MVPs and interviews with forecasters -- particularly the latter, since you have enough forecasting ability and experience to filter and contextualize the collected interviews, and most other people do not.
None of these seem like bad ideas, but the above two seem most actionable, and the MVP post in particular seems like it could draw a lot of useful commentary (because your ideas are great, or because someone thinks you are concretely "wrong on the internet", or both).