My favorite jargony phrase of the ~week is "missing mood."[1]
How I've been using it:[2]
If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.
Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.
Examples
1. Immigration restrictions
An example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.
2. Long content
The example from Ben — a simplified sketch of our conversation:
- Me: How seriously do you hold your belief that “more people should have short attention spans”? And that long content is bad?
- Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)
(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)
—
3-6. Selective spaces, transparency, cause prioritization, and slowing AI
I had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:
- Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.[3]
- I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does.[4] But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:
- When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of “transparency good.” When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."
- The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t have a lot of trust/familiarity with them and their thinking.[5]) (Or an alternative framing: if I’m not sad about not prioritizing transparency when I decide not to go for it, I should worry that my mindset has turned into something like “why are people griping about transparency — this is my business?”)
- Cause prioritization. (If you're working on civilizational resiliency and you're not feeling at least a bit sad about the fact that you can't use that time to help people struggling today, then your reasons might not be what you think.)
- Slowing down AI — I really appreciated this recent post.
- ^
H/t @Ben_West for using it in a way that made me actually pay attention to it as a useful phrase.
- ^
This is my attempt at a sketch of this phrase, but I might actually be misusing it. Please feel free to clarify or disagree. I think I'm focusing on a narrow use case in this post, but uses that are broader than this haven't properly clicked for me.
- ^
I appreciated Ruby's comment here: "I don't feel great about being the one to decide whether or not a person's post or comment or self belongs on LessWrong. I will make mistakes. But also – tradeoffs – I don't want LessWrong to get massively diluted because I wasn't willing to reject enough people."
- ^
(And sometimes it's the opposite.)
- ^
Trust/familiarity lets you have conversations that are higher context; you know that the person you’re talking to shares a lot of your values. (Beware inferential distances and illusions of transparency, though — I think it can be useful to make things explicit even when you think they might be obvious.)
In fact, when there’s some expectation of mutual trust, explicitly caveating or flagging tradeoffs might have a negative effect, too; it can make you appear defensive in a way that signals that you don’t expect the other person to trust you enough to know that you care about the relevant tradeoff. (Imagine my brother and I had an exchange where I said that I might not visit my family for my mom’s birthday, and I really stressed the fact that I care about my mom and wanted to see her. I expect that my brother would be confused that I was belaboring that point.) H/t @Clifford for this caveat.
In general I use and like this concept quite a lot, but someone else advocating it does give me the chance to float my feelings in the other direction:
I think sometimes, when I want to reach for missing moods as a concept to explain to my interlocutor what I think is going wrong in our conversation, I end up feeling like I'm saying "I'm demanding that you be sad because I would feel better if you were," which I want to be careful about imposing on people. It sometimes also feels like I'm assuming we have the same values, in a way I'd want to do more upfront work before relying on.
More generally, I think it's good to notice the costs of the place you're taking on some tradeoff spectrum, but also good to feel good about doing the right thing, making the right call, balancing things correctly etc.
I predict with high uncertainty that this post will have been very useful to me. Thanks!
Here's a potential missing mood: if you read/skim a post and you don't go "ugh that was a waste of time" or "wow that was worth reading"[1], you are failing to optimise your information diet and you aren't developing intuition for what/how to read.
This is importantly different from going "wow that was a good/impressive post". If you're just tracking how impressed you are by what you read (or how useful you predict it is for others), you could be wasting your time on stuff you already know and/or agree with. Succinctly, you need to track whether your mind has changed: track the temporal difference.
I don’t see why they would feel anguish if they don’t believe in the first place that open borders would enrich mankind and end poverty. I guess it works if they value something else, like cultural homogeneity. But even then, it seems reasonable not to feel anguish about tradeoffs one has to make? Similarly, EAs learned that it’s unwise to feel anguish about the tradeoff between spending on yourself and donating, and that it’s better to reflect once, settle on a budget you feel good about, and be done with it.