This is a cold take that's probably been said before, but I think it bears repeating occasionally, if only as a reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something similarly cold and rigid. And so it's flawed because it lacks the love, duty, "ethics of care," or concern for justice that leads people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like: "Well, you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about x-risk, and that probably makes sense for pragmatic reasons: it's a very good rebuttal to the "cold and heartless utilitarianism / Pascal's mugging" critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or that Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this. I wish more people responded to the "longtermism is cold and heartless" critique by making the case that, no, longtermism taken at face value is worth preserving precisely because it's the polar opposite of heartless. Caring about the world we leave for real people, with emotions and needs and experiences as real as our own, who may very well inherit our world but whom we'll never meet, is an extraordinary act of empathy and compassion — one that's way harder to access than the empathy and warmth we might feel for our neighbors.
Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel connected to and concerned about the wellbeing of others. I feel as though my heart has literally grown. I wanted to share this because I expect there are many others who are questioning whether to have children -- perhaps due to concerns about it limiting their positive impact, among other reasons. I'm just here to say it's been beautiful and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.
[crossposted from my blog; some reflections on developing different problem-solving tools]
When all you have is a hammer, everything sure does start to look like a nail. This is not a good thing.
I've spent a lot of my life variously
1) Falling in love with physics and physics fundamentalism (the idea that physics is the "building block" of our reality)
2) Training to "think like a physicist"
3) Getting sidetracked by how "thinking like a physicist" interacts with how real people actually do physics in practice
4) Learning a bunch of different skills to tackle interdisciplinary research questions
5) Using those skills to learn more about how different people approach different problems
While doing this, I've come to think that identity formation - especially identity formation as an academic - is about learning how to identify different phenomena in the world as nails (problems with specific characteristics) and how to apply hammers (disciplinary techniques) to those nails.
As long as you're just using your hammer on a thing that you're pretty sure is a nail, this works well. Physics-shaped hammers are great for physics-shaped nails; sociology-shaped hammers are great for sociology-shaped nails; history-shaped hammers are great for history-shaped nails.
The problem with this system is that experts only have hammers in their toolboxes, and not everything in the world is a nail. The desire to make everything into one kind of nail, where one kind of hammer can be applied to every problem, leads to physics envy, to junk science, to junk policy, to real harm. The desire to make everything into one kind of nail also makes it harder for us to tackle interdisciplinary problems - ones where lots of different kinds of expertise are required. If we can't see and understand every dimension of a problem, we haven't a hope in hell of solving it.
The biggest problems in the world today - climate breakdown, pandemic prevention, public health - are wicked problems, ones that
Just a prompt to say that if you've been kicking around an idea of possible relevance to the essay competition on the automation of wisdom and philosophy, now might be the moment to consider writing it up -- entries are due in three weeks.
American Philosophical Association (APA) announces two $10,000 AI2050 Prizes for philosophical work related to AI, with June 23, 2024 deadline:
https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/
https://www.apaonline.org/page/ai2050
https://ai2050.schmidtsciences.org/hard-problems/
Contra hard moral anti-realism: a rough sequence of claims
Epistemic and provenance note: This post should not be taken as an attempt at a complete refutation of moral anti-realism, but rather as a set of observations and intuitions that may or may not give one pause as to the wisdom of taking a hard moral anti-realist stance. I may clean it up to construct a more formal argument in the future. I wrote it on a whim as a Telegram message, in direct response to the claim
> "you can't find 'values' in reality."
Yet you can find valence in your own experiences (that is, you know from direct experience whether you like the sensations you are experiencing or not), and you can assume other people are likely to have a similar enough stimulus-valence mapping. (Example: I'm willing to bet 2k USD on my part against a single dollar of yours that if I waterboard you, you'll want to stop before 3 minutes have passed.)[1]
However, since we humans are bounded imperfect rationalists, trying to explicitly optimize valence is often a dumb strategy. Evolution has made us not into fitness-maximizers, nor valence-maximizers, but adaptation-executers.
"values" originate as (thus are) reifications of heuristics that reliably increase long term valence in the real world (subject to memetic selection pressures, among them social desirability of utterances, adaptativeness of behavioral effects, etc.)
If you find yourself terminally valuing something that is not someone's experienced valence, then at least one of these propositions is likely true:
* A nonsentient process has at some point had write access to your values.
* What you value is a means to improving somebody's experienced valence, and so are you now.
Crossposted from LessWrong.
1. ^
In retrospect, making this proposition was a bit crass on my part.
My current practical ethics
The question of how we should make decisions under epistemic uncertainty and normative diversity of opinion comes up often. Since I need to make such decisions every day, I had to develop a personal system, however inchoate, to assist me.
A concrete (or granite) pyramid
My personal system can be thought of as a pyramid.
1. At the top sits some sort of measurement of success. It's highly abstract and impractical. Let's call it the axiology. It's really a collection of all the axiologies I relate to, including ones that track the amount of frustrated preferences and suffering across our world history. It also deals with hairy questions such as how to weigh Everett branches morally, and with infinite ethics.
2. Below that sits a kind of mission statement. Let's call it the ethical theory. It's just as abstract, but it is opinionated about the direction in which to push our world history. For example, it may desire a reduction in suffering, though for others this floor needn't be consequentialist in flavor.
3. Both of these abstract floors of the pyramid are held up by a mess of principles and heuristics at the ground-floor level, which guide the actual implementation.
The ground floor
The ground floor of principles and heuristics is really the most interesting part for anyone who has to act in the world, so I won't further explain the top two floors.
The principles and heuristics should be expected to be messy. That is, I think, because they are by necessity the result of an intersubjective process of negotiation and moral trade (positive-sum compromise) with all the other agents and their preferences. (This should probably include acausal moral trades like Evidential Cooperation in Large Worlds.)
They should also be expected to be messy because these principles and heuristics have to satisfy all sorts of awkward criteria:
1. They have to inspire cooperation or at least not generate overwhelming opposition.
2. They have to be easily communicable so peop
Are your values about the world, or the effects of your actions on the world?
An agent who values the world will want to affect the world, of course. The two come to the same thing if the value functions are linear, but if they're concave...
Then there is a difference.[1]
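A rough way to see this (my sketch, writing $v$ for the value function and $\Delta_i$ for the effect of action $i$ on the world):

$$v(x) = ax \text{ (linear)}: \quad \sum_i v(\Delta_i) = v\Big(\sum_i \Delta_i\Big) \quad \text{and} \quad \mathbb{E}[v(X)] = v(\mathbb{E}[X])$$

$$v(x) = \sqrt{x} \text{ (concave)}: \quad \sum_i v(\Delta_i) \neq v\Big(\sum_i \Delta_i\Big) \quad \text{and} \quad \mathbb{E}[v(X)] \le v(\mathbb{E}[X]) \text{ (Jensen)}$$

so valuing each action's effect and valuing the resulting state of the world only come apart once the value function stops being linear.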
If an agent has a concave value function which they use to score each individual action, say √L where L is the number of lives saved by the action, then that agent would prefer a 90% chance of saving 1 life (for √1 × 0.9 = 0.9 utility) over a 50% chance of saving 3 lives (for √3 × 0.5 ≈ 0.87 utility). The agent would have this preference each time they were offered the choice.
This would be odd to me, partly because it implies that if they were presented with this choice enough times, they would appear, overall, to prefer an x% chance of saving n lives to an x% chance of saving more than n lives (or rather, the probability-distribution version of that statement instead of the discrete version).
For example, after taking the first option 10 times, the probability distribution over the number of lives saved looks like this (left side). If they had instead taken the second option 10 times, it would look like this (right side).
(Note: Claude 3.5 Sonnet wrote the code to display this and to calculate the expected utility, so I'm not certain it's correct. Calculation output and code in footnote[2].)
Now, if we prompted the agent to choose between these two probability distributions, they would assign an average utility of 3.00 to the one on the left and 3.82 to the one on the right, which from the outside looks like a contradiction of their earlier sequence of choices.[3]
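For what it's worth, here's a minimal sketch of that calculation (my own reconstruction rather than the footnoted code), assuming the ten rounds are independent and the aggregate utility is √(total lives saved):

```python
# Minimal sketch (not the footnoted code): assumes independent rounds and
# utility = sqrt(total lives saved) when scoring a whole distribution.
from math import comb, sqrt

def per_action_utility(p, lives):
    """Expected utility of a single gamble when sqrt is applied per action."""
    return p * sqrt(lives)

def aggregate_utility(n, p, lives):
    """Expected sqrt(total lives saved) over n independent repetitions of a
    gamble that saves `lives` lives with probability p (binomial successes)."""
    return sum(
        comb(n, k) * p**k * (1 - p)**(n - k) * sqrt(k * lives)
        for k in range(n + 1)
    )

# One round, sqrt applied per action: option 1 scores higher.
print(per_action_utility(0.9, 1))     # 0.90
print(per_action_utility(0.5, 3))     # ~0.87

# Ten rounds, sqrt applied to the distribution of totals: option 2 scores higher.
print(aggregate_utility(10, 0.9, 1))  # ~3.00
print(aggregate_utility(10, 0.5, 3))  # ~3.82
```

The same per-action rule that picks option 1 every round thus assigns a lower utility to its own ten-round outcome distribution than to option 2's.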
We can generalize beyond this example and say that, in situations like this, the agent's best action is to precommit to taking the second option repeatedly.[4]
We can also generalize further and say that for an agent with a concave function used to pick individual actions, the initial action which scores the highest would be to self-modify into (or commit to