My name is pronounced somewhat like 'yuh-roon'.
he/him
We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment.
This may sound like a lot, but I think it's likely that four years from now, 20% of currently secured compute will be only a tiny fraction of what they'll have secured by then.
Summary: I propose a view that combines classical utilitarianism with a rule against ending streams of consciousness.
Under classical utilitarianism, the only things that matter are hedonic experiences.
People with a person-affecting view object to this, but that view comes with issues of its own.
To resolve the tension between these two philosophies, I propose adding a rule to classical utilitarianism that disallows directly ending streams of consciousness (SOCs).
This bridges the gap between the person-affecting view and the 'personal identity doesn't exist' view, and tries to solve some population-ethics issues.
I like the simplicity of classical utilitarianism. But I have a strong intuition that a stream of consciousness is intrinsically valuable, meaning it shouldn't be stopped or destroyed. Creating a new stream of consciousness isn't intrinsically valuable (beyond the utility it creates).
A SOC isn't infinitely valuable. Here are some exceptions:
1. When not ending a SOC would result in more SOCs ending (see the trolley problem): basically, you want to break the rule as little as possible.
2. The SOC experiences negative utility and there is no sign it will become positive (see euthanasia).
3. Ending the SOC will create at least 10x its utility (or some other critical level).
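For concreteness, here is one rough way to write these exceptions down. The notation, the variables, and the threshold $k$ are just my own shorthand for the three exceptions above, not a worked-out theory. Let $e$ be the number of SOCs an act directly ends, $p$ the number of SOC endings it prevents, $u_{\mathrm{SOC}}$ the remaining expected utility of the SOC being ended, and $\Delta U$ the total utility the act creates:

$$\text{ending a SOC is permissible} \iff (p > e) \;\lor\; (u_{\mathrm{SOC}} < 0 \text{ with no prospect of recovery}) \;\lor\; (\Delta U \ge k \cdot u_{\mathrm{SOC}}), \quad k \approx 10$$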
I believe this is compatible with the non-identity problem (it's still unclear who counts as you if you're duplicated, or whether a version of you 20 years older is still you).
But I've never felt comfortable with the teleportation argument, and this intuition explains why: a SOC is being ended.
So generally this means: making the current population happier (or making sure fewer people die) > increasing the number of people.
Future people don't have SOCs as they don't exist yet, but it's still important to make their lives go well.
Say we live in a simulation. If our simulation gets turned off and is replaced by a different one of equal value (in terms of pain/pleasure), something of incredible value still seems to be lost.
That said, if the simulation is replaced by a sufficiently more valuable one, the swap could be good, hence exception 3. The exception also makes sure you can kill someone to prevent a future in which new people never come into existence (for example: someone is about to spread a virus that would make everyone incapable of reproducing).
I don't think adding this rule changes the expected-value calculations around increasing the pain/pleasure of present and future beings, as long as no streams of consciousness are ended (though I could be wrong).
This rule doesn't solve the repugnant conclusion, but I don't think it's repugnant in the first place. My bar for a life worth living seems to be higher than most people's.
How I came to this: I really liked this forum post arguing "making the current population happier > increasing the number of people". But if I agree with that, it means there's something of value besides pure pleasure/pain. This is my attempt at finding out what that is.
One possible major objection: if you give birth, you're essentially creating a new SOC that will eventually be ended (as long as aging isn't solved). Perhaps this is solved by saying you can't directly end a stream of consciousness but may ignore second- and third-order effects (though I'm not sure how to make sense of that).
I'd love to hear your thoughts on these ideas. I don't think they're good or polished enough to deserve a full forum post, and I wouldn't be surprised if the first comment under this shortform completely shattered the idea.
If we had to pick, I'd personally much rather have agree/disagree voting on posts than these reactions; I'm not sure the reactions add any useful information. But I'm happy to see you're trying to make things simpler than at LessWrong. I'm curious what the other reasons are for not having agree/disagree on posts. I frequently upvote posts I disagree with and would like to express that disagreement without writing a comment. Maybe agree/disagree voting could be optionally enabled by post authors?
Wow! Not sure what to conclude from that.