Jim Buhler

Researcher (Space ethics, AI governance, GPR) @ Independent
462 karma · Joined · Working (0-5 years) · Toulouse, France
www.jimbuhler.site

Sequences

What values will control the Future?

Comments

Also +1 for David Thorstad, assuming we're interested in the best existing critiques of longtermism/X-risk reduction. I don't see anyone remotely as well suited as him on this topic.

> over 16% of people agree or strongly agree that they “would like to make some people suffer even if it meant that I would go to hell with them”

This and the other selected findings blew my mind. This is so concerning...

Thanks for this very comprehensive overview! :)

Nice, thanks for sharing! I'll actually give you a different answer than last time, after thinking about this a bit more (and maybe understanding your questions better). :)

> Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?

Not sure that's what you meant, but I don't think the effects of these decay in the sense that they have a big short-term impact and a negligible long-term impact (this is known as the "ripple in a pond" objection to cluelessness [1]). I think their long-term impact is substantial, but that we just have no clue whether it's good or bad, because it depends on so many long-term factors which the people carrying out these short-term interventions ignore and/or can't possibly estimate in an informative, non-arbitrary way.

So I don't know how to respond to your first question, because it seems to implicitly assume something I find impossible and that goes against how causality works in our complex world (?)

> Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall.

Answering the second question:
1. Yes, one could argue that. 
2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on what kinds of wellbeing matter. Maybe we're neglecting a crucial consideration such as "with the population increasing, there have been more people with cluster headaches, and these are so bad that they outweigh all the good stuff". Maybe we're totally missing a similar kind of crucial consideration I can't think of.
3. Maybe most importantly, in the real world outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless, because I could ignore humans' impact on aliens and other non-humans.

And to expand on 1:

> Do we have examples where the posterior counterfactual effect was positive at first, but then became negative instead of decaying to 0?

- Some AI behaved very well at first and did great things, and then there was some distributional shift and it did bad things.
- Technological development arguably improved everyone's lives at first, and then it enabled things like the manufacture of torture instruments and widespread animal farming.
- Humans were incidentally reducing wild animal suffering by deforesting, but then they started becoming environmentalists and rewilding.
- Alice's life seemed wonderful at first, but she eventually came down with a severe chronic mental illness.
- Some pill helped people like Alice at first but then made their lives worse.
- The Smokey Bear campaign reduced wildfires at first, and then it turned out it increased them.

[1] See e.g. James Lenman's and Hilary Greaves' work on cluelessness for rejections of this argument.

Interesting! I guess one could have made a similar observation/forecast in the past while thinking about whether some people would settle on (quasi-)uninhabited continents.

Do you think there are important differences to note between this case and the one you discuss, besides "settling outside our solar system is (ofc) more challenging than settling other continents on Earth"?

I don't think Andreas Mogensen ever gave a talk on his (imo underrated) work on maximal cluelessness, which has staggering implications for longtermists. And I find all the arguments that have been given against his conclusions (see e.g. the comments under the above-linked post or under this LW question from Anthony DiGiovanni) quite unconvincing.

> My main crux regarding the inter-civ selection effect is how fast space colonization will get. E.g., if it's possible to produce small black holes, you can use them for incredibly efficient propulsion, and even just slightly grabby civs still spread at approximately the speed of light - roughly the same speed as extremely grabby civs. Maybe it's also possible with fusion propulsion, but I'm not sure - you'd need to ask astro nerds.

I haven't thought about whether this should be the main crux but very good point! Magnus Vinding and I discuss this in this recent comment thread.

> I guess the main hope is not that morality gives you a competitive edge (that's unlikely) but rather that enough agents stumble on it anyway, e.g. by realizing open/empty individualism is true through philosophical reflection.

Yes. Related comment thread I find interesting here.

Yeah, so I was implicitly assuming that even the fastest civilizations don't "easily" reach the absolute maximum physically possible speed, such that what determines their speed is the ratio of resources spent on spreading as fast as possible [1] to resources spent on other things (e.g., being careful and quiet).

I don't remember thinking about whether this assumption is warranted, however. If we expect all civs to reach this maximum physically possible speed without needing to dedicate 100% of their resources to it, this drastically dampens the grabby selection effect I mentioned above.

[1] Which, if maximized, would make the civilization loud by default in the absence of resources spent on avoiding this, I assume. (harfe in this thread gives a good specific instance backing up this assumption.)

Interesting! Fwiw, the best argument I can immediately think of against silent cosmic rulers being likely/plausible is that we should expect them to expand more slowly than typical grabby aliens and therefore be less "fit" and selected against (see my Grabby Values Selection Thesis -- although it seems to be more about expansion strategies than about values here).

Not sure how strong this argument is, though. The selection effect might be relatively small (e.g. because being quiet is cheap, or because non-quiet aliens get blown up by the quiet ones that were "hiding"?).

Interesting, thanks for sharing your thoughts on the process and stuff! (And happy to see the post published!) :)
