Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take:

  • The US is falling apart rapidly (on the scale of years), as evident in US politics departing from sanity and honor, sharp polarization, violent civil unrest, hopeless pandemic responses, ensuing economic catastrophe, one in a thousand Americans dying by infectious disease in 2020, and the abiding popularity of Trump in spite of it all.
  • Western civilization is declining on the scale of half a century, as evidenced by its inability to build things it used to be able to build, and the ceasing of apparent economic acceleration toward a singularity.
  • AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots:
    • ‘Aligned’ AI is necessary for a non-doom outcome, and hard.
    • Arms races worsen things a lot.
    • The order of technologies matters a lot / who gets things first matters a lot, and many groups will develop or do things as a matter of local incentives, with no regard for the larger consequences.
    • Seeing more clearly what’s going on ahead of time helps all efforts, especially in the very unclear and speculative circumstances (e.g. this has a decent chance of replacing subplots here with truer ones, moving large sections of AI-risk effort to better endeavors).
    • The main task is finding levers that can be pulled at all.
    • Bringing in people with energy to pull levers is where it’s at.
  • Institutions could be way better across the board, and these are key to large numbers of people positively interacting, which is critical to the bounty of our times. Improvement could make a big difference to swathes of endeavors, and well-picked improvements would make a difference to endeavors that matter.
  • Most people are suffering or drastically undershooting their potential, for tractable reasons.
  • Most human effort is being wasted on endeavors with no abiding value.
  • If we take anthropic reasoning and our observations about space seriously, we appear very likely to be in a ‘Great Filter’, which appears likely to kill us (and unlikely to be AI).
  • Everyone is going to die, the way things stand.
  • Most of the resources ever available are in space, not subject to property rights, and in danger of being ultimately had by the most effective stuff-grabbers. This could begin fairly soon in historical terms.
  • Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)
  • There are vast quantum worlds that we are not considering in any of our dealings.
  • There is a strong chance that we live in a simulation, making the relevance of each of our actions different from that which we assume.
  • There is reason to think that acausal trade should be a major factor in what we do, long term, and we are not focusing on it much and are ill-prepared.
  • Expected utility theory is the basis of our best understanding of how best to behave, and there is reason to think that it does not represent what we want. For instance, Pascal’s mugging, or the option of destroying the world with all-but-one-in-a-trillion probability in exchange for a proportionately greater utopia (see the illustrative sketch after this list).
  • Consciousness is a substantial component of what we care about, and we not only don’t understand it, but are frequently convinced that it is impossible to understand satisfactorily. At the same time, we are on the verge of creating things that are very likely conscious, and so being able to affect the set of conscious experiences in the world tremendously. Very little attention is being given to doing this well.
  • We have weapons that could destroy civilization immediately, which are under the control of various not-perfectly-reliable people. We don’t have a strong guarantee of this not going badly.
  • Biotechnology is advancing rapidly, and threatens to put extremely dangerous tools in the hands of personal labs, possibly bringing about a ‘vulnerable world’ scenario.
  • Technology keeps advancing, and we may be in a vulnerable world scenario.
  • The world is utterly full of un-internalized externalities and they are wrecking everything.
  • There are lots of things to do in the world, we can only do a minuscule fraction, and we are hardly systematically evaluating them at all. Meanwhile massive well-intentioned efforts are going into doing things that are probably much less good than they could be.
  • AI is a powerful force for good, and if it doesn’t pose an existential risk, the earlier we make progress on it, the faster we can move to a world of unprecedented awesomeness, health and prosperity.
  • There are risks to the future of humanity (‘existential risks’), and vastly more is at stake in these than in anything else going on (if we also include catastrophic trajectory changes). Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously.
  • The world is controlled by governments, and really awesome governance seems to be scarce and terrible governance common. Yet there is probably a lot of academic theorizing on governance institutions to draw on, and a single excellent government based on scalable principles might have influence beyond its own state.
  • The world is hiding, immobilized and wasted by a raging pandemic.
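
For concreteness, here is a minimal sketch of the expected-utility arithmetic behind the Pascal’s mugging worry above. Every number in it is a made-up illustration, chosen only to exhibit the structure of the problem:

```python
# Illustrative sketch: naive expected-utility maximization under Pascal's mugging.
# All numbers are made-up assumptions, not claims from any source.

p_promise_real = 1e-12   # assumed tiny probability the mugger's promise is genuine
u_promised = 1e15        # assumed astronomically large promised utility
u_keep_wallet = 10.0     # assumed utility of simply keeping your money

ev_pay = p_promise_real * u_promised   # 1e3, dominated by the huge promised payoff
ev_refuse = u_keep_wallet              # 10.0

# Naive expected-utility maximization says to pay the mugger, which many take
# as evidence that the theory fails to represent what we actually want.
print(ev_pay > ev_refuse)  # True
```

The world-destruction gamble has the same structure: as long as the promised utopia’s utility grows faster than the success probability shrinks, the gamble’s expected utility can be made to exceed that of the status quo.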

It’s a draft. What should I add? (If, in life, you’ve chosen among ways to improve the world, is there a simple story within which your choices make particular sense?)

Comments

I think it's also worth including the trillions of sentient farmed animals that are and will be exploited and put through intense suffering for the rest of the future, as demand for animal products continues to increase. Also worth noting is the gigantic scale of suffering among wild animals, most of whom suffer and die in painful ways soon after coming into existence.

Some things worth adding might be:

  • Several Asian economies are growing rapidly, and China is on track to become a major world power sometime this century (worth including since you mention the apparent decline of the US/West)
  • There is massive global inequality, and while many lower income countries are now growing more steadily they are not projected to narrow the north/south wealth divide anytime soon
  • Humans are raising billions of animals for food in very poor conditions

>There is massive global inequality...

One could add: "and disparities in power might increase and lead us to some sort of techno-feudalism."

It's really cool to see these laid out next to one another like this! Thanks for posting, Katja :)

We (most humans in most of the world) lived or are living in a golden age, with more material prosperity and better physical health* than ever before. 2020 was shitty, and the second derivative might be negative, but the first derivative still looks clearly positive on the timescale of decades, and the baseline (measured against history, not a counterfactual) is really high. On a personal level, my consumption is maybe 2 orders of magnitude higher than that of my grandparents at my age (it might be closer to 3 if I were less EA). So I'd be interested in adding a few sentences like:

  • For the first time in recorded history, the vast majority of humans are much richer than their ancestors.
  • Even in the midst of a raging pandemic, human deaths from infectious disease still account for less than 1/3 of all deaths.
  • People have access to more and better information than ever before.

I think as EAs, it's easy to have a pretty negative view of the world (because we want to focus on what we can fix, and also pay attention to a lot of things we currently can't fix, in the hope that one day we can figure out how to fix them), but obviously there is still a lot of good in the world (and there might be much more to come), and it might be valuable to have concrete reminders of what we ought to cherish and protect.

* I think it's plausible/likely that we're emotionally and intellectually healthier as well, but this case is more tenuous. 

Related to wealth: I recently heard Tyler Cowen describing himself as an "information billionaire" and hoping to become an information trillionaire. I wonder how one would quantify it, but it seems true that our ability to understand the world is also growing rapidly.

Yeah, I agree with that. 

On this, I really like this brief post from Our World in Data: The world is much better; The world is awful; The world can be much better. (Now that I have longtermist priorities, I feel like another useful slogan in a similar spirit could be something like "The world could become so much better; The world could end or become so much worse; We could help influence which of those things happens.")

>with more material prosperity and better physical health* than ever before

I agree. But you see, in some population dynamics, variation is correlated with increased risk of extinction.

>my consumption is maybe 2 orders of magnitude higher than that of my grandparents  at my age

That might be precisely part of the problem. We are just starting to be seriously concerned about the externalities of this increase in consumption, and a good deal of it is conspicuous, or spent on things people often regret (over)consuming (like soft drinks, addictive stuff, or just time on social media) - while a lot of people still starve.

Thanks for your comment!

>I agree. But you see, in some population dynamics, variation is correlated with increased risk of extinction.

I think I don't follow your point. If I understand correctly, the linked paper (at least from the abstract, I have not read it) talks about population-size variation, which has an intuitive/near-tautological relationship with increased risk of extinction, rather than variation overall. 

>That might be precisely part of the problem.

Sorry, can you specify more what the problem is? If you mean that the problem is an inefficient distribution of limited resources, I agree that it's morally bad that I have access to a number of luxuries while others starve, and the former is causally upstream of the latter. However, in the long run we can only get maybe 1-2 orders of magnitude of gains from a more equitable distribution of resources globally (though some rich individuals/gov'ts can create more good than that by redistributing their own resources), but we can get much more through other ways to create more stuff/better experiences (rough arithmetic sketched below).
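
As a back-of-the-envelope check on that "1-2 orders of magnitude" figure, here is a rough sketch; all inputs are approximate circa-2020 ballpark values, used purely for illustration:

```python
# Rough check on the gains available from perfectly equal redistribution.
# All figures are approximate circa-2020 ballpark values, for illustration only.

world_gdp = 85e12    # gross world product, USD (approximate)
population = 7.8e9   # world population (approximate)

equal_share = world_gdp / population   # ~11,000 USD per person per year
poverty_line_income = 1.90 * 365       # ~700 USD/year at the $1.90/day line

# Perfectly equal redistribution would raise the very poorest incomes by
# roughly this factor -- a bit over one order of magnitude:
print(equal_share / poverty_line_income)  # ~16
```

A factor of roughly 16 sits just above one order of magnitude, consistent with the 1-2 orders of magnitude claim.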

>We are just starting to be seriously concerned about the externalities of this increase in consumption

Who's this "we"? :P

Maybe: the smartest species the planet and maybe the universe has produced is in the early stages of realising it's responsible for making things go well for everyone.

Worse: most of the members of that species don't realize this responsibility, and indeed consistently act against it, to satisfy either self-regarding or parochial preferences.

  • Most human effort is being wasted on endeavors with no abiding value.
  • Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)

Things certainly feel very doom & gloom right now, but I still think there is scope for optimism in the current moment. If I had been asked in February last year what the best and worst outcomes of the pandemic would be a year later, I would probably have guessed a whole lot worse than what turned out to be the case. I also don't think that we are living in some special age of incompetent governance right now, and I would argue that throughout history we have come up with policies that have been disastrously wrong one way or the other. Competence has appeared elsewhere - as Tyler Cowen has argued, businesses seem unusually competent in the current crisis compared to governments. Where would we have been without supermarkets' supply chains, Amazon, Pfizer, Zoom etc. during the pandemic? According to this article there are more reasons to be optimistic than pessimistic right now:

  • As people lose jobs and income, many go hungry. Projections from the Food and Agriculture Organization point to an increase in the global share of chronically undernourished people from 8.9 to around 9.9 per cent. A terrible outcome, but it still represents a reduction by a quarter since 2000.
  • It took mankind 3,000 years to develop vaccines against polio and smallpox. Moderna designed a vaccine against Covid-19 in two days. Had we faced this new coronavirus in 2005, we would not have had the technology to even imagine such mRNA vaccines; if it had appeared in 1975, we would not have had the ability to read the genome of the virus; and if it had come in 1950, we would not have had a single ventilator on the planet.
  • [T]he progress of the last few decades has been so fast, and human creativity under duress so impressive, that even major setbacks only push us back a few years. Only three years in history have been better in terms of GDP per capita, extreme poverty and child mortality – 2017, 2018 and 2019.

Thanks for doing this! 

One suggestion - I think it would be cool to have more links included so that people can read more if they're interested. 

The following statements from Luke Muehlhauser feel relevant:

>Basically, if I help myself to the common (but certainly debatable) assumption that “the industrial revolution” is the primary cause of the dramatic trajectory change in human welfare around 1800-1870, then my one-sentence summary of recorded human history is this:

>Everything was awful for a very long time, and then the industrial revolution happened.

(The linked post provides interesting graphs and discussion to justify/flesh out this story.)

Though I guess that's less of a plot of the present moment, and more of a plot of the moment's origin story (with hints as to what the plot of the present moment might be).

Through overpopulation and excessive consumption, humanity is depleting its natural resources, polluting its habitat, and causing the extinction of other species. Continuing like this will lead to the collapse of civilisation and likely our own extinction.

This one seems very common to me, and sadly people often feel fatalistic about it. 

Two things that feeling might come from:

  • People rarely talking about aspects of it which are on a positive trajectory (e.g. the population of whales, acid rain, CFC emissions, UN population projections). 
  • The sense that there are so many related things to solve - such that even if we managed to fix (say) climate change, we'd still see (say) our fisheries cause the collapse of the ocean's ecosystem.

Thank you, I found this pretty interesting. Of course no single one-sentence narrative will capture everything that goes on in the world, but in practice we need to reduce complexity and focus, and may implicitly adopt narratives like these anyway, so I found it interesting to reflect on them explicitly.

FWIW, the one that resonates most for me personally is:

  • There are risks to the future of humanity (‘existential risks’), and vastly more is at stake in these than in anything else going on (if we also include catastrophic trajectory changes). Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously.

A lot of the ones appealing to 'weird' issues (acausal trade, quantum worlds, simulations, ...) ring true and important to me, but seem less directly relevant to my actual actions.

My reaction to a lot of the 'generic' ones (externalities, wasted efforts, ...) is something like: "This sounds true, but I'm not sure why I should think I'll be able to do something about this."

Another possible story, which could underpin some efforts along the lines of patient altruism / punting to the future: "There will probably be key actions that need taking in the coming decades, centuries, or millennia, which will have a huge influence over the whole rest of the future. There are some potential ways to set up future people to take those actions better in expectation, yet very few people are thinking strategically and working intensely on doing that. So that's probably the best thing we can do right now."

Those "potential ways" of punting to the future could be things like building a community of people with good values and epistemics or increasing the expected future wealth or influence of such people.

And this story could involve thinking there will be a future time that's much "higher leverage" / more "hingey" / more "influential", or thinking that there are larger returns to some ways of "punting to the future", or both. 

(See also.)

(Personally, I find this sort of story at least plausible, and it influences me somewhat.)

>The US is falling apart rapidly (on the scale of years), as evident in US politics departing from sanity and honor, sharp polarization, violent civil unrest, hopeless pandemic responses, ensuing economic catastrophe, one in a thousand Americans dying by infectious disease in 2020, and the abiding popularity of Trump in spite of it all.


(I note that you're just outlining potential worldviews, not necessarily defending them)

I don't think this is all that unique to the US. I think at least 5 out of 7 of these things could also be applied to the UK and France; the UK has a higher COVID-19 death rate than the US and there has been ongoing civil unrest in France for over two years now. In fact, the US is outside the top 10 in terms of COVID-19 deaths per capita.

This doesn't mean I'm pessimistic about all of those countries too - it just makes me think that this is how the world looks when we experience a pandemic (and... use Twitter?). 

I'm curious if there's a point about energy use that's large enough to be added to the list. Intuitively I think no (for the same reason that climate change doesn't seem as important as the above points), but on the scale of centuries, the story of humanity is intertwined with the story of energy use, so perhaps on an outside view this is just actually really underrated and important.

Infinite Ethics is solved by LDT btw. The multiverse is probably infinite (I don't know where this intuition comes from but come it does), but if so, there are infinite instances of you strewn through it, and you are effectively controlling all of them acausally. Some non-zero measure of all of that is entangled with your decisions.

Personally, the simple stories that I pretty much endorse, and that are among the stories within which my choices would make sense, are basically "low-confidence", "expected value", and/or "portfolio" versions of some of these (particularly those focused on existential risks). One such story would be:

>There's a non-trivial chance that there are risks to the future of humanity (‘existential risks’), and that vastly more is at stake in these than in anything else going on. Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously. So, in expectation, it'd be a really, really good idea if some people acted to reduce these risks.

("Non-trivial" probably understates my actual beliefs. When I forced myself to try to estimate total existential risk by 2120, I came up with a very tentative 13%. But I think I might behave similarly even if my estimate was quite a bit lower.)

What I mean by "portfolio" versions is basically that I think I'd endorse tentative versions of a wide range of the stories you mention, which leads me to think there should be at least some people focused on basically acting as if each of those stories are true (though ideally remembering that that's super uncertain). And then I can slot into that portfolio in the way that makes sense on the margin, given my particular skills, interests, etc.

(All that said, I think there's a good argument for stating the stories more confidently, simply, and single-mindedly for the purposes of this post.)

>Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)

I wonder if, in this context, metaethical discussions are overrated. Even if the philosophical debates that open the door to nihilism and are endemic in the rationalist community - like Pascal’s mugging, infinite utility, Boltzmann brains (or any simulation / Platonic-cave-like reasoning), etc. - are serious philosophical conundrums, they don't seem (at least from a pragmatic perspective, taking normative uncertainty analysis into account) to entail any relevant change of course in the foreseeable future. I mean, nihilism might be true, but unless you’re certain about it, it doesn’t seem to be practically relevant for decision-making.

Another potential story could go something like this: "Advances in artificial intelligence, and perhaps some other technologies, have begun to have major impacts on the income, wealth, and status of various people, increasing inequality and sometimes increasing unemployment. This then increases dissatisfaction and instability with our political and economic systems. These trends are all likely to increase in future, and this could lead to major upheavals and harms."

I'm not sure if all those claims are accurate, and don't personally see that as one of the most important stories to be paying attention to. But it seems plausible and somewhat commonly believed among sensible people.

>AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots: ...

I think there are plausible and plausibly important plots similar to this, and subplots similar to the subplots below it, but that differ in a few ways from what's stated there. For example, I think I'm more inclined towards the following generalised version of that story:

>AI systems will control the future or simply destroy our future, and how our actions influence the way that plays out is the only thing about our time that will matter in the long run. Major subplots: ...

This version of the story could capture: 

  • The possibility that the AI systems rapidly lead to human extinction but then don't really cause any other major things in particular, and have no [other] goals
    • I feel like it'd be odd to say that that's a case where the AI systems "control the future"
  • The possibility that the AI systems who cause these consequences aren't really "agents" in a standard sense
  • The possibility that what matters about our time is not simply "which [agents] we create", but also things like when and how we deploy them and what incentive structures we put them in

One thing that that "generalised story" still doesn't clearly capture is the potential significance of how humans use the AI systems. E.g., a malicious human actor or state could use an AI agent that's aligned with the actor, or a set of AI services/tools, in ways that cause major harm. (Or conversely, humans could use these things in ways that cause major benefits.)

I recommend this web page for a narrative on what's happening in our world in the 21st century. It covers many themes such as the rise of the internet, the financial crisis, covid, global warming, AI and demographic decline.

Really, thanks for the post. I think it's quite important to have such a list.

  • If we take anthropic reasoning and our observations about space seriously, we appear very likely to be in a ‘Great Filter’, which appears likely to kill us (and unlikely to be AI).

I’m not sure if we could say “very likely,” though the odds are surely relevant. I'm no expert, but I guess the case about the solution to the Fermi paradox is still open, ranging from what probability distribution one uses to model the problem, to our location in the Milky Way. For instance, being “close” to its border might make it easier for us to survive extreme events happening in more central (and crowded) regions, but also harder to spot activity on the other side of the galaxy.

And, if there’s a Great Filter ahead, I think one can say “it’s unlikely to be AI” only in the same sense we can say “Team A is the favorite, but it’s unlikely to be the winner – too many other competitors.” I don’t see, right now, better candidates for a Great Filter than some surprising technological innovation.
