The definition of existential risk as ‘humanity losing its long-term potential’ in Toby Ord's The Precipice could be specified further. Without (perhaps) loss of generality, assuming a finite total value in our universe, one could split existential risks into two broad categories (a minimal formal sketch follows the two categories below):
Extinction risks (X-risks): humanity's share of total value goes to zero. Examples could be extinction from pandemics, extreme climate change, or some natural event.
Agential risks (A-risks): humanity's share of total value could be greater than in the X-risk scenarios, but it remains strictly dominated by the share of total value held by misaligned agents. Examples could be misaligned institutions, AIs, or loud aliens controlling most of the value in the universe, with whom little gains from trade could be hoped for.
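As a minimal formal sketch of this distinction (the symbols $V$, $H$, and $M$ are introduced here purely for illustration and are not from The Precipice): let $V < \infty$ be the finite total value in the universe, $H$ the value ultimately held by humanity, and $M$ the value ultimately held by misaligned agents, with $H + M \le V$. Then:

\[
\text{X-risk:}\quad \frac{H}{V} = 0
\qquad\qquad
\text{A-risk:}\quad 0 < \frac{H}{V} \ \text{ and }\ H < M
\]

In words: in an X-risk scenario humanity's share of value vanishes entirely, while in an A-risk scenario humanity retains some value but is strictly outweighed by the misaligned agents' share.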