In this question, I am assuming ethical longtermism: that our objective is to maximize total well-being over the long term. Many longtermist EAs seem to believe that the highest-impact way to improve the far future is to reduce existential risks to humanity. However, there are other ways to improve the far future: speeding up technological progress, speeding up moral progress, improving institutions, settling space, and so on. (I think of these as improving the quality of the far future, conditional on avoiding an existential catastrophe.) What are some arguments for why reducing existential risk is more pressing than these other levers, or vice versa?
[Edit: I'm especially interested in which lever is most pressing when we take the welfare of non-human animals into account.]