Most people in EA are concerned about extinction risk. If the expected value of humanity's future is net positive, then we really should prevent human extinction. But there are many uncertainties, such as AI, the importance of s-risks, and how humanity will evolve... I think humanity's future is chaotic. Can we objectively estimate whether humanity's future is net positive or net negative, or can we only rely on our moral intuitions?
In addition to Fin's considerations and the excellent post by Jacy Anthis, I find Michael Dickens' analysis useful and instructive. What We Owe The Future also contains a discussion of these issues.
Thank you very much for your answers. To summarize: although humanity's destiny is hard to predict, since humans are mostly altruistic and good at improving our lives, the future is probably net positive. That makes a lot of sense to me. But I'm still very unsure about this, because:

1. We may need to consider suffering-focused ethics or effects like the hedonic treadmill.
2. Are humans really altruistic? Will we become more selfish when facing disasters?
3. Are we too optimistic and naive about our future?

I think these are moral uncertainties.