I read this post, where a tentative implication of recent AI advancements was:
"AI risk is no longer a future thing, it’s a ‘maybe I and everyone I love will die pretty damn soon’ thing. Working to prevent existential catastrophe from AI is no longer a philosophical discussion and requires not an ounce of goodwill toward humanity, it requires only a sense of self-preservation."
Do you believe that or something similar? Are you living as if you believe that? What does living that life look like?