Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).
Solving the AGI alignment problem demands a herculean level of ambition, far beyond what we're currently bringing to bear. Dear Reader, grab a pen or open a Google Doc right now and answer these questions:
1) What would you do right now if you became 5x more ambitious?
2) If you believe we all might die soon, why aren't you doing the ambitious thing?
Hi Vasco! I'm keen for you to paint me a persona. Specifically: who is the kind of person that thinks sinking 10k into a bet with an EA (i.e. you) is a better use of money than all the other ways to help make AI go better (by making it as a donation)?
Even if you were big on bets for signalling purposes, I think it's easy to argue that making one of this size with an EA on a niche forum isn't the way to do it (e.g. find someone more prominent and influential on X or similar).
I think people need to start considering how pros and cons change if we get TAI in ≤ 4 years.
Sure, you built some career capital, but what was the cost (especially counterfactually)?
We might need more people to start asking the question: how can I have a hugely positive impact in the next 12 months?
The fucking arrogance is astonishing. It's hard to empathise with someone who thinks they can just make this decision on behalf of <gestures broadly> literally everyone.
We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else.
This comment is in no man’s land: not funny enough to be a good joke, not relevant enough to add value.
Consider asking an LLM for feedback before posting. Unless the goal is to troll?