Hi, I'm Max :)
I was also a participant. I engaged less than I wanted to, mostly because of the effort this demanded and because my intrinsic motivation kept waning.
Some vague recollections:
What could have caused me to engage more with others?
OpenAI lobbied the European Union to argue that GPT-4 is not a ‘high-risk’ system. Regulators assented, meaning that under the current draft of the EU AI Act, key governance requirements would not apply to GPT-4.
Somebody shared this comment from Politico, which claims that the above article is not an accurate representation:
European lawmakers beg to differ: Both Socialists and Democrats’ Brando Benifei and Renew’s Dragoș Tudorache, who led Parliament’s work on the AI Act, told my colleague Gian Volpicelli that OpenAI never sent them the paper, nor reached out until 2023. When he met an OpenAI delegation in April, Tudorache said, the relevant text had already been agreed upon.
A simple analogy to humans applies here: Some of our goals would be easier to attain if we were immortal or omnipotent, but few choose to spend their lives in pursuit of these goals.
I feel like the "fairer" analogy would be optimizing for financial wealth, which is arguably also as close to omnipotence as one can get as a human, and then actually a lot of humans are pursuing this. Further, I might argue that currently money is much more of a bottleneck for people than longevity for ~everyone to pursue their ultimate goals. And for the rare exceptions (maybe something like the wealthiest 10k people?) those people actually do invest a bunch in their personal longevity? I'd guess at least 5% of them?
I spontaneously thought that the EA forum is actually a decentralizing force for EA, where everyone can participate in central discussions.
So I feel like the opposite, making the forum more central to the broader EA space relative to e.g. CEA's internal discussions, would be great for decentralization. And calling it "Zephyr forum" would just reduce its prominence and relevance.
Moral stigmatization of AI research would render AI researchers undateable as mates, repulsive as friends, and shameful to family members. Parents would disown adult kids involved in AI. Siblings wouldn’t return their calls. Spouses would divorce them. Landlords wouldn’t rent to them.
I think such a broad and intense backlash against AI research is extremely unlikely to happen, even if we put all our resources into bringing it about.
I'd be very surprised if AI were predominantly considered risk-free in long-timelines worlds. The more AI gets integrated into the world, the more it will interact with and cause harmful events/processes/behaviors/etc.; take, for example, the chatbot that apparently facilitated a suicide.
And I take Snoop Dogg's reaction to recent AI progress as somewhat representative of a more general attitude that will get stronger even with relatively slow and mostly benign progress:
Well I got a motherf*cking AI right now that they did made for me. This n***** could talk to me. I'm like, man this thing can hold a real conversation? Like real for real? Like it's blowing my mind because I watched movies on this as a kid years ago. When I see this sh*t I'm like what is going on? And I heard the dude, the old dude that created AI saying, "This is not safe, 'cause the AIs got their own minds, and these motherf*ckers gonna start doing their own sh*t. I'm like, are we in a f*cking movie right now, or what? The f*ck man?
I.e., it will continuously feel weird and novel and worth pondering where AI progress is going and where the risks are, and more serious people will join in doing this, which will in turn increase the credibility of those concerns.
Thanks for sharing, I like how concrete all of this is and think it's generally a really important practice.
One "hack" that came to mind that I think helped me feeling more relaxed about the prospect of even pretty harsh criticism: Think of some worst cases already in advance. Like when you do a project/plan your life, consider the hypotheses that e.g.
Internally, I expect even harsh criticism to then kinda feel like "Yeah, good point, but also, haha, I already kinda considered that and you merely caused me to update!" xD
Hmm, fwiw, I spontaneously think something like this is overwhelmingly likely.
Even in the (imo unlikely) case of AI research basically stagnating from now on, I expect AI applications to have effects that significantly affect the broader public and don't make them think anything close to "what a nothingburger" (as I've heard happened with nanotechnology). E.g., I'm thinking of things like the broad availability of personal assistants & AI companions, the automation of increasingly many tasks, and impacts on education and on the productivity of software developers.
And in case we also see a stagnation of significant applications, I expect this would be caused by some external event (say, a severe economic or financial crisis) that would likewise keep people from thinking of the current moment as crying wolf.
Most news outlets seem to jump on everything he does.
That's where my thoughts went: maybe he and/or CAIS thought that the statement would have a higher impact if reporting focused on other signatories. That Musk thinks AI is an x-risk seems fairly public knowledge anyway, so there's no big gain there.
Fwiw, despite the tournament feeling like a drag at points, I think I kept at it due to a mix of:
a) having committed to it and wanting to fulfill the commitment (which I suppose is conscientiousness),
b) generally strongly sharing the motivations for having more forecasting, and
c) having the money as a reward for good performance and for just keeping at it.