This doesn't directly address your question, but I think EAs generally spend too much time engaging with e/acc* folks online, and that we should mostly just ignore them. The arguments put out by e/acc seem to be unusually bad (and, from what I can tell, are likely made with an unusually high degree of motivated reasoning), and they're also not a politically or socially powerful or persuasive group.
* Note – I'm talking specifically about e/acc. There are other people who are AI accelerationists more generally, and I think it is important for us to engage with them, for various reasons.
Can you be a bit more specific about the exact implementation of approval voting you have in mind here? Specifically, I'm wondering:
> Maybe occasional lunches and a coffee machine
Flagging that I think lunch every single day may be a great productivity investment for an org to make – the alternative is likely people leaving the office to find lunch and taking a significantly longer break. The cost is only several dollars a day per employee, versus a benefit of perhaps several times that. Ditto for having good coffee and snacks available at all times.
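As a rough back-of-envelope sketch of that cost–benefit claim (all numbers here are my own illustrative assumptions, not figures from the comment above):

```python
# Back-of-envelope: does daily catered lunch pay for itself?
# All numbers are illustrative assumptions.
lunch_cost_per_day = 10   # $/employee/day for catered lunch
fully_loaded_rate = 50    # $/hr total cost of an employee's time
minutes_saved = 30        # shorter break vs. leaving the office for lunch

time_value_saved = fully_loaded_rate * minutes_saved / 60
print(f"Cost: ${lunch_cost_per_day}/day, benefit: ${time_value_saved:.0f}/day")
# -> Cost: $10/day, benefit: $25/day (~2.5x the cost under these assumptions)
```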
> For AGI to do most human work for <$25/hr by 2043, many things must happen.
I don't think this is necessarily the right metric, for the same reason that I think the following statement doesn't hold:
> transformative AGI is a much higher bar than... even the unambiguous attainment of expensive superhuman AGI
Basically, while the contest rules do say, "By 'AGI' we mean something like 'AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less'", they then go on to clarify, "What we’re actually interested in is the potential existential threat posed by advanced AI systems." I think the natural reading of this definition is that AGI which caused (or severely threatened to cause) human extinction or the permanent disempowerment of humanity would qualify as transformative AGI. This interpretation is also more consistent with the common definition that TAI is "AI having an impact at least as large as the Industrial Revolution." Further, I think even expensive superhuman AGI would threaten to cause an existential catastrophe in a way that would qualify it under my interpretation.
If we then look at your list under my interpretation, we no longer have to worry about "AGI inference costs drop below $25/hr (per human equivalent)" or "We invent and scale cheap, quality robots", and possibly not about others either (such as "We massively scale production of chips and power"). If we ignore just those two cruxes (and assume your other numbers hold), then we're up to 4%. If we further ignore the one about chips & power, then we're up to 9%.
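For concreteness, here's the arithmetic behind those figures: removing a crux amounts to dividing its probability back out of the joint estimate. The per-crux probabilities below (16%, 60%, 46%) and the ~0.4% joint figure are my reading of the essay, so treat them as assumptions worth double-checking against the source:

```python
# Removing cruxes = dividing their probabilities out of the joint estimate.
# Figures are my reading of the essay; double-check against the source.
joint = 0.004        # essay's overall ~0.4% estimate for transformative AGI by 2043
p_inference = 0.16   # "AGI inference costs drop below $25/hr"
p_robots = 0.60      # "We invent and scale cheap, quality robots"
p_chips = 0.46       # "We massively scale production of chips and power"

without_two = joint / (p_inference * p_robots)  # ignore inference cost & robots
without_three = without_two / p_chips           # also ignore chips & power
print(f"{without_two:.1%}, {without_three:.1%}")  # -> 4.2%, 9.1%
```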
Yeah, I was going to post that tweet. I'd also like to mention my related thread arguing that if you have a history of crying wolf, then when wolves do start to appear, you’ll likely be turned to as a wolf expert.
Oh wow, that's actually pretty good