Interesting, thanks! Any thoughts on how we should think about the relative contributions and specialization level of these different authors? ie, a world of maximally important intangibles might be one where each author was responsible for tweaking a separate, important piece of the training process.
My rough guess is that it's more like 2-5 subteams working on somewhat specialized things, with some teams being moderately more important and/or more specialized than others.
Does that framing make sense, and if so, what do you think?
Paul Christiano thinks there's a 1/3 chance Tesla gets fully self-driving cars by 2024, and expects that, conditional on that, their market cap has probably more than tripled to over $3T. That's pretty insane commercial value right there.
In scientific applications, one obvious thought is advances on AlphaFold that enable better drug design. I'm not a domain expert, but I think that might require significant improvements beyond AlphaFold v2: moving from crystal structure prediction to in-solution structure prediction and to protein-protein interaction modeling.
I've heard from two casual programmer friends that AI programming assistants like GitHub Copilot are impressively good. They make it easier to write various finicky pieces of code, and they help fix bugs. It seems to me like this could be really impactful if it turns out to help professional programmers; there's a lot of value to add, and potentially this could be turned towards AI programming itself...
Thanks for sharing this, Zoe!
I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don't agree with all your points or the ways you frame them.
Things that would make me excited to read future work, and IMO would make that work stronger:
With regard to harshness, I think part of the reason you get different responses is that you're writing in the genre of the academic paper. Since authors have to write in a particular formal style, it's ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it's not crazy to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe.
For example:
Under the TUA, an existential risk is understood as one with the potential to cause human
extinction directly or lead us to fail to reach our future potential, expected value, or
technological maturity. This means that what is classified as a prioritised “risk” depends on a
threat model that involves considerable speculation about the mechanisms which can result in the death of all humans, their respective likelihoods, and a speculative and morally loaded
assessment of what might constitute our inability to reach our potential.[...]
A risk perception that depends so strongly on speculation and yet-to-be-verified assumptions will inevitably (to varying degrees) be an expression of researchers’ personal preferences, biases, and imagination. If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.
As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it's easy to read into it some amount of value judgment around longtermism and longtermists.
People who worked on the campaign can speak to this better than I can, but I would give them more credit for doing reasonable due diligence. I have a strong expectation that:
I also think there can be a meaningful difference between knowing on paper that "having connections in the district is important" and "spending money can help you win" and "having a voting record is helpful", and seeing how those factors actually play out in practice. That said, I hope (and expect) that there was more "know-how" generated by the race than just the lessons reflected in this post.