Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo
No, alas. However I do have this short summary doc I wrote back in 2021: The Master Argument for <10-year Timelines - Google Docs
And this sequence of posts making narrower points: AI Timelines - LessWrong
The XPT forecasters are so in the dark about compute spending that I just pretend they gave more reasonable numbers. I'm honestly baffled how they could be so bad. The most aggressive of them thinks that in 2025 the most expensive training run will be $70M, and that it'll take 6+ years to double thereafter, so that in 2032 we'll have reached $140M training run spending... do these people have any idea how much GPT-4 cost in 2022?!?!? Did they not hear about the investments Microsoft has been making in OpenAI? And remember that's what the most aggressive among them thought! The conservatives seem to be living in an alternate reality where GPT-3 proved that scaling doesn't work and an AI winter set in in 2020.
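To make concrete just how slow that forecast is, here's the trajectory it implies, a quick sketch using only the numbers quoted above ($70M in 2025, doubling every ~7 years):

```python
# Trajectory implied by the most aggressive XPT forecast quoted above:
# $70M most-expensive training run in 2025, doubling every ~7 years.
def projected_cost(year, base_year=2025, base_cost=70e6, doubling_years=7):
    """Forecasted cost of the most expensive training run in a given year."""
    return base_cost * 2 ** ((year - base_year) / doubling_years)

for year in (2025, 2028, 2032):
    # 2032 comes out to $140M, matching the doubling described above.
    print(year, f"${projected_cost(year) / 1e6:.0f}M")
```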
- I haven’t considered all of the inputs to Cotra’s model, most notably the 2020 training computation requirements distribution. Without forming a view on that, I can’t really say that ~53% represents my overall view.
Sorry to bang on about this again and again, but it's important to repeat for the benefit of those who don't know: The training computation requirements distribution is by far the biggest cruxy input to the whole thing; it's the input that matters most to the bottom line and is most subjective. If you hold fixed everything else Ajeya inputs, but change this distribution to something I think is reasonable, you get something like 2030 as the median (!!!) Meanwhile if you change the distribution to be even more extreme than Ajeya picked, you can push timelines arbitrarily far into the future.
Investigating this variable seems to have been beyond scope for the XPT forecasters, so this whole exercise is IMO merely that -- a nice exercise, to practice for the real deal, which is when you think about the compute requirements distribution.
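To illustrate why that one input dominates, here's a toy Monte Carlo in the spirit of a bio-anchors calculation. All the specific numbers (the 2020 compute level, the growth rate, the two requirement distributions) are illustrative assumptions of mine, not Ajeya's actual inputs; the point is only that shifting the requirements distribution swings the median by decades while everything else is held fixed:

```python
import random

random.seed(0)

def median_agi_year(req_median_log10flop, req_sigma,
                    flop_2020_log10=24.0, growth_log10_per_year=0.5,
                    n=10_000):
    """Toy bio-anchors-style draw: sample a log10 training-compute
    requirement, then find the year in which affordable compute
    (growing at a fixed log10 rate from an assumed 2020 level)
    crosses it. Every parameter here is an illustrative assumption."""
    years = []
    for _ in range(n):
        req = random.gauss(req_median_log10flop, req_sigma)
        # Requirements already met in 2020 resolve to 2020.
        years.append(2020 + max(0.0, req - flop_2020_log10) / growth_log10_per_year)
    years.sort()
    return years[n // 2]

# A modest requirements distribution puts the median around 2030;
# pushing the distribution higher pushes the median out by decades.
print(median_agi_year(29, 3))
print(median_agi_year(36, 3))
```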
Another nice story! I consider this to be more realistic than the previous one about open-source LLMs. In fact I think this sort of 'soft power takeover' via persuasion is a lot more probable than most people seem to think. That said, I do think that hacking and R&D acceleration are also going to be important factors, and my main critique of this story is that it doesn't discuss those elements and implies that they aren't important.

In addition to building more data centers, MegaAI starts constructing highly automated factories, which will produce the components needed in the data centers. These factories are either completely or largely designed by the AI or its subsystems with minimal human involvement. While a select few humans are still essential to the construction process, they are limited in their knowledge about the development and what purpose it serves.
It would be good to insert some paragraphs, I think, about how FriendlyFace isn't just a single model but rather is a series of models being continually improved in various ways perhaps, and how FriendlyFace itself is doing an increasing fraction of the work involved in said improvement. By the time there are new automated factories being built that humans don't really understand, presumably basically all of the actual research is being done by FriendlyFace and presumably it's far smarter than it was at the beginning of the story.
I think it mostly means that you should be looking to get quick wins. When calculating the effectiveness of an intervention, don't assume things like "over the course of an 85-year lifespan this person will be healthier due to better nutrition now" or "this person will have better education and thus more income 20 years from now." Instead just think: How much good does this intervention accomplish in the next 5 years? (Or, if you want to get fancy, use e.g. a 10%/yr discount rate.)
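The discount-rate version of this can be sketched in a few lines. The benefit figures here are made up purely for illustration; the takeaway is that at 10%/yr, even 85 years of constant annual benefits is worth only about 2.6x as much as the first 5 years:

```python
# Present value of a constant annual benefit stream under a
# 10%/yr discount rate (benefit numbers are illustrative only).
def discounted_value(benefit_per_year, years, rate=0.10):
    """Sum of benefit_per_year received at the end of each of `years` years,
    discounted back to the present at `rate`."""
    return sum(benefit_per_year / (1 + rate) ** t for t in range(1, years + 1))

quick_win = discounted_value(100, 5)     # benefits land in years 1-5
slow_payoff = discounted_value(100, 85)  # benefits over an 85-year lifespan
print(round(quick_win, 1), round(slow_payoff, 1))  # -> 379.1 999.7
```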
See Neartermists should consider AGI timelines in their spending decisions - EA Forum (effectivealtruism.org)
Also, if you search LW and Astral Codex Ten for comments I've made, you may find some useful ones.