I think some were false. For example, I don't get the stuff about mini-drones undermining nuclear deterrence, as size will constrain your batteries enough that you won't be able to do much of anything useful. Maybe I'm missing something (modulo nanotech).
I think it's very plausible scaling holds up, it's plausible AGI becomes a natsec matter, it's plausible it will affect nuclear deterrence (via other means), for example.
What do you disagree with?
I agree with many of Leopold's empirical claims, timelines, and analyses. I'm acting on them myself in my planning, as something like a mainline scenario.
Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* I will give you a view into this small elite group, the only ones who are situationally aware
* the inner circle was long TSMC way before you
* if you believe me, you can get 100x richer -- there's still alpha, you can still be early
* this geopolitical outcome is "inevitable" (sic!)
* in the future the coolest and most elite group will work on The Project; "see you in the desert" (sic)
* etc.
Combined with a lot of praise-filled retweets on launch day that were clearly coordinated behind the scenes, it gives me the feeling the piece was deliberately written to meme a narrative into existence via self-fulfilling prophecy, rather than to infer a forecast via analysis.
As a sidenote, this felt to me like an indication of how different the AI-safety-adjacent community is now from when I joined it about a decade ago. In the early days of this space, I expect a piece like this would have been something like "epistemically cancelled": fairly strongly decried as violating important norms around reasoning and cooperation. I actually expect that had someone written this publicly in 2016, they would plausibly have been uninvited as a speaker from any EAGs in 2017.
I don't particularly want to debate whether these epistemic boundaries were correct --- I'd just like to claim that, empirically, I think they de facto would have been enforced. Though, if others who have been around have a different impression of how this would've played out, I'd be curious to hear.
Happened to come across this old comment thread discussing whether holding too much Facebook stock was too risky. In the four years since the comment on Sep 21, 2020, Meta stock is up more than 100% and at an all-time high. Before reaching that point, however, it also suffered a drawdown of as much as 60% relative to the Sep 2020 value, which occurred in late 2022 (notably, around the time of the FTX collapse).
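To make the arithmetic explicit, here's a minimal sketch of how those two figures relate to a fixed reference price. The numbers below are purely hypothetical round figures chosen to match the stated percentages, not actual Meta quotes:

```python
# Drawdown and gain relative to a fixed reference value:
#   drawdown = (reference - trough) / reference
#   gain     = (peak - reference) / reference
reference = 100.0   # hypothetical Sep 2020 price
trough = 40.0       # hypothetical late-2022 low: a 60% drawdown vs. the reference
peak = 210.0        # hypothetical later all-time high: up >100% vs. the reference

drawdown = (reference - trough) / reference
gain = (peak - reference) / reference
print(f"drawdown: {drawdown:.0%}, gain since reference: {gain:.0%}")
# → drawdown: 60%, gain since reference: 110%
```

The point of the sketch: a 60% drawdown and an eventual >100% gain can both be true of the same position, depending only on when you look.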
(I haven't read the full comment here and don't want to express opinions about all its claims. But for people who saw my comments on the other post, I want to state for the record that based on what I've seen of Richard Hanania's writing online, I think Manifest next year would be better without him. It's not my choice, but if I organised it, I wouldn't invite him. I don't think of him as a "friend of EA".)
No, I think this is again importantly wrong.
First, this was published in the Guardian US, not the Guardian.
The Guardian US does not have half the traffic of the NYTimes. It has about 15% of the traffic, as far as I can tell (source). The Guardian US has 200k Twitter followers; The Guardian has 10M Twitter followers (so 2% of the following).
Second, I scrolled through all the tweets in the link you sent showing "praise". I see the following:
You can of course compare this to:
So I think this just clearly proves my point: the majority of the engagement with this article on Twitter is just commentary on it being a terrible hit piece.
The tiny wave of praise came mostly from folks well known for bad faith attacks on EA, a strange trickle of no-to-low engagement retweets, 1-2 genuine professors, and, well, Shakeel.
Ah! I was wrong to claim you made "no" such comments. I've edited my above comment.
Now, I of course notice that you only mention "lots of mistakes" after Jeffrey objects, and after it has become clear that there is a big outpouring of criticism calling it a hit piece, and only a little support.
Why were you glad about it before then?
Did you:
In the follow-up tweet you say: "Glad to see the press picking [this story] up (though wish they made the rationalist/EA distinction clearer!)"
So far as I've found, you've made no comments indicating that you disagree with the problematic methodology of the piece, and two comments saying you were "delighted" and "glad" with parts of it. I think my quote is representative. I've updated my comment for clarity.
Nonetheless: how would you prefer to be quoted?
EDIT: Shakeel posted a comment pointing to a tweet of his acknowledging "mistakes" in the post, and I was wrong to claim there were no such comments.
(Instead of making every comment in both places, I'll continue the discussion over at LessWrong: https://www.lesswrong.com/posts/i5pccofToYepythEw/against-aschenbrenner-how-situational-awareness-constructs-a#Hview8GCnX7w4XSre )