One way for us to find that out would be for the person who was sent the memo and thought it was a silly idea to make themselves known, and either show evidence that they shot it down or at least assert publicly that they didn't encourage it.
Since there seems to be little downside to doing so if that's what happened, then if no one makes such a statement we should increase our credence that they were seriously entertaining it.
By contrast, the core message of an “x-risk first” frame would be “if existential risks are plausible and soon, this is very bad and should be changed; you and your loved ones might literally die, and the things you value and worked on throughout your life might be destroyed, because of a small group of people doing some very reckless things with technology. It’s good and noble to try to make this not happen”.
I think a very important counterargument you don't mention is that, as with the Nanda and Alexander posts you cite, this paragraph, and hence the post overall, importantly equivocates between 'x-risk' and 'global catastrophic risk'. You mention greater transparency of the label, but it's not particularly transparent to say 'get involved in the existential risk movement because we want to stop you and everyone you love from dying', and then add 'by the way, that means only working on AI and occasionally on 100%-lethal airborne biopandemics, because we don't really care about nuclear war, great power conflict, less lethal pandemics or AI disasters, runaway climate change, or other events that would kill only 90% of people'.
I think focusing more on concrete ideas than philosophies is reasonable (though, following your second counterargument, I think it's desirable to try doing both in parallel for months or years rather than committing to either). But if we want to rebrand in this direction, I hope we'll either start focusing more on such 'minor' global catastrophes, or be more explicit (as David Nash suggested) about which causes we're actually prioritising, and to what extent. Either way, I don't think 'existential risk' is the appropriate terminology to use (I wrote more about why here).
This resonates. As a minor but pretty unambiguous example, I found it uncomfortable that at EAGL there was basically unlimited free wine. That seems like it would have cost enough to save at least a couple of AMF-lives, or allowed a few extra people to attend the conference, or whatever counterfactual seems apt, and it's hard to imagine what version of the 'it marginally improved my productivity' argument would be convincing.
Are you claiming that if longtermism is 80+% concerned with AI safety work (which they think, and we agree), and AI safety work turns out to be bad, we shouldn't update towards longtermism being bad? The first claim seems to be exactly what they think.
Scott:
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but pretty rarely, in ways that rarely affect real practice... Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes
You could argue that he means 'socially promote good norms on the assumption that the singularity will lock in much of society's then-standard morality', but 'shape them by trying to make AI human-compatible' seems a much more plausible reading of the last sentence to me, given the broader context of longtermism in practice.
Neel:
If you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA
He identifies as a non-longtermist (mea culpa), but presumably considers longtermism the source of these 'core action-relevant points of EA', since they certainly didn't come from the global poverty or animal welfare wings.
Also, at EAG London, Toby Ord estimated there were 'less than 10' people in the world working full time on general longtermism (as opposed to AI or biotech) - whereas the number of people who'd consider themselves longtermist is surely in the thousands.
My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time"
I don't think that's so important a distinction. Prominent longtermists have declared the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards 'longtermism tends to be harmful in practice'.
Not as much as if they were explicitly waving a longtermist banner, but the more impact we believe the longtermist movement has had on society, the stronger this update should be.
My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.
This seems no-true-Scotsmany. It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.
Great to see more independent actors moving into this space! Is there any interaction between MCF and Non-Linear Network?