Ajeya Cotra posted an essay on schlep in the context of AI 2 weeks ago:
https://www.planned-obsolescence.org/scale-schlep-and-systems/
I find that many of the topics she suggests as 'schlep' are actually very exciting and lots of fun to work on. This is plausibly why we see so much open-source effort in the space of LLM hacking.
What would you think of as examples of schlep in other EA areas?
The Budapest Memorandum provided security assurances, not security guarantees. And I believe this war has already caused enough damage to Russia that we can't talk about Russia "getting away with" the invasion.
The destruction of the Russian military should be expected to make the world safer primarily because it will prevent future Russian aggression.
A single data point: At a party at EAG, I met a developer who worked at Anthropic. I asked for his p(doom), and he said 50%. He told me he was working on AI capabilities.
I inquired politely about his views on AI safety, and he frankly did not seem to have given the subject much thought. I do not recall making any joke about "selling out", but I may have asked what effect he thought his actions would have on X-risk.
I don't recall anyone listening, so this was probably not the situation OP is referring to.
The current norm is that people have a right not to engage with a subject. This post appears to disagree with that norm. I base this on the following quotes:
Bostrom: "It is not my area of expertise, and I don't have any particular interest in the question. I would leave to others..."
pseudonym: "...this reflects terribly on Nick..."
It's a long post, and it starts by talking about consciousness.
Does it contain any response to the classic case for AI risk, e.g. Bostrom's Superintelligence or Yudkowsky's List of Lethalities?