
Søren Elverlin

142 karma

Comments (13)

It's a long post, and it starts by talking about consciousness. 

Does it contain any response to the classic case for AI Risk, e.g. Bostrom's Superintelligence or Yudkowsky's List of Lethalities? 

Ajeya Cotra posted an essay two weeks ago on schlep in the context of AI:
https://www.planned-obsolescence.org/scale-schlep-and-systems/

I find that many of the topics she suggests as 'schlep' are actually very exciting and a lot of fun to work on. This is plausibly why we see so much open-source effort in the space of LLM-hacking.

What would you think of as examples of schlep in other EA areas?

The 2016 Caplan-Yudkowsky debate (https://www.econlib.org/archives/2016/03/so_far_my_respo.html) fizzled out, with Bryan not answering Eliezer's last question. I'd like to know his answer.

The Budapest Memorandum provided security assurances, not security guarantees. And I believe this war has already caused enough damage to Russia that we can't talk about Russia "getting away with" the invasion.

The destruction of the Russian military should be expected to make the world safer, primarily because it will prevent future Russian aggression.

The police are not bound by the "No drama" rule. If you steal money, you can expect the police to be "dramatic" about it.

A single data point: At a party at EAG, I met a developer who worked at Anthropic. I asked for his p(doom), and he said 50%. He told me he was working on AI capabilities.

I inquired politely about his views on AI safety, and he frankly did not seem to have given the subject much thought. I do not recall making any joke about "selling out", but I may have asked what effect he thought his actions would have on X-risk. 

I don't recall anyone listening, so this was probably not the situation OP is referring to. 

I appreciate cultural works that create common knowledge that the AGI labs are behaving deeply unethically.

As for the specific scenario, point 17 seems to be contradicted by the orthogonality thesis / lack of moral realism.

The current norm is that people have a right to not engage with a subject. It looks to me like this post disagrees with this norm. I base this on the following quotes:

Bostrom: It is not my area of expertise, and I don’t have any particular interest in the question. I would leave to others...
pseudonym: ...this reflects terribly on Nick...

Bostrom is claiming the right to not engage with a subject, and this right is necessary for an inclusive EA movement.

I downvoted this post because it is propagating a norm that I expect to be damaging. 
