
Søren Elverlin


Comments

The provided source doesn't show PauseAI affiliated people calling Sam Altman and Dario Amodei evil.

I do in fact believe that delaying AI by 5 years reduces existential risk by something like 10 percentage points.

This thread probably isn't the best place to hash it out, however.

Another org in the same space, composed of highly competent, experienced, and plugged-in people, would certainly be welcome, and could plausibly be more effective.

>PauseAI suffers from the same shortcomings most lobbying outfits do...

I'm confused about this section: Yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc-side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).

This doesn't feel like a criticism of PauseAI. Limited tractability is easily outweighed by a very high potential impact.

I strongly agree. Almost all of the criticism in this thread seems to start from assumptions about AI that are very far from those held by PauseAI. This thread really needs to be split up to factor that out.

As an example: If you don't think shrimp can suffer, then that's a strong argument against the Shrimp Welfare Project. However, that criticism doesn't belong in the same thread as a discussion of whether the organization is effective, because the two subjects are so different.

It's a long post, and it starts by talking about consciousness. 

Does it contain any response to the classic case for AI Risk, e.g. Bostrom's Superintelligence or Yudkowsky's List of Lethalities? 

Ajeya Cotra posted an essay on schlep in the context of AI two weeks ago:
https://www.planned-obsolescence.org/scale-schlep-and-systems/

I find that many of the topics she suggests as 'schlep' are actually very exciting and lots of fun to work on. This is plausibly why we see so much open-source effort in the space of LLM-hacking.

What would you think of as examples of schlep in other EA areas?

The 2016 Caplan-Yudkowsky debate (https://www.econlib.org/archives/2016/03/so_far_my_respo.html) fizzled out, with Bryan not answering Eliezer's last question. I'd like to know his answer.
