RedStateBlueState

Yeah, I guess that makes sense. But have other institutions actually made large efforts to preserve such info? Which institutions? Which info?

This might be a dumb question, but shouldn't we be preserving more elementary resources for rebuilding a flourishing society? Current EA is really only meaningful in a society with sufficiently abundant resources for people to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk catastrophes.

I don't think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g. by developing permitting requirements or creating guidelines for what counts as permissible AI research. Once this is done, the specifics of how AI is regulated are mostly up to that department, which can and will change its approach over time.

Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly what the regulations should look like, but those specifics are very unlikely to be written into law anyway. What we want right now is to create mechanisms for developing and enforcing safety standards. Similar arguments apply to internal safety standards at companies developing AI capabilities.

It seems really hard for us to know exactly when AGI (or ASI or whatever you want to call it) is actually imminent. Even if it were possible, however, I just don't think last-minute panicking about AGI would accomplish much. It's all but impossible to quickly create societal consensus that the world is about to end before any harm has actually occurred. This post seems to implicitly assume an unrealistic picture of "we will panic and then everyone will agree to immediately stop AI research." The smart thing to do is to develop mechanisms early and then use them when we get closer to crunch time.

I think most of the neglect of animal welfare comes from the fact that people who are deep enough into EA to accept all of its "weird" premises tend to donate to AI safety instead. Animal welfare sits in an awkward midway spot between "doesn't rest on controversial claims" and "maximal impact".

Let me make the contrarian point here that you don't have to build AGI to eventually get these benefits. An alternative, much safer approach would be to stop AGI entirely and try to augment human/biological intelligence with drugs or other biotech. Stopping AGI is unlikely to happen, and the biological route would take much longer, but it's worth bringing up in any argument about the risks vs. rewards of AI.

I am nervous about wading into partisan politics with AI safety. There's a chance that AI safety becomes strongly associated with one party because of a stunt like this, or, worse, becomes a laughingstock for both parties. Partisan politics is an incredibly adversarial environment, and I fear it could undermine the currently unpolarized nature of AI safety.

Ooh, now this is interesting!

Running a candidate is one thing; actually getting coverage for that candidate is another. If we could get a candidate onto the debate stage in one of the parties, that would be a big deal, but it would also be very hard. The one person I can think of who could actually get on the debate stage is Andrew Yang, if there ends up being a Democratic primary (which I am not at all sure about). If I recall correctly, he has talked about AI x-risk in the past. Even if that's wrong, I know he has interacted with EA before, so it's possible we could convince him to talk about it. He probably won't make it his entire (or even main) platform, though.

Without Andrew Yang on the debate stage, I'm not sure how much coverage we could really expect. I made a conscious effort not to pay attention to random non-debate candidates last election, so others may have a better sense, but my impression is that non-debate candidates got very little visibility. Still maybe more than nothing, but certainly not a big splash.

Ahh, I didn't read it as you talking about the effects of Eliezer's past outreach. I strongly buy "this time is different", and not just because of the salience of AI in tech. The type of media coverage we're getting is very different: the former CEO of Google talking about AI risk, and a journalist asking about AI risk in the White House press briefing, are unlike anything we've seen before. We're reaching different audiences here. The AI landscape is also very different; AI risk arguments are a lot more convincing when we have a very capable AI to point to (GPT-4) and when we can cite facts like "a majority of AI researchers think p(AI killing humanity) > 10%".

But even if you believe this time won't be different, I think we need to think critically about which world we would rather live in:

  • The current one, where AI capabilities research keeps humming along with what seems to be inadequate AI safety research, and nobody outside of EA is really paying attention to AI safety. All we can do is hope that AI risk isn't as plausible as Eliezer thinks and that Sam Altman is really careful.
  • One where there is another SOTA AI capabilities lab, maybe owned by the government, but AI is treated as a dangerous and scary technology that must be handled with care. We have more alignment research, the government keeps tabs on AI labs to make sure they're not doing anything stupid (and maybe adds red tape that slows them down), and AI capabilities researchers everywhere avoid obviously stupid things.

Let's also consider the history here. Early Eliezer advocating for AGI to prevent nanotech from killing all of humanity was probably bad. But I am unconvinced that his advocacy from then until around 2015 was net-negative. My understanding is that, although his work led to the creation of AI capabilities labs, nobody was working on alignment at the time anyway. The reflex of "AI capabilities research bad" only holds if there is sufficient progress on AI safety in the meantime.

One last note, on "power". Assuming Eliezer isn't horribly wrong about things, the worlds in which we survive AI are those where AI is widely acknowledged as extremely powerful. We're just not going to make it if policy-makers and/or tech people don't understand what they are dealing with. Maybe there are reasons to delay this understanding by a few years - I personally strongly oppose that - but let's be clear that this is the tradeoff.

Not to be rude, but this seems like a lot of worrying about nothing. "AI is powerful and uncontrollable and could kill all of humanity, like seriously" is not a complicated message. I'm actually quite worried if AI safety people are hesitant to communicate because they think the misinterpretation will be as bad as you're describing here; that is a very strong and untested assumption, and the opportunity cost of not pursuing media coverage is enormous.

The primary purpose of media coverage is to introduce the problem, not to immediately push for a solution. I listed ways that different actors taking the problem more seriously would lead to progress; I'm not sure a delay is actually the main impact. On that last point, note that (as I expected when it was first released) the main effect of the FLI letter is that many more people have heard of AI safety and those who had heard of it are taking it more seriously (the latter based largely on Twitter observations), not that a delay is actually being considered.

I don't actually know where you're getting "these issues in communication...historically have led to a lot of x-risk" from. There was no large public discussion about nuclear weapons before their initial use (and afterwards we settled into the most reasonable approach available for preventing nuclear war, namely MAD), nor was there one about gain-of-function research. The track record of "tell people about problems and they become more concerned about those problems", on the other hand, is very good.

(also: premature??? really???)