
Chris Leong

Organiser @ AI Safety Australia and NZ
6542 karma · Joined · Sydney NSW, Australia

Bio

Currently doing local AI safety movement building in Australia and NZ.

Participation
7

Comments
1072

I don’t know the exact dates, but: a) proof-based methods seem to be receiving a lot of attention; b) def/acc is becoming more of a thing; c) there’s more focus on concentration-of-power risk (tbh, while there are real risks here, I suspect most work in this area is net-negative).

It would be useful to have a term along the lines of “outcome lock-in” to describe situations where the future is out of human hands.

That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.

Nonetheless, this seems like a useful concept for thinking about what the future might look like.

“And because microaggression and internalised racism (MIR) may come across as ‘culture war’ loaded terms (despite them also being academic terms)”

You seem to be assuming that just because something is an academic term, it isn't culture-war loaded, despite the fact that some of these fields don't actually see objectivity as having any value.

(I actually upvoted this post because it is very well written and I appreciate you taking all of this time to define a key term).

I'm not a fan of this negativity. Why not be grateful for all the money he's donated to the Bill and Melinda Gates Foundation instead?

Fascinating, I can't believe I've never heard this argument before.

  1. What do you think was the best point that Titotal made?
  2. I'm not saying it can't be questioned. And there wasn't a rule that you couldn't discuss it as part of the AI welfare week. That said, what's wrong with taking a week's break from the usual discussions that we have here to focus on something else? To take the discussion in new directions? A week is not that long.

I think it’s very valuable for you to state what the proposition would mean in concrete terms.

On the other hand, I think it’s quite reasonable for posts not to spend time engaging with the question of whether “there will be vast numbers of AIs that are smarter than us”.

AI safety is already one of the main cause areas here and there’s been plenty of discussion about these kinds of points already.

If someone has something new to say on that topic, then it’d be great for them to share it; otherwise, it makes sense for people to focus on discussing the parts of the topic that haven’t already been covered in the discussions on AI safety.

I’m pretty bullish on having these kinds of debates. While EA is doing well at having an impact in the world, the forum has started to feel intellectually stagnant in some ways. These debates provide a way to move the community forward intellectually, and that's something I've felt has been missing for a while.

You wrote that governance is more important than technical research. Have you considered technical work that supports governance? The AI Safety Fundamentals course has a week on this.

In any case, working in AI or AI safety would increase your credibility for any activism that you decide to engage in.

Exciting news! I don't know whether we should prioritise Digital Consciousness, but I think it's important for there to be de-confusion work happening in this space.
