richard_ngo

7075 karma

Bio

Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com

Sequences (2)

Replacing Fear
EA Archives Reading List

Comments (313)

"If Hasan had said that more recently, or I was convinced he still thought that, then I would agree he should not be invited to Manifest."

My claim is that the Manifest organizers should have the right to invite him even if he'd said that more recently. But I appreciate you giving your perspective, since I did ask for that (just clarifying the "agree" part).

"Having said that, given that there is a very clear non-genocidal reading, I do not think it is a clear example of hate speech in quite the same sense as Hanania's 'animals' remark."

I have some object-level views about the relative badness, but my main claim is that this isn't a productive type of analysis for a community to end up doing, partly because it's so inherently subjective. So I support drawing lines that help us avoid needing to do this analysis (like "organizers are allowed to invite you either way").

"Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?"

Of course this is all a spectrum, but I don't believe this implication, in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined towards being some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeking, it would probably still not do a great job at pushing the intellectual frontier, because it wouldn't be playing to its strengths (and meanwhile it would trade off a lot of its power-seeking ability). I think the converse is true for EA.

I broadly endorse Jeff's comment above. To put it another way, though: I think many (but not all) of the arguments from the Kolmogorov complicity essay apply whether the statements which are taboo to question are true or false. As per the quote at the top of the essay:

"A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it. Scientists go looking for trouble."

That is: good scientists will try to break a wide range of conventional wisdom. When the conventional wisdom is true, then they will fail. But the process of trying to break the conventional wisdom may well get them in trouble either way, e.g. because people assume they're pushing an agenda rather than "just asking questions".

The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.

I agree that extreme truth-seeking can be counterproductive. But in most worlds I don't think that EA's impact will come from arguing for highly controversial ideas; and I'm not advocating for extreme truth-seeking like, say, hosting public debates on the most controversial topics we can think of. Rather, I think its impact will come from advocating for not-super-controversial ideas, which it will be able to generate in part because it has avoided the effects I listed in my comment above.

One person I was thinking about when I wrote the post was Mehdi Hasan. According to Wikipedia:

"During a sermon delivered in 2009, quoting a verse of the Quran, Hasan used the terms 'cattle' and 'people of no intelligence' to describe non-believers. In another sermon, he used the term 'animals' to describe non-Muslims."

Hasan has spoken several times at the Oxford Union and also in a recent public debate on antisemitism, so clearly he's not beyond the pale for many.

I personally also think that the "from the river to the sea" chant is pretty analogous to, say, white nationalist slogans. It does seem to have a complicated history, but in the wake of the October 7 attacks, its association with Hamas should, I think, put it beyond the pale. Nevertheless, it has been defended by Rashida Tlaib. In general I am in favor of people being able to make arguments like hers, but I suspect that if Hanania were to argue that a white nationalist slogan should be interpreted positively, it would be counted as a strong point against him.

I expect that either Hasan or Tlaib, were they interested in prediction markets, would have been treated in a similar way to Hanania by the Manifest organizers.

I don't have more examples off the top of my head because I try not to follow this type of politics too much. I would be pretty surprised if an hour of searching didn't turn up a bunch more though.

I wasn't at Manifest, though I was at LessOnline beforehand. I strongly oppose attempts to police the attendee lists that conference organizers decide on. I think this type of policing makes it much harder to have a truth-seeking community. I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.

Why does enforcing deplatforming make truth-seeking so much harder? I think there are (at least) three important effects.

First is the one described in Scott's essay on Kolmogorov complicity. Selecting for people willing to always obey social taboos also selects hard against genuinely novel thinkers. But we don't need to take every idea a person has on board in order to get some value from them - we should rule thinkers in, not out.

Secondly, a point I made in this tweet: taboo topics tend to end up expanding, for structural reasons (you can easily appeal to taboos to win arguments). So over time it becomes more and more costly to quarantine specific topics.

Thirdly, it selects against people who are principled in their defense of truth-seeking. My sense is that the people who organized Manifest are being very principled, and would also be willing to include left-wing people with potentially-upsetting views. For example, there's been a lot of antisemitism from prominent left-wing thinkers lately. If one of them wanted to attend Manifest, I think it would be reasonable for Jews to be upset. But I also expect that they'd be treated pretty similarly to Hanania (e.g. allowed to come and host sessions, name used in promotional materials). I'm curious what critics of Manifest think should be done in these cases.

To be clear, I'm not saying all events should take a stance like Manifest's. I'm just saying that I strongly support their right to do so.

Eh, I personally think of some things in the top 10 as "nowhere near" the most important issues, because of how heavy-tailed cause prioritization tends to be.

When you're weighing existential risks (or other things which steer human civilization on a large scale) against each other, effects are always going to be denominated in a very large number of lives. And this is what OP said they were doing: "a major consideration here is the use of AI to mitigate other x-risks". So I don't think the headline numbers are very useful here (especially because we could make them far, far higher by counting future lives).

"It follows from alignment/control/misuse/coordination not being (close to) solved."

"AGIs will be helping us on a lot of tasks", "collusion is hard" and "people will get more scared over time" aren't anywhere close to overcoming it imo.

These are what I mean by the vague intuitions.

"I think it should be possible to formalise it, even."

Nobody has come anywhere near doing this satisfactorily. The most obvious explanation is that they can't.

The issue is that both sides of the debate lack gears-level arguments. The ones you give in this post (like "all the doom flows through the tiniest crack in our defence") are more like vague intuitions; equally, on the other side, there are vague intuitions like "AGIs will be helping us on a lot of tasks" and "collusion is hard" and "people will get more scared over time" and so on. 
