
David_Althaus

Thanks!

I haven't engaged much with the psychodynamic literature, or only indirectly (since some therapy modalities like CFT or ST are quite eclectic and thus reference various psychodynamic concepts), but perhaps @Clare_Diane has. Is there any specific construct, paper/book, or test that you have in mind here?

I'm not familiar with the SWAP but it looks very interesting (though Clare may know it), thanks for mentioning it! As you most likely know, there is even a National Security Edition developed in collaboration with the US government.

 

I just realized that in this (old) 80k podcast episode[1], Holden makes similar points and argues that aligned AI could be bad. 

My sense is that Holden alludes to both malevolence ("really bad values, [...] we shouldn't assume that person is going to end up being nice") and ideological fanaticism ("create minds that [...] stick to those beliefs and try to shape the world around those beliefs", [...] "This is the religion I follow. This is what I believe in. [...] And I am creating an AI to help me promote that religion, not to help me question it or revise it or make it better."). 

Longer quotes below (emphasis added): 

Holden: “The other part — if we do align the AI, we’re fine — I disagree with much more strongly. [...] if you just assume that you have a world of very capable AIs, that are doing exactly what humans want them to do, that’s very scary. [...]

Certainly, there’s the fact that because of the speed at which things move, you could end up with whoever kind of leads the way on AI, or is least cautious, having a lot of power — and that could be someone really bad. And I don’t think we should assume that just because that if you had some head of state that has really bad values, I don’t think we should assume that that person is going to end up being nice after they become wealthy, or powerful, or transhuman, or mind uploaded, or whatever — I don’t think there’s really any reason to think we should assume that.

And then I think there’s just a bunch of other things that, if things are moving fast, we could end up in a really bad state. Like, are we going to come up with decent frameworks for making sure that the digital minds are not mistreated? Are we going to come up with decent frameworks for how to ensure that as we get the ability to create whatever minds we want, we’re using that to create minds that help us seek the truth, instead of create minds that have whatever beliefs we want them to have, stick to those beliefs and try to shape the world around those beliefs? I think Carl Shulman put it as, “Are we going to have AI that makes us wiser or more powerfully insane?”

[...] I think even if we threw out the misalignment problem, we’d have a lot of work to do — and I think a lot of these issues are actually not getting enough attention.”

Rob Wiblin: Yeah. I think something that might be going on there is a bit of equivocation in the word “alignment.” You can imagine some people might mean by “creating an aligned AI,” it’s like an AI that goes and does what you tell it to — like a good employee or something. Whereas other people mean that it’s following the correct ideal values and behaviours, and is going to work to generate the best outcome. And these are really quite separate things, very far apart.

Holden Karnofsky: Yeah. Well, the second one, I just don’t even know if that’s a thing. I don’t even really know what it’s supposed to do. I mean, there’s something a little bit in between, which is like, you can have an AI that you ask it to do something, and it does what you would have told it to do if you had been more informed, and if you knew everything it knows. That’s the central idea of alignment that I tend to think of, but I think that still has all the problems I’m talking about. Just some humans seriously do intend to do things that are really nasty, and seriously do not intend — in any way, even if they knew more — to make the world as nice as we would like it to be.

And some humans really do intend and really do mean and really will want to say, you know, “Right now, I have these values” — let’s say, “This is the religion I follow. This is what I believe in. This is what I care about. And I am creating an AI to help me promote that religion, not to help me question it or revise it or make it better.” So yeah, I think that middle one does not make it safe. There might be some extreme versions, like, an AI that just figures out what’s objectively best for the world and does that or something. I’m just like, I don’t know why we would think that would even be a thing to aim for. That’s not the alignment problem that I’m interested in having solved.

  1. ^

    I'm one of those bad EAs who don't listen to all 80k episodes as soon as they come out. 

Thanks, Mike. I agree that the alliance is fortunately rather loose in the sense that most of these countries share no ideology. (In fact, some of them should arguably be ideological enemies, e.g., Islamic theocrats in Iran and Maoist communists in China.)

But I worry that this alliance is held together by a hatred of (or ressentiment toward) Western secular democratic principles, for ideological and (geo-)political reasons. Hatred can be an extremely powerful and unifying force. (Many political/ideological movements are arguably primarily defined, united, and motivated by what they hate, e.g., Nazism by hatred of Jews, communism by hatred of capitalists, racism by hatred of other ethnicities, Democrats by hatred of Trump and racists, Republicans by hatred of the woke and communists, etc.)

So I worry that as long as Western democracies continue to influence international affairs, this alliance will continue to exist. And I certainly hope that Western democracies will remain powerful; I worry that the world (and the future) will become a worse place if they don't.

Another disagreement may be related to tractability, i.e., how easy it is to contribute:

For example, we mentioned above that the three ways totalitarian regimes have been brought down in the past are through war, resistance movements, and the deaths of dictators. Most of the people reading this article probably aren’t in a position to influence any of those forces (and even if they could, it would be seriously risky to do so, to say the least!).

Most EAs may not be able to work on these topics directly, but there are various options that allow you to do something indirectly:

- working in (foreign) policy or politics, or working on financial reforms that make money laundering harder for autocratic states like Russia (again, cf. Autocracy Inc.)
- becoming a journalist and writing about such topics (e.g., doing investigative journalism on corruption in autocratic regimes), and generally moving the discussion towards more important topics and away from currently trendy but less important ones
- working at think tanks that protect democratic institutions (Stephen Clare lists several)
- working on AI governance (e.g., info sec, export controls) to reduce the risk of autocratic regimes gaining access to advanced AI (again, Stephen Clare already lists this area)
- probably several more career paths that we haven't thought of

In general, it doesn't seem harder to have an impactful career in this area than in, say, AI risk. Depending on your background and skills, it may even be a lot easier; e.g., in order to do valuable work on AI policy, you often need to understand both policy/politics and technical fields like computer science and machine learning. Of course, this area is arguably more crowded (though AI is becoming more crowded every day).

I just read Stephen Clare's excellent 80k article about the risks of stable totalitarianism.

I've been interested in this area for some time (though my focus is somewhat different) and I'm really glad more people are working on this. 

In the article, Stephen puts the probability that a totalitarian regime will control the world indefinitely at about 1 in 30,000. My probability that a totalitarian regime will control a non-trivial fraction of humanity's future is considerably higher (though I haven't thought much about this).

One point of disagreement may be the following. Stephen writes: 

There’s also the fact that the rise of a stable totalitarian superpower would be bad for everyone else in the world. That means that most other countries are strongly incentivized to work against this problem.

This is not clear to me. Stephen most likely understands the relevant topics far better than I do, but I worry that autocratic regimes often cooperate. This has happened historically—e.g., Nazi Germany, fascist Italy, and Imperial Japan—and also seems to be happening today. My sense is that Russia, China, Venezuela, Iran, and North Korea have formed some type of loose alliance, at least to some extent (see also Anne Applebaum's Autocracy Inc.). Perhaps this doesn't apply to strictly totalitarian regimes (though it did for Germany, Italy, and Japan in the 1940s).

Autocratic regimes control a non-trivial fraction (perhaps 20-25%?) of world GDP. A naive extrapolation could thus suggest that some type of coalition of autocratic regimes will control 20-25% of humanity's future (assuming these regimes don't reform themselves).

Depending on the offense-defense balance (and on how people trade off reducing suffering/injustice against other values such as national sovereignty, non-interference, isolationism, personal costs to themselves, etc.), this arrangement may very well persist.

It's unclear how much suffering such regimes would create—perhaps there would be fairly little; e.g., in China, ignoring political prisoners, the Uyghurs, etc., most people are probably doing fairly well (though a lot of people in, say, Iran aren't doing too well; more on this below). But it's not super unlikely that there would exist enormous amounts of suffering.

So, even though I agree that it's very unlikely that a totalitarian regime will control all or even the majority of humanity's future, it seems considerably more likely to me (perhaps even more than 1%) that a totalitarian regime—or a regime that follows some type of fanatical ideology—will control a non-trivial fraction of the universe and cause astronomical amounts of suffering indefinitely. (E.g., religious fanatics often have extremely retributive tendencies and may value the suffering of dissidents or non-believers. In a pilot study, I found that 22% of religious participants at least tentatively agreed with the statement "if hell didn't exist, we should create hell in order to punish all the sinners". Senior officials in Iran have ordered the rape of female prisoners so that they would end up in hell, or at least be prevented from going to heaven (IHRDC, 2011; IranWire, 2023). One might argue that religious fanatics (with access to AGI) will surely change their irrational beliefs once it's clear they are wrong. Maybe. But I don't find it implausible that at least some people (and especially religious or political fanatics) will conclude that giving up their beliefs is the greatest possible evil and decide to use their AGIs to align reality with their beliefs, rather than vice versa.)

To be clear, all of this is much more important from a s-risk focused perspective than from an upside-focused perspective.

Thanks for this[1]. I've been interested in this area for some time as well.

Two organizations/researchers in this area that I'd like to highlight (and get others' views on) are Protect Democracy (the executive director is actually a GiveDirectly donor) and Lee Drutman—see, e.g., his 2020 book Breaking the Two-Party Doom Loop: The Case for Multiparty Democracy in America. For a shorter summary, see Drutman's Vox piece (though Drutman has become less enthusiastic about ranked-choice voting and more excited about fusion voting).

I'd be excited for someone to write up a really high-quality report on how to best reduce polarization, political dysfunction, and democratic backsliding in the US and identify promising grants in this area. (If anyone is interested, feel free to contact me, as I'm potentially interested in making grants here, though I cannot promise anything, obviously.)

 

  1. ^

    ETA (July 25th): I only managed to fully read the post now. I also think the post is a little bit too partisan. My sense is that Trump and his supporters are clearly the main threat to US democracy and much worse than the Democrats/left. However, the Democrats/left also have some radicals, and some (parts of) cultural and elite institutions promote illiberal "woke" ideology and extreme identity politics (e.g., DiAngelo's white fragility) that give fuel to Trump and his base (see, e.g., Urban (2023), Hughes (2024), Bowles (2024), or McWhorter (2021)). I wish they would stop doing that. It's also not helpful to brand everyone who is concerned about illegal immigration and Islam as racist and Islamophobic. I think there are legitimate concerns to be had here (especially regarding radical Islam), and telling people that they are bigoted if they have any concerns will drive some of them towards Trump.

Thanks.

I guess I agree with the gist of your comment. I'm very worried about extremist/fanatical ideologies, but more on this below.

because every ideology is dangerous


I guess it depends on how you define "ideology". Let's say "a system of ideas and ideals". Then it seems evident that some ideologies are less dangerous than others, and some actually seem beneficial (e.g., secular humanism, the Enlightenment, or EA). (Arguably, the scientific method itself is an ideology.)

I'd argue that ideologies are dangerous if they are fanatical and extreme. The main characteristics of such fanatical ideologies include dogmatism (extreme irrationality and epistemic & moral certainty), a dualistic/Manichean worldview that casts in-group members as good and everyone who disagrees as irredeemably evil, advocating the use of violence and an unwillingness to compromise, blindly following authoritarian leaders or scriptures (which is necessary since debate, evidence, and reason are not allowed), and promising utopia or heaven. Of course, all of this is a continuum. (There is much more that could be said here; I'm working on a post on the subject.)

The reason why some autocratic rulers were no malevolent such as Marcus Aurelius, Atatürk, and others is because they followed no ideology. [...] Stoicism was a physicalist philosophy, a realist belief system.

Sounds like an ideology to me but ok. :)

 

Yes, I think investigative journalism (and especially Kelsey Piper's work on Altman & OpenAI) is immensely valuable. 

In general, I've become more pessimistic about technology-centric/"galaxy-brained" interventions in this area and more optimistic about "down-to-earth" interventions like, for example, investigative journalism, encouraging whistleblowing (e.g., setting up prizes or funding legal costs), or perhaps psychoeducation/workshops on how to detect malevolent traits and what to do when this happens (which requires, in part, courage, the ability to endure social conflict, and social savviness, arguably not something that most EAs excel at).

I'm excited about work in this area. 

Somewhat related may also be this recent paper by Costello and colleagues, who found that engaging in a dialogue with GPT-4 durably decreased conspiracy beliefs (HT Lucius).

Perhaps social scientists can help with research on how to best design LLMs to improve people's epistemics, or at least to make sure that interacting with LLMs doesn't worsen people's epistemics.

Great comment. 

Will says that, usually, most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud".

I agree with your analysis, but I think Will also sets up a false dichotomy. One's inability to conceptualize or realize that one's actions are wrong is itself a sign of being a bad apple. To simplify a bit, at one end of the "high integrity to really bad" continuum, you have morally scrupulous people who constantly wonder whether their actions are wrong. At the other end, you have pathological narcissists whose self-image/internal monologue is so out of whack with reality that they cannot even conceive of themselves doing anything wrong. That doesn't make them great people. If anything, it makes them more scary.

Generally, the internal monologue of the most dangerous types of terrible people (think Hitler, Stalin, Mao, etc.) doesn't go like "I'm so evil and just love to hurt everyone, hahahaha". My best guess is that, in most cases, it goes more like "I'm the messiah, I'm so great, and I'm the only one who can save the world. Everyone who disagrees with me is stupid and/or evil, and I have every right to get rid of them." [1]

Of course, there are people whose internal monologues are more straightforwardly evil/selfish (though even here lots of self-delusion is probably going on) but they usually end up being serial killers or the like, not running countries. 

Also, later, when Will talks about bad apples, he mentions that “typical cases of fraud [come] from people who are very successful, actually very well admired”, which again suggests that "bad apples" are not very successful or not very well admired. Well, again, many terrible people were extremely successful and admired. Like, you know, Hitler, Stalin, Mao, etc.

Nor am I implying that improved governance is not a part of the solution.

Yep, I agree. In fact, the whole character vs. governance thing seems like another false dichotomy to me. You want to have good governance structures but the people in relevant positions of influence should also know a little bit about how to evaluate character. 

  1. ^

    In general, bad character is compatible with genuine moral convictions. Hitler, for example, was vegetarian for moral reasons and “used vivid and gruesome descriptions of animal suffering and slaughter at the dinner table to try to dissuade his colleagues from eating meat”. (Fraudster/bad apple vs. person with genuine convictions is another false dichotomy that people keep setting up.)
