
"Die Zeit", a major german newspaper that features long-format articles, published a piece on Effective Altruism and Open Philanthropy on 14th of March 2024. I think that it is of great interest to people involved with catastrophic risks. The article is very critical of current efforts.

The text, originally written in German by Nicolas Killian, has been translated and slightly edited by me. I provide a thorough analysis of the text in the second section.

You can find the original text here

The translated text

Facebook co-founder Dustin Moskovitz fears that AI will destroy humanity. Now he is investing millions to stop it. What can he do?

A year and a half ago, the first $5 million hit his account. Dan Hendrycks could get to work: against the end of the world. He hired researchers, set up consulting services for politicians and entrepreneurs, and since then has been thinking full-time about averting his personal horror: that an artificial superintelligence could one day get out of control and destroy humanity.

Hendrycks is a computer scientist and only 28 years old. Thanks to a generous endowment, he now runs his own research institute, the Center for AI Safety, which employs about a dozen people in San Francisco. Hendrycks previously studied and researched at the University of California, Berkeley, as an expert on how to teach machines morality.

Even in college, Hendrycks talked about how AI systems could develop bioweapons against humans, how robots gone wild could turn on their creators, and how war machines could attack their masters. 
"They thought I was crazy" says the director of the institute. But the man who provides all the money doesn't think so: Dustin Moskovitz, 39. 

A man who co-founded Facebook with Mark Zuckerberg and is now a billionaire entrepreneur and philanthropist living in the San Francisco area. Moskovitz and Hendrycks have never met, only exchanged a few chat messages. Apparently, that was enough for a few million dollars in funding.
It is well known in the tech community that Moskovitz likes to stay in the background and let his money do the talking. He rarely gives interviews - even requests from Die Zeit went unanswered.

Via his foundation, he has already spent over $350 million.

But you can learn a lot about Moskovitz anyway. And you have to, if you want to understand a crucial development in Silicon Valley. Where the majority of the world's most important AI companies are based, a powerful anti-AI movement is also forming. 
The co-founder of Facebook is its banker. Through his foundation, called Open Philanthropy, he has already spent more than $350 million and created a global network of research institutes, think tanks and advocacy groups. 
They are called Future of Life, Future of Humanity, or Existential Risk Initiative. They are often affiliated with elite universities. 

According to Forbes, Dustin Moskovitz is worth more than $17 billion. However, in his mid-20s, he declared that he did not want to keep the money: He was going to give it away, not just like that, but with maximum charitable impact. 
Moskovitz had become a follower of the "Effective Altruists" (EA), a then-nascent movement that quickly found many adherents in Silicon Valley. It is a technocratic view of money and morality taken to the extreme. Effective altruists believe that it is possible to measure the charitable impact of every dollar donated. And that you should be guided by it.

In 2011, Moskovitz began donating to malaria nets and animal welfare. But a few years later, like many people in the EA movement, he changed his mind. They no longer saw the most effective use of their money in supporting individual relief efforts, but rather in averting a supposedly imminent end of the world. After all, what is the point of fighting epidemics if killer robots will soon wipe out humanity?

As a result, two opposing movements have been emanating from Silicon Valley since around 2015. On the one hand, AI companies are booming there. In San Francisco, for example, the company OpenAI was founded with the goal of creating a human-like AI and has since come a long way. 

With its ChatGPT program, you can now converse almost as naturally as with a human. And the recently launched Sora system can create mini-movies from just a few words of instruction. All the major Internet companies are now working on similarly powerful AI systems.

But at the same time, the effective altruists began their counter-movement. Berkeley computer scientist Dan Hendrycks, for example, was co-funded by Moskovitz's foundation Open Philanthropy during his doctoral studies. And at his new institute, he is now attracting young talent to the cause of stopping the end of the world: with great research opportunities and financial support. 
Every student who takes an online course at Hendrycks's institute, in addition to their own studies, receives $500 from Moskovitz and the institute's other donors. Those who choose to write relevant research papers receive up to $2,000 per semester. 
Over time, this has had an effect. When OpenAI unveiled the most powerful version of its chat program to date, GPT-4, in March 2023, hundreds of people from the AI community signed a warning about the dangers of the intelligence technology that had been unleashed. Hendrycks had drafted the template: "The risk of being wiped out by AI should become a global priority." Media worldwide reported on it. EU Commission President Ursula von der Leyen quoted from the appeal before the EU Parliament.
 

But criticism of the effective altruists is also growing louder, especially in the academic world. "These doomsday scenarios are absurd," wrote Yann LeCun, Facebook's leading AI researcher. "It's become like a cult," commented Harvard psychologist Steven Pinker. AI researchers sometimes speak out because they fear that the warnings of the EA crowd will lead to overly strict rules. This could, for example, hinder the development of new drugs. Jessica Heesen, a technology philosopher at the University of Tübingen, says: "Politicians should focus on today's risks and problems and not listen to people who have read too many science fiction novels." 

But talking to politicians and exerting influence is what the EA movement does very professionally. Or should we say effectively? Those who take courses or otherwise make a name for themselves among the effective altruists also get tips on how to make a career as a policy advisor and advice on job openings in the US Senate, the United Nations, or the EU Commission. 

In Washington, it is easy to see how determined the Moskovitz organization is. In 2021, Open Philanthropy launched a program for fellows to be placed directly in the offices of key senators and key agencies in Washington. Up to $120,000 a year in salary, paid by the Moskovitz Foundation.

The job? To help draft legislation and other regulations on technology issues, according to the Open Philanthropy website.

In Washington, it has long been part of the lobbying business to place like-minded or trusted individuals directly in the engine rooms of power. Open Philanthropy is also quite open about this. It supports lobbying because no one else cares about these issues, or because the field would otherwise be left to representatives of large corporations. And there is a lot at stake in AI right now. From Washington to London to Brussels, legislation to regulate the technology is being debated. In Brussels, the new AI Act is due to become binding in the next few days.

However, it would be all too easy to say that the AI makers are on one side of this debate and that the opponents of the technology, with their billionaire apocalypticist Moskovitz, are on the other. Even Moskovitz himself is apparently torn. "There are so many cool things I can do [with AI]," he gushed in a podcast last year. 
And some of the people involved with him in the EA movement come from AI companies themselves. Some don't really trust their own creations. Others believe that as technology experts, they should also be involved in regulation.

The story of industry leader OpenAI shows how complicated the issue is. Its founders - including a man named Sam Altman and Tesla CEO Elon Musk - initially wanted to work for the public good. The goal was to avert bleak future scenarios. Now, however, a commercial offshoot of OpenAI is working closely with Microsoft and making billions. Elon Musk, who left OpenAI in 2018, therefore sued the company a few days ago.

One puzzling detail: the Open Philanthropy Foundation itself donated $30 million to OpenAI in 2017. This apparently bought it influence. Since then, people from the movement have influenced company policy on several occasions. For example, on safety. In November 2023, OpenAI CEO Altman was temporarily fired - by members of the board who are said to be close to the effective altruists. 

However, the movement's greatest hope is to systematically influence legislation. As the EU's AI law is due to be voted on this week, it is becoming clear that the Moskovitz people have also been involved. The main features of the law were agreed in Brussels last December, and bans on entire areas of AI are envisaged - for example, AI systems that categorize people according to their political, religious or sexual inclinations. The EU is creating the most far-reaching AI law in the world to date.

The effective altruists in Brussels have invested heavily in advance. For several years, the EU has had a scholarship program similar to the one in the USA. Young people are supposed to work for think tanks and raise awareness among politicians in Brussels and other EU capitals. The name of the organization sounds like a promise of salvation: Training for Good. The money comes by way of Open Philanthropy. Scholarship holders receive about 2,000 euros a month.

Their warnings perfectly fit stereotypes from science-fiction.

Daniel Leufer, who works in Brussels as an AI expert for the human rights organization AccessNow, has been watching the EA people for some time. In 2021, when the negotiations for the AI law started, some of their organizations came to Brussels, he says. One example is the Future of Life Institute, which had already received support in the US from tech figures such as Elon Musk, Skype founder Jaan Tallinn and crypto billionaire Vitalik Buterin. The Moskovitz foundation granted the institute $2 million.

FLI quickly established an excellent network in Brussels. Its lobbyists attended hearings in the EU Parliament, met with MEPs and organized workshops. They have been involved in influential working groups that set technical standards for AI products in the EU. They met with EU Competition Commissioner Margrethe Vestager, as documented by a picture on LinkedIn.

They had an easy time with the media and politicians. Their warnings fit perfectly with the stereotypes of science fiction literature. "Anyone can easily imagine it," says Leufer. "Unfortunately, these organizations managed to steer the debate on AI in the wrong direction". He would have preferred to deal with more timely issues than the great science fiction apocalypse. The dangers of AI-enabled government surveillance, for example, facial recognition in law enforcement, or AI-generated fake news in election campaigns. Other Brussels insiders, such as Kai Zenner, digital policy advisor to MEP Axel Voss, also say that the influence of the effective altruists has led to "other important issues being neglected". 

If you ask Dan Hendrycks and the Open Philanthropy Foundation, you won't get much disagreement. We need to address both, they say: today's problems and potentially catastrophic risks. The foundation's spokesperson adds that Open Philanthropy also supports organizations that do not share its concerns about disasters.

But the same cannot be said for the Future of Life Institute in Brussels. A friendly young man appears in the video tile. Mark Brakel, chief lobbyist and former Dutch diplomat, immediately starts talking about an AI arms race. About how easy it will be to produce bioweapons with AI in the future. And he suggests a thought experiment.

What would happen if an artificial superintelligence were given the task of stopping climate change? It might decide that it needs to destroy humanity to solve the problem. Rather unlikely, he says, laughing, but still something to plan for. 

Analysis

Overall, I find the article to be well-written and engaging. Nicolas Killian has obviously put some time into researching how Open Philanthropy is connected with key grantees and think tanks. Although I assume the article is largely factually correct, it is seriously misleading in some respects.

I believe the article leaves readers with factually wrong beliefs about effective altruism and AI safety advocates. I discuss two causes of this later on, but first analyze concrete statements made throughout the text: 

The start: "Moskovitz - the crazy apocalypticist"

"Even in college, Hendrycks talked about how AI systems could develop bioweapons against humans, how robots gone wild could turn on their creators, and how war machines could attack their masters."

I think that the statement above mixes three problems that are not strongly interrelated: AGI, autonomous weapons, and biosecurity. It is easy to mix them up accidentally, particularly as the FLI works on regulating both AI and autonomous weapons. To the best of my knowledge, the notion of AGI acting via killer robots has been deemed laughable by most serious researchers. Clearly, Killian paints the cause as dystopian with words such as "personal horror", "apocalypse", and "destroy humanity". 

This notion is strengthened by a reportedly low threshold for funding, which signals an extraordinary willingness to pay for these efforts: "Moskovitz and Hendrycks have never met, only exchanged a few chat messages. Apparently, that was enough for a few million dollars in funding."

The author furthermore displays skepticism regarding ideas such as utilitarianism and the ability to measure the effects of interventions: "Effective altruists believe that it is possible to measure the charitable impact of every dollar donated. And that you should be guided by it."

Many Die Zeit readers will perhaps find the idea of measuring the charitable impact of every dollar quite alien. However, if there were no measurable differences between the expected values of donations to different charities, there would be no (expected) difference between the charities at all. This is a strange proposition at best, and a misleading one at worst.
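To make the underlying arithmetic concrete, here is a minimal sketch of the kind of back-of-the-envelope expected-value comparison effective altruists have in mind. The two charities and all numbers are purely hypothetical, chosen only for illustration:

```python
# Hypothetical back-of-the-envelope comparison of two charities.
# All numbers are invented for illustration only.

def expected_impact_per_dollar(cost_per_intervention: float,
                               p_success: float,
                               benefit_if_success: float) -> float:
    """Expected benefit (in arbitrary 'impact units') bought by one dollar."""
    return p_success * benefit_if_success / cost_per_intervention

# Charity A: cheap intervention, modest benefit per success.
charity_a = expected_impact_per_dollar(cost_per_intervention=5.0,
                                       p_success=0.6,
                                       benefit_if_success=1.0)

# Charity B: expensive intervention, larger benefit per success.
charity_b = expected_impact_per_dollar(cost_per_intervention=200.0,
                                       p_success=0.9,
                                       benefit_if_success=20.0)

print(f"Charity A: {charity_a:.3f} impact units per dollar")
print(f"Charity B: {charity_b:.3f} impact units per dollar")
# If such estimates were impossible in principle, the two numbers could never
# differ in expectation - which is the strange proposition the article implies.
```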

An outright wrong part of the article is: 
 " [...] averting a supposedly imminent end of the world. After all, what is the point of fighting epidemics if killer robots will soon wipe out humanity?"
Clearly, no sane person would describe the "end of the world" as imminent: most scholarly estimates of aggregate catastrophic risk are well below 1% per annum. Furthermore, no sane AI researcher would suggest that a major threat comes in the form of killer robots - this is sci-fi bullshit and clearly poorly researched. 

The main part: "Moskovitz - the crazy influencer"

... mainly details funding decisions. They are portrayed in such a way as to make Dustin Moskovitz look like the centerpiece of coordination in an anti-doomsday cult. 

"However, the movement's greatest hope is to systematically influence legislation. As the EU's AI law is due to be voted on this week, it is becoming clear that the Moskovitz people have also been involved, [...]"

I very much doubt that anyone is a centerpiece of coordination in as messy an affair as AI and technology policy. The article mentions connections of Open Philanthropy to Elon Musk and Ursula von der Leyen, which (however distant) clearly connect to the average reader's background knowledge, reinforcing the feeling that Open Philanthropy may be a force that moves global policy from the back seat. 

The ending 

... mainly repeats the beginning. It thus reinforces the conclusions the reader has drawn throughout the first two sections and the skepticism that has been built up. This is topped off by Mark Brakel apparently going bonkers on dystopian nonsense (from the reader's perspective![1]):

"Mark Brakel, chief lobbyist and former Dutch diplomat, immediately starts talking about an AI arms race." (emphasis added) 

Why is this bullshit?

1. Financial incentives 

Clearly, any journalist is under immense financial pressure to publish pieces that get read. This article is more clickbait, more drama, more gossip, and less research than it should be, because that is what hooks readers. 

2. Lack of understanding

The author did research the funding networks well. But he did not understand anything about the reasoning behind these decisions. This is plainly obvious throughout the text. 

  • The author does not think that the effects of interventions can be measured. But entire fields of academic study, such as development economics and causal inference, are devoted to exactly this (and heavily inform Open Philanthropy's decisions); see the sketch after this list.
  • The author clearly has not sought out research on catastrophic risk. While catastrophic risk superficially fits narratives from science fiction, rigorous study (as detailed in, e.g., Superintelligence) has shown that the issue is far more complicated and nuanced. By suggesting analogies to science fiction works, the author signals his lack of basic knowledge of the topic he writes about. 
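As referenced in the first bullet above, here is a minimal, purely illustrative sketch of how intervention effects are typically measured in such fields: a difference in means from a randomized trial, using simulated (not real) data.

```python
# Minimal sketch of measuring an intervention's effect in a randomized trial.
# The data are simulated; real analyses (as in development economics) add
# covariates, clustering, and pre-registered outcome definitions.
import random
import statistics

random.seed(0)

# Simulate outcomes: the control group has baseline outcomes,
# the treatment group gets a (hypothetical) +0.5 average improvement.
control = [random.gauss(mu=2.0, sigma=1.0) for _ in range(500)]
treated = [random.gauss(mu=2.5, sigma=1.0) for _ in range(500)]

# The average treatment effect is estimated as the difference in means.
ate = statistics.mean(treated) - statistics.mean(control)

# A rough standard error for the difference of two independent means.
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated treatment effect: {ate:.2f} (±{1.96 * se:.2f}, 95% CI)")
```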

What can be learned?

I leave this question open for you. Tell us in the comments what you think!

  1. ^

    I happen to have met Mark Brakel briefly, and he seemed like an experienced guy to me. I think I can understand what he was trying to say from the context the article gives - but I guess most readers cannot.

Comments

Executive summary: A major German newspaper article criticizes the Effective Altruism movement and Open Philanthropy's efforts to mitigate catastrophic AI risks, portraying them as misguided and overly influential, but the article itself contains misleading claims and lacks nuance.

Key points:

  1. The article profiles Dustin Moskovitz's funding of AI safety research and policy efforts through Open Philanthropy, portraying it as an influential "anti-AI movement".
  2. It mixes distinct issues like AGI, autonomous weapons, and biosecurity, and uses sensationalist language to paint AI risk concerns as "dystopian" and "crazy".
  3. The article is skeptical of the core tenets of Effective Altruism, like measuring charitable impact, without substantive engagement.
  4. It misleadingly portrays AI risk as an "imminent" threat and repeats the misconception that "killer robots" are the main concern.
  5. The article lacks understanding of the relevant research and academic fields informing Open Philanthropy's funding decisions.
  6. Incentives for clicks and the author's lack of subject-matter expertise likely contributed to the article's shortcomings.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
