Comments (13)



I suspect the primary reasons you want to break up DeepMind from Google are to:

  1. Increase its autonomy, reducing pressure from Google to race
  2. Reduce DeepMind's access to capital and compute, reducing its competitiveness

Perhaps that goes without saying, but I think it's worth explicitly mentioning. In a world without AI risk, I don't believe you would be citing various consumer harms to argue for a break up.

The traditional argument for breaking up companies and preventing mergers is to reduce the company's market power, increasing consumer surplus. In this case, the implicit reason for breaking up DeepMind is to decrease its competitiveness, thus reducing consumer surplus.

I think it's perfectly fine to argue for this, I just really want us to be explicit about it.

Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).

I read it as aiming to reduce AI risk by increasing the cost of scaling.

I also don't see how breaking DeepMind off from Google would increase competitive dynamics. Google, Microsoft, Amazon, and other big tech partners are likely to push their subsidiaries to race even faster, since they are likely to be much less conscientious about AI risk than the companies building AI. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.

Less than a year ago DeepMind and Google Brain were two separate organisations (both making cutting-edge contributions to AI development). My guess is that if you broke DeepMind off from Google, you would fairly quickly get competition between DeepMind and Google Brain again (and, more broadly, make slowing things down a more multilateral problem).

But more concretely, anti-trust action makes all kinds of coordination harder. After an anti-trust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since coordinating itself might invite further anti-trust action.

AI labs tend to partner with Big Tech for money, data, compute, scale, etc. (e.g. Google/DeepMind, Microsoft/OpenAI, and Amazon/Anthropic). Presumably to compete better? If they're already competing hard now, then it seems unlikely that they'll coordinate much on slowing down in the future.

Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms or preventing mergers slows an industry down in the short run but speeds it up in the long run by increasing competition. But if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow industries down even in the long run.

I also think that it's far from given that the option which would minimise consumer harm from monopoly would also minimise pressure to race.

An AI research institute spun off by the regulator under pressure to generate business models to stay viable is plausibly a lot more inclined to 'race' than an AI research institute swimming in ad money, which can earn its keep by incrementally improving search, ads, and phone UX while generating good PR with its more abstract research along the way. Monopolies are often complacent about exploiting their research findings, and Google's corporate culture has historically not been particularly compatible with launching the sort of military or enterprise tooling that represents the most obviously risky use of 'AI'.

There are of course arguments the other way (Google has a lot more money and data than putative spinouts), but people need to predict what a divested DeepMind would do before concluding that breaking up Google is a safety win.

I only said we should look into this more and review the pros and cons from different angles (e.g. not only consumer harms). As you say, the standard argument is that breaking up monopolists like Google increases consumer surplus, and this might also apply here.

But I'm not sure to what extent, in the short and long run, this increases or decreases AI risks and/or race dynamics, whether within the West or between countries. This approach might be more elegant than Pausing AI, which definitely reduces consumer surplus.

Since this is tagged "Existential risk": What does this have to do with existential risk? Or is it not supposed to be about existential risk, not even indirectly? As far as I can tell, the article does not talk about existential risk. I could make my own guesses and association of this topic with existential risk, but I would prefer if this is spelled out.

Do you have a call to action here? Are you expecting that someone reading this on the forum has any ability to make it more (or less) likely to happen?

AI policy folks and research economists could engage with the arguments and the cited literature.

Grassroots folks like Pause AI sympathizers could put pressure on politicians and regulators to investigate this more (some claims, like the tax avoidance point, seem most robustly correct and good).

I broadly think it's cool to be raising novel (to me) possibilities like this, and I think you've done a good job of illustrating that it's not obviously out of line with existing practice. Thanks for writing it!

Minor formatting / typographical things: I think the image is misplaced from where the text refers to it. Also, weirdly, a lot of the single quotation marks in the text are duplicated?

At least from an AI risk perspective, it's not at all clear to me that this would improve things as it would lead to a further dispersion of this knowledge outward.

Executive summary: Regulators should review Google's acquisition of DeepMind in 2014 and their recent internal merger in 2023, and consider breaking up Google DeepMind due to concerns about market dominance, tax avoidance, public interest, consumer harm, and national security.

Key points:

  1. Google's acquisition of DeepMind in 2014 avoided regulatory scrutiny due to low revenues, despite its high value.
  2. The 2023 internal merger of DeepMind and Google Brain reduces competition and limits collaboration alternatives.
  3. Regulators can scrutinize the mergers on grounds of market dominance, tax avoidance, public interest concerns, consumer harm, and national security.
  4. Breaking up Google DeepMind raises questions about the UK's future in AI and its competition with China for AI supremacy.
  5. Historical cases like Bell Labs, Intel, and Microsoft provide insights into the potential consequences of breaking up Google DeepMind.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
