"The AI bubble is reaching a tipping point", says Sequoia Capital.

AI companies have paid billions of dollars for top engineers, data centers, and more. Meanwhile, they are running out of 'free' data to scrape online and facing lawsuits over the data they did scrape. Finally, the novelty of chatbots and image generators is wearing off for users, and fierce competition is leading to some product commoditisation.

No major AI lab is making a profit yet (though their GPU suppliers do profit). That's not to say they won't make money eventually from automation.

It looks somewhat like the run-up to the dot-com bubble. Companies then too were awash in investments (propped up by low interest rates), but most lacked a viable business strategy. Once the bubble burst, non-viable internet companies got filtered out.

Yet today, companies like Google and Microsoft use the internet to dominate the US economy. Their core businesses became cash cows, now allowing CEOs to throw money at AI for as long as a vote-weighted majority of shareholders buys the growth story. That marks one difference from the dot-com bubble. Anyway, here's the scenario:


How would your plans change if we saw an industry-wide crash? 

Let's say there is a brief window where:

  • Investments drop massively (e.g., because the s-curve of innovation flattened for generative AI, and further development cycles were needed to automate at a profit).
  • The public turns sour on generative AI (e.g., because the fun factor wore off, and harms like disinformation, job insecurity, and pollution came to the surface).
  • Politicians are no longer interested in hearing the stories of AI tech CEOs and their lobbyists (e.g., because political campaigns are no longer getting backed by the AI crowd).

Let's say it's the one big crash that comes before major AI labs can break even for their parent companies (e.g., before mass manufacturing lowers hardware costs, real-time surveillance resolves the data bottleneck, and multi-domain-navigating robotics resolves inefficient learning).

Would you attempt any actions you would not otherwise have attempted? 
 

To me, it would make sense to use the lull in tech lobbying plus the popular dissent to lock in the general regulatory agenda that EA has been pushing for (e.g., controls on large models, pre-harm enforcement, international treaties).

What are you thinking of in terms of pre-harm enforcement?

I’m thinking of advising premarket approval: a requirement to scope model designs around prespecified uses, with independent auditors vetting the safety tests and assessments.


To clarify for future reference, I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, and that both will persist for at least three months.

I.e., I think we are heading for an AI winter.
It is not sustainable for the industry to invest $600+ billion per year in infrastructure and teams in return for relatively little revenue and no resulting profit for the major AI labs.

At the same time, I think that within the next 20 years tech companies could both develop robotics that self-navigate multiple domains and automate major sectors of physical work. That would put society on a path to causing total extinction of current life on Earth. We should do everything we can to prevent it.

To clarify for future reference, I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc, and that both will continue to be for at least three months.

Update: I now think this is 90%+ likely to happen (from the original prediction date).

Update: reverting my forecast back to an 80% likelihood, for these reasons.

Igor Krawzcuk, an AI PhD researcher, just shared more specific predictions:

“I agree with ed that the next months are critical, and that the biggest players need to deliver. I think it will need to be plausible progress towards reasoning, as in planning, as in the type of stuff Prolog, SAT/SMT solvers etc. do.

I'm 80% certain that this literally can't be done efficiently with current LLM/RL techniques (last I looked at neural comb-opt vs solvers, it was _bad_), the only hope being the kitchen sink of scale, foundation models, solvers _and_ RL

If OpenAI/Anthropic/DeepMind can't deliver on promises of reasoning and planning (Q*, Strawberry, AlphaCode/AlphaProof etc.) in the coming months, or if they try to polish more turds into gold (e.g., coming out with GPT-Reasoner, but only for specific business domains) over the next year, then I would be surprised to see the investments last to make it happen in this AI summer.”
https://x.com/TheGermanPole/status/1826179777452994657
