AI is hitting a wall just as the hype around it reaches the stratosphere

Analysis by Allison Morrow, CNN

New York (CNN) — It’s been two years since OpenAI bestowed ChatGPT upon the world and set off a kind of gold rush for artificial intelligence. Billions of dollars are pouring into AI-focused and AI-adjacent companies on the promise that the technology is going to accelerate (or possibly obliterate) every aspect of modern life.

The narrative from Silicon Valley is that the AI train has left the station and any smart investor had better hop on before these products become “superintelligent” and start solving all the world’s problems. (To the true believers, that’s not hyperbole, and those expectations have helped turn once-niche companies like chipmaker Nvidia, which reports earnings on Wednesday, into some of the most valuable assets on the planet.)

Of course, the key to that narrative is the promise that large language models (LLMs) like ChatGPT keep improving at an exponential rate.

Some AI skeptics have warned for years about “scaling laws” — the idea that you can continually improve a model’s output just by throwing more data and computing power at it. These aren’t so much laws as they are educated guesses, though. And the truth is, even the scientists who build LLMs don’t fully understand how they work.
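For readers curious what these "scaling laws" actually assert, here is a minimal sketch in LaTeX of the power-law form widely cited in AI research (this follows the formulation in the Hoffmann et al. "Chinchilla" paper; the constants are fitted empirically, and the annotations are illustrative rather than drawn from this article):

    % Empirical scaling-law sketch: test loss L falls as a power law in
    % model parameters N and training tokens D (Hoffmann et al., 2022 form).
    % E, A, B, \alpha, \beta are fitted constants, not derived laws.
    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Because the last two terms shrink as N and D grow, the loss only approaches the irreducible floor E, with diminishing returns. That is why "just add more data and compute" can look like a law right up until it stops paying off.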

Now, some of the leading language models appear to be hitting a wall, according to at least three reports last week:

  • Tech news outlet The Information cited unnamed OpenAI employees who said that some of the company’s researchers believe its next flagship AI model, Orion, “isn’t reliably better than its predecessor in handling certain tasks.”
  • Bloomberg reported, also citing two unnamed sources, that Orion “fell short” and is “so far not considered to be as big a step up from OpenAI’s existing models as GPT-4 was from GPT-3.5.”
  • Reuters reported that “researchers at major AI labs have been running into delays and disappointing outcomes in the race to release a large language model that outperforms OpenAI’s GPT-4 model, which is nearly two years old, according to three sources familiar with private matters.”
  • Reuters also quoted Ilya Sutskever, an OpenAI co-founder who left the company in May. “The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said. “Scaling the right thing matters more now than ever.”
  • Even Marc Andreessen, the venture capitalist who once penned an essay titled “Why AI Will Save the World,” recently said on a podcast that the available models are “kind of hitting the same ceiling on capabilities.”

OpenAI CEO Sam Altman appeared to push back on the reports, posting on X last week that “there is no wall.”

Whether it’s a wall or a mountain or a plateau or whatever you want to call it, even AI bulls say it’s possible we’ve reached a turning point just based on the products that have been released.

“We haven’t seen a breakthrough model in a while,” Gil Luria, managing director at investment group D.A. Davidson, told me. “Part of it is that we’ve exhausted all the human data, and so just throwing more compute at the same data may not yield better results.”

This is a slightly technical problem, but it’s important to understanding the models’ limitations: To make these AI machines sound human, you have to train them on human data — essentially every piece of text or audio on the internet. Once a model has ingested all of that, there’s nothing “real” left to train it on.

To be clear, a plateau isn’t necessarily a death knell for the AI business. But it’s certainly not great for optics when Wall Street is already on edge over when (or whether) investors can expect to see these shiny, very expensive products produce some actual revenue.

Nvidia, the go-to chipmaker that’s valued at nearly $3.5 trillion, and other major AI players likely won’t have to worry about the scaling issue right away.

“In terms of demand for Nvidia for last quarter and this quarter, there’s no doubt that that demand was more than they could supply,” Luria said.

But if we have indeed hit a scaling wall, “it may mean that the mega-cap technology companies have over-invested” and it’s possible that they could scale back in the near future.

That’s the AI optimist/pragmatist view.

For a less rosy outlook, I turned to Gary Marcus, NYU professor emeritus and outspoken critic of AI hype.

“The economics are likely to be grim,” Marcus wrote earlier this month on his Substack. “LLMs will not disappear, even if improvements diminish, but the economics will likely never make sense… When everyone realizes this, the financial bubble may burst quickly; even Nvidia might take a hit, when people realize the extent to which its valuation was based on a false premise.”
