The AI Revolution Is Already Losing Steam

cawacko

I know we have some tech folks and investors on here. Any of you dabble in the AI space? (San Francisco is the epicenter of AI growth and counting on it to lead our CRE recovery.) Being honest, I'm not well versed on it, so I don't know if this is accurate or if we'll look back and laugh at this claim. But it's a huge potential driver for our economy, and our gov't is having to wrestle with how to promote its growth while at the same time regulating it.



The AI Revolution Is Already Losing Steam

The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant


Nvidia reported eye-popping revenue last week. Elon Musk just said human-level artificial intelligence is coming next year. Big tech can’t seem to buy enough AI-powering chips. It sure seems like the AI hype train is just leaving the station, and we should all hop aboard.

But significant disappointment may be on the horizon, both in terms of what AI can do, and the returns it will generate for investors.


The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI. New, competing AI models are popping up constantly, but it takes a long time for them to have a meaningful impact on how most people actually work.

These factors raise questions about whether AI could become commoditized, about its potential to produce revenue and especially profits, and whether a new economy is actually being born. They also suggest that spending on AI is probably getting ahead of itself in a way we last saw during the fiber-optic boom of the late 1990s—a boom that led to some of the biggest crashes of the first dot-com bubble.

The pace of improvement in AIs is slowing

Most of the measurable and qualitative improvements in today’s large language model AIs like OpenAI’s ChatGPT and Google’s Gemini—including their talents for writing and analysis—come down to shoving ever more data into them.

These models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.



To train next generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models, says Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016.

AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.

Further evidence of the slowdown in improvement of AIs can be found in research showing that the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.

AI could become a commodity

A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance. At the same time, companies look for efficiencies, and the winner shifts from whoever is in the lead to whoever can cut costs to the bone. The last major technology this happened with was electric vehicles, and now it appears to be happening to AI.

The commoditization of AI is one reason that Anshu Sharma, chief executive of data and AI-privacy startup Skyflow, and a former vice president at business-software giant Salesforce, thinks that the future for AI startups—like OpenAI and Anthropic—could be dim. While he’s optimistic that big companies like Microsoft and Google will be able to entice enough users to make their AI investments worthwhile, doing so will require spending vast amounts of money over a long period of time, leaving even the best-funded AI startups—with their comparatively paltry warchests—unable to compete.

This is already happening. Some AI startups have run into turmoil, including Inflection AI—its co-founder and other employees decamped for Microsoft in March. The CEO of Stability AI, which built the popular image-generation AI tool Stable Diffusion, left abruptly in March. Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.

Today’s AIs remain ruinously expensive to run

An oft-cited figure in arguments that we’re in an AI bubble is a calculation by Silicon Valley venture-capital firm Sequoia that the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue.

That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs.

Numbers are almost impossible to come by, and estimates vary widely, but the bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it. That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result. For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins.




In their most recent earnings reports, Google, Microsoft and others said their revenue from cloud services went up, which they attributed in part to those services powering other companies’ AIs. But sustaining that revenue depends on other companies and startups getting enough value out of AI to justify continuing to fork over billions of dollars to train and run those systems. That brings us to the question of adoption.

Narrow use cases, slow adoption

A recent survey conducted by Microsoft and LinkedIn found that three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.

This suggests there is a massive gulf between the number of workers who are just playing with AI and the subset who rely on it and pay for it. Microsoft’s AI Copilot, for example, costs $30 a month.

OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025.

That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation. The company’s recent demo of its voice-powered features led to a 22% one-day jump in mobile subscriptions, according to analytics firm Appfigures. This shows the company excels at generating interest and attention, but it’s unclear how many of those users will stick around.

Evidence suggests AI isn’t nearly the productivity booster it has been touted as, says Peter Cappelli, a professor of management at the University of Pennsylvania’s Wharton School. While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. He compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.



 