AI’s double bubble trouble
Financial Times, 16 October 2025
In his Financial Times column on the AI boom, John Thornhill quotes our live interview with physicist and entrepreneur Stephen Wolfram.
Stock market bears are on the prowl again, growling about the dangers of a tech bubble. Ursine analysts at the IMF, the Bank of England, Goldman Sachs, JPMorgan Chase and Citi are all warning that valuations are surging to levels not seen since the dotcom crash 25 years ago. The implicit message is that AI is overhyped.
The reaction of bullish west coast tech bros has been to shrug and carry on investing, drawing a distinction between a “good” industrial investment bubble and a “bad” speculative financial bubble. But the strong likelihood is that we are experiencing both.
One of the most articulate advocates of the good bubble theory is Google’s former boss Eric Schmidt. “Bubbles are great. May the bubbles continue,” he told me at the Sifted summit last week. Their historical function has been to redirect masses of capital into frontier technology and infrastructure, which is good for the world. But the latest technological transformation comes with a novel twist: AI will one day far exceed the cognitive capabilities of humans. “I think it’s underhyped, not overhyped. And I look forward to being proven correct in five or ten years.”
Valuations might appear overblown. But what, Schmidt asked as a thought experiment, would happen if a tech company attained artificial general intelligence (AGI) and then superintelligence? Such technology would exceed the sum of human knowledge and then solve the world’s hardest problems. “What’s the value of that company? It’s a very, very large number. Much larger than any other company in history, forever, probably.”
For sure, some companies over-invest in infrastructure in bubbly times and go bust, as was the case with Global Crossing, which built out telecoms infrastructure in the 1990s, and the Channel Tunnel, which needed to be recapitalised twice. But Schmidt dismissed the idea that the same would happen to financially robust AI companies today. “The people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?” he said. “These people are not stupid.”
Even if one accepts this argument, it is hard to ignore the financial bubble frothing up on top of the industrial investment boom. Is the privately owned OpenAI, on course to burn $8.5bn of cash this year, worth $500bn (unless it does attain AGI)? Does it make sense that the data and AI company Palantir is trading on a forward price/earnings multiple of 225 (the highest valuation of any S&P 500 company)? At best, such valuations embrace defiantly heroic assumptions about the long-term earning power of these companies. At worst, they resemble expensive lottery tickets on the future.
Short-selling hedge funds are already on crash watch, but for the moment they are being badly burned given the strength of the market. Some fund managers are closely tracking bitcoin miners, which are rapidly pivoting into AI computing services. They suggest that those, like Cipher Mining and Terawulf, that are largely financing their expansion through debt and convertible notes may be vulnerable to any downswings in demand. The collapse of auto supplier First Brands Group is already causing jitters in private credit markets.
Even some in the tech industry — including Bret Taylor, OpenAI’s chair — draw parallels with the dotcom bubble and think valuations may have run ahead of reality. Taylor recently told The Verge that AI would transform the economy and create huge value in the future. But he added: “I think we’re also in a bubble, and a lot of people will lose a lot of money.”
When asked this week whether we were in an AI bubble, the scientist and entrepreneur Stephen Wolfram said: “The answer is obviously yes.” And what did he make of all the talk about AGI? “It’s a meaningless thing,” he said, during an onstage interview at the London Institute for Mathematical Sciences. Naming a technology after an ambition rather than a reality struck another participant as odd, rather like describing economics as universal prosperity.
There are also questions about whether the AI investment boom will leave behind the computational infrastructure of the 21st century in the same way that previous investment booms bequeathed railway tracks, power grids and telecoms networks.
As the tech analysts Azeem Azhar and Nathan Warren noted in a recent essay, about one-third of AI-related capital expenditure is being sunk into short-lived assets, such as Nvidia’s graphics processing units. But GPUs age in dog years, as the authors put it, with a useful life for frontier applications of about three years. That implies that AI companies’ investments must generate a return within a few years, rather than generations. However, they add, that depreciation drag might also impose greater discipline on investors.
There seems little doubt that AI is opening up all kinds of scientific and economic possibilities that are as yet impossible to predict, model or value. Wolfram argued that AI could transform scientific discovery given the extra computing power that could be thrown at so many problems. As it is, humans rely on 100bn neurons to understand the universe. What becomes possible with the 100tn neurons that AI could in effect give us? The scientist said he had been living the “AI dream” for 40 years. Now, it seems, it is finally being realised. The question is: at what price?
John Thornhill is the Innovation Editor at the Financial Times.