
The AI Bubble and the U.S. Economy: How Long Do “Hallucinations” Last?


This paper argues that (i) we have reached “peak GenAI” with current Large Language Models (LLMs): scaling (building more data centers and using more chips) will not take us further toward the goal of “Artificial General Intelligence” (AGI), because returns are diminishing rapidly; and (ii) the AI-LLM industry and the larger U.S. economy are experiencing a speculative bubble, which is about to burst.

The U.S. is undergoing an extraordinary AI-fueled economic boom: the stock market is soaring on the back of exceptionally high valuations of AI-related tech firms, which are fueling economic growth through the hundreds of billions of U.S. dollars they are spending on data centers and other AI infrastructure. The AI investment boom rests on the belief that AI will make workers and firms significantly more productive, which will in turn boost corporate profits to unprecedented levels. But the summer of 2025 did not bring good news for enthusiasts of generative Artificial Intelligence (GenAI), who had been hyped up by the inflated promises of the likes of OpenAI’s Sam Altman that “Artificial General Intelligence” (AGI), the holy grail of current AI research, was right around the corner.

Let us consider the hype more closely. Already in January 2025, Altman wrote that “we are now confident we know how to build AGI”. Altman’s optimism echoed claims by OpenAI’s partner and major financial backer Microsoft, which had put out a paper in 2023 claiming that the GPT-4 model already exhibited “sparks of AGI.” Elon Musk (in 2024) was equally confident that the Grok model developed by his company xAI would reach AGI, an intelligence “smarter than the smartest human being”, probably by 2025 or at least by 2026. Meta CEO Mark Zuckerberg said that his company was committed to “building full general intelligence”, and that super-intelligence is now “in sight”. Likewise, Dario Amodei, co-founder and CEO of Anthropic, said “powerful AI”, i.e., AI smarter than a Nobel Prize winner in any field, could come as early as 2026 and usher in a new age of health and abundance; the U.S. would become a “country of geniuses in a datacenter”, if … AI didn’t wind up killing us all.

For Mr. Musk and his GenAI fellow travelers, the biggest hurdle on the road to AGI is a lack of computing power (installed in data centers) to train AI bots, which, in turn, is due to a lack of sufficiently advanced computer chips. The demand for more data and more data-crunching capability will require about $3 trillion in capital by 2028 alone, in the estimation of Morgan Stanley. That would exceed the capacity of the global credit and derivative securities markets. Spurred by the imperative to win the AI race with China, the GenAI propagandists firmly believe that the U.S. can be put on the yellow brick road to the Emerald City of AGI by building more data centers faster (an unmistakably “accelerationist” proposition).

Interestingly, AGI is an ill-defined notion, and perhaps more of a marketing concept used by AI promoters to persuade their financiers to invest in their endeavors. Roughly, the idea is that an AGI model can generalize beyond the specific examples found in its training data, much as some human beings can do almost any kind of work after having been shown a few examples of how a task is done, learning from experience and changing methods when needed. AGI bots would be capable of outsmarting human beings, creating new scientific ideas, and doing innovative as well as routine coding. AI bots would tell us how to develop new medicines to cure cancer, fix global warming, drive our cars, and grow our genetically modified crops. Hence, in a radical bout of creative destruction, AGI would transform not just the economy and the workplace, but also systems of health care, energy, agriculture, communications, entertainment, transportation, R&D, innovation and science.

OpenAI’s Altman boasted that AGI can “discover new science,” because “I think we’ve cracked reasoning in the models,” adding that “we’ve a long way to go.” He “think[s] we know what to do,” saying that OpenAI’s o3 model “is already pretty smart,” and that he’s heard people say “wow, this is like a good PhD.” Announcing the launch of ChatGPT-5 in August, Mr. Altman posted on the internet that “We think you will love using GPT-5 much more than any previous AI. It is useful, it is smart, it is fast [and] intuitive. With GPT-5 now, it’s like talking to an expert — a legitimate PhD-level expert in anything, any area you need, on demand. They can help you with whatever your goals are.”

But then things began to fall apart, and rather quickly so.

ChatGPT-5 is a letdown

The first piece of bad news is that the much-hyped ChatGPT-5 turned out to be a dud — incremental improvements wrapped in a routing architecture, nowhere near the breakthrough to AGI that Sam Altman had promised. Users are underwhelmed. As the MIT Technology Review reports: “The much-hyped release makes several enhancements to the ChatGPT user experience. But it’s still far short of AGI.” Worryingly, OpenAI’s internal tests show that GPT-5 ‘hallucinates’ in roughly one in ten responses on certain factual tasks when it is connected to the internet. Without web-browsing access, however, GPT-5 is wrong in almost one in two responses, which is truly troubling. Even more worrisome, ‘hallucinations’ may also reflect biases buried within datasets. For instance, an LLM might ‘hallucinate’ crime statistics that align with racial or political biases simply because it has learned from biased data.

Of note here is that AI chatbots can be and are actively used to spread misinformation (see here and here). According to recent research, chatbots spread false claims when prompted with questions about controversial news topics 35% of the time — almost double the 18% rate of a year ago (here). AI curates, orders, presents, and censors information, influencing interpretation and debate: it pushes dominant (average or preferred) viewpoints while suppressing alternatives, quietly removing inconvenient facts or making up convenient ones. The key issue is: Who controls the algorithms? Who sets the rules for the tech bros? It is evident that by making it easy to spread “realistic-looking” misinformation and biases and/or to suppress critical evidence or argumentation, GenAI has, and will continue to have, non-negligible societal costs and risks — which have to be counted when assessing its impacts.

Building larger LLMs is leading nowhere

The ChatGPT-5 episode raises serious doubts and existential questions about whether the GenAI industry’s core strategy of building ever-larger models on ever-larger data distributions has already hit a wall. Critics, including cognitive scientist Gary Marcus (here and here), have long argued that simply scaling up LLMs will not lead to AGI, and GPT-5’s sorry stumbles validate those concerns. It is becoming more widely understood that LLMs are not built on proper, robust world models, but are instead built to autocomplete on the basis of sophisticated pattern-matching — which is why, for example, they still cannot even play chess reliably and continue to make mind-boggling errors with startling regularity.

My new INET Working Paper discusses three sobering research studies showing that ever-larger GenAI models do not become better, but worse, and do not reason, but rather parrot reasoning-like text. To illustrate, a recent paper by scientists at MIT and Harvard shows that even when trained on all of physics, LLMs fail to uncover the general and universal physical principles underlying their training data. Specifically, Vafa et al. (2025) note that LLMs follow a “Kepler-esque” approach: they can successfully predict the next position in a planet’s orbit, but fail to find the underlying explanation, Newton’s law of gravity (see here). Instead, they resort to fitting made-up rules that allow them to predict the planet’s next orbital position, while missing the force vector at the heart of Newton’s insight. The MIT-Harvard paper is explained in this video. LLMs cannot and do not infer physical laws from their training data. Remarkably, they cannot even identify the relevant information from the internet. Instead, they make it up.
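The “Kepler-esque” failure mode is easy to reproduce in miniature. The sketch below is my own toy illustration, not the actual setup of Vafa et al. (the simulator, the window size, and the two orbits are all assumptions): a plain least-squares next-position predictor fits one simulated orbit almost perfectly, yet what it has learned is a local extrapolation rule, not the inverse-square force law, so the fit degrades on an orbit it has not seen.

```python
# Toy illustration (not the Vafa et al. setup): a pattern-matcher can predict
# the next orbital position without recovering the inverse-square force law.
import numpy as np

def simulate_orbit(r0, v0, gm=1.0, dt=0.01, steps=5000):
    """Integrate 2D motion under inverse-square gravity (velocity Verlet)."""
    pos = np.zeros((steps, 2))
    vel = np.array(v0, dtype=float)
    pos[0] = r0
    acc = -gm * pos[0] / np.linalg.norm(pos[0])**3
    for t in range(1, steps):
        pos[t] = pos[t-1] + vel*dt + 0.5*acc*dt**2
        new_acc = -gm * pos[t] / np.linalg.norm(pos[t])**3
        vel += 0.5*(acc + new_acc)*dt
        acc = new_acc
    return pos

# "Training data": one orbit. Fit a linear next-step predictor on windows of
# k past positions -- pure curve-fitting, with no physics built in.
train = simulate_orbit(r0=[1.0, 0.0], v0=[0.0, 1.0])
k = 4
X = np.hstack([train[i:len(train)-k+i] for i in range(k)])
y = train[k:]
W, *_ = np.linalg.lstsq(X, y, rcond=None)

def next_step_error(orbit):
    """Max next-position prediction error of the fitted rule on an orbit."""
    Xo = np.hstack([orbit[i:len(orbit)-k+i] for i in range(k)])
    return np.abs(Xo @ W - orbit[k:]).max()

print("error on the training orbit:", next_step_error(train))   # tiny
print("error on an unseen orbit:   ", next_step_error(
    simulate_orbit(r0=[1.5, 0.0], v0=[0.0, 0.9])))               # much larger
# The fitted W is a "Kepler-esque" extrapolation rule: accurate on
# trajectories like those it saw, silent about the law that generated them.
```

The point of the sketch is only this: near-perfect next-token (here, next-position) prediction on the training distribution is compatible with having learned nothing about the underlying force law.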

Worse, AI bots are incentivized to guess (and give an incorrect response) rather than admit that they do not know something. This problem is acknowledged by OpenAI researchers in a recent paper. Guessing is rewarded — because, who knows, the guess might be right. The error is at present uncorrectable. Accordingly, it might well be prudent to read the acronym AI as “Artificial Information” rather than “Artificial Intelligence”. The bottom line is straightforward: this is very bad news for anyone hoping that further scaling — building ever-larger LLMs — would lead to better outcomes (see also Che 2025).
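The incentive argument can be written down in one line. Under the simplifying assumption of binary right/wrong grading (a stylized version of the benchmark scoring the OpenAI paper discusses, not its exact formalism), a model that guesses with any positive probability p of being correct earns a higher expected score than one that answers “I don’t know”:

```latex
% Expected benchmark score under binary grading:
% guessing beats abstaining whenever p > 0.
\[
\mathbb{E}[\text{score} \mid \text{guess}]
  = p \cdot 1 + (1-p) \cdot 0
  = p
  \;>\;
  0
  = \mathbb{E}[\text{score} \mid \text{``I don't know''}]
  \qquad \text{for all } p > 0 .
\]
```

As long as leaderboards award zero for abstention and zero for a wrong answer alike, a score-maximizing model will always guess rather than abstain.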

95% of generative AI pilot projects in companies are failing

Corporations had rushed to announce AI investments or to claim AI capabilities for their products in the hope of turbocharging their share prices. Then came the news that the AI tools are not doing what they are supposed to do, and that people are realizing it (see Ed Zitron). An August 2025 report titled The GenAI Divide: State of AI in Business 2025, published by MIT’s NANDA initiative, concludes that 95% of generative AI pilot projects in companies fail to deliver any measurable boost to revenue growth. As reported by Fortune, “generic tools like ChatGPT […] stall in enterprise use since they don’t learn from or adapt to workflows”. Quite.

Indeed, firms are backpedaling after cutting hundreds of jobs and replacing them with AI. For instance, the Swedish “Buy Burritos Now, Pay Later” firm Klarna bragged in March 2024 that its AI assistant was doing the work of 700 (laid-off) workers, only to rehire them (sadly, as gig workers) in the summer of 2025 (see here). Other examples include IBM, forced to reemploy staff after laying off about 8,000 workers to implement automation (here). Recent U.S. Census Bureau data by firm size show that AI adoption has been declining among companies with more than 250 employees.

MIT economist Daron Acemoglu (2025) predicts rather modest productivity impacts of AI over the next 10 years and warns that some applications of AI may have negative social value. “We’re still going to have journalists, we’re still going to have financial analysts, we’re still going to have HR employees,” Acemoglu says. “It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, etc. And those are essentially about 5% of the economy.” Similarly, using two large-scale AI adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers in 7,000 workplaces) in Denmark, Anders Humlum and Emilie Vestergaard (2025) show, in a recent NBER Working Paper, that the economic impacts of GenAI adoption are minimal: “AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 3%), combined with weak wage pass-through, help explain these limited labor market effects.” These findings provide a much-needed reality check for the hyperbole that GenAI is coming for all of our jobs. Reality is not even close.

GenAI will not even make the tech workers who do the coding redundant, contrary to the predictions of AI enthusiasts. OpenAI researchers found (in early 2025) that advanced AI models (including GPT-4o and Anthropic’s Claude 3.5 Sonnet) are still no match for human coders. The AI bots failed to grasp how widespread bugs were or to understand their context, producing solutions that were incorrect or insufficiently comprehensive. Another new study, from the nonprofit Model Evaluation and Threat Research (METR), finds that in practice programmers using early-2025 AI tools were actually slower with AI assistance, taking 19 percent longer with GenAI than when coding by themselves (see here). The programmers spent the extra time reviewing AI outputs, prompting AI systems, and correcting AI-generated code.

The U.S. economy at large is hallucinating

The disappointing rollout of ChatGPT-5 raises doubts about OpenAI’s ability to build and market consumer products that users are willing to pay for. But the point I want to make is not just about OpenAI: the American AI industry as a whole has been built on the premise that AGI is just around the corner. All that is needed is sufficient “compute”, i.e., millions of Nvidia AI GPUs, enough data centers, and sufficient cheap electricity to do the massive statistical pattern-matching needed to generate (a semblance of) “intelligence”. This, in turn, means that “scaling” (investing billions of U.S. dollars in chips and data centers) is the one and only way forward — and this is exactly what the tech firms, Silicon Valley venture capitalists, and Wall Street financiers are good at: mobilizing and spending funds, this time to “scale up” generative AI and build the data centers meant to support all the expected future demand for AI.

During 2024 and 2025, Big Tech firms invested a staggering $750 billion (cumulatively) in data centers, and they plan a further cumulative investment of $3 trillion in data centers during 2026-2029 (Thornhill 2025). The so-called “Magnificent 7” (Alphabet, Apple, Amazon, Meta, Microsoft, Nvidia, and Tesla) spent more than $100 billion on data centers in the second quarter of 2025 alone; Figure 1 gives the capital expenditures of four of the seven corporations.

FIGURE 1

Source: Christopher Mims (2025), https://x.com/mims/status/1951…

The surge in corporate investment in “information processing equipment” is huge. According to Torsten Sløk, chief economist at Apollo Global Management, data center investment contributed as much to (sluggish) real U.S. GDP growth over the first half of 2025 as all of consumer spending (Figure 2). Financial investor Paul Kedrosky finds that capital expenditures on AI data centers (in 2025) have surpassed the peak of telecom spending during the dot-com bubble (of 1995-2000).

FIGURE 2

Source: Torsten Sløk (2025). https://www.apolloacademy.com/…

Following the AI hype and hyperbole, tech stocks have gone through the roof. The S&P 500 Index rose by circa 58% during 2023-2024, driven mostly by the growth of the share prices of the Magnificent Seven: the weighted-average share price of these seven corporations increased by 156% during 2023-2024, while the other 493 firms saw an average increase of just 25%. America’s stock market is largely AI-driven.
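A back-of-the-envelope check shows just how concentrated this rally is. The sketch below uses only the figures quoted above, under the simplifying (and assumed) conditions of fixed start-of-period index weights and no rebalancing, to solve for the Magnificent-7 weight implied by the stated returns:

```python
# Back-of-the-envelope check of the index arithmetic quoted above.
# Simplifying assumptions: fixed start-of-period weights, no rebalancing.
mag7_return = 1.56    # +156% over 2023-2024 (weighted average, as stated)
rest_return = 0.25    # +25% for the other 493 firms
index_return = 0.58   # +58% for the S&P 500

# Solve w*mag7 + (1-w)*rest = index for the implied Magnificent-7 weight w.
w = (index_return - rest_return) / (mag7_return - rest_return)
print(f"implied Magnificent-7 index weight: {w:.1%}")             # ~25%
print(f"Mag-7 contribution: {100*w*mag7_return:.0f} of the "
      f"{100*index_return:.0f} percentage points of index gain")  # ~39 of 58
```

On these numbers, seven stocks holding roughly a quarter of the index’s weight delivered about 39 of the 58 percentage points of the S&P 500’s gain — about two-thirds of the entire rally.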

Nvidia’s shares rose by more than 280% over the past two years amid exploding demand for its GPUs from the AI firms; as one of the most high-profile beneficiaries of the insatiable demand for GenAI, Nvidia now has a market capitalization of more than $4 trillion, the highest valuation ever recorded for a publicly traded company. Does this valuation make sense? Nvidia’s price-earnings (P/E) ratio peaked at 234 in July 2023 and has since declined to 47.6 in September 2025, which is still very high by historical standards (see Figure 3). Nvidia sells its GPUs to neocloud companies (such as CoreWeave, Lambda, and Nebius), which are funded by credit from Goldman Sachs, JPMorgan, Blackstone, and other Wall Street firms, collateralized by the very data centers filled with GPUs. In key cases, as explained by Ed Zitron, Nvidia has offered to buy billions of U.S. dollars’ worth of unsold cloud compute from these loss-making neocloud companies, effectively backstopping its own clients — all in the expectation of an AI revolution that has yet to arrive.

Likewise, the share price of Oracle Corp. (which is not included in the “Magnificent 7”) rose by more than 130% between mid-May and early September 2025, following the announcement of its $300 billion cloud-computing infrastructure deal with OpenAI. Oracle’s P/E ratio shot up to almost 68, meaning that financial investors are willing to pay almost $68 for every $1 of Oracle’s annual earnings. One obvious problem with this deal is that OpenAI doesn’t have $300 billion; the company made a loss of $15 billion during 2023-2025 and is projected to lose a further $28 billion (cumulatively) during 2026-2028 (see below). Where OpenAI will get the money remains unclear and uncertain. Ominously, Oracle needs to build the infrastructure for OpenAI before it can collect any revenue. If OpenAI cannot pay for the enormous computing capacity it has agreed to buy from Oracle, which seems likely, Oracle will be left holding expensive AI infrastructure for which it may not find alternative customers, especially once the AI bubble fizzles out.

Tech stocks are thus considerably overvalued. Torsten Sløk warned (in July 2025) that AI stocks are even more overvalued than dot-com stocks were in 1999. In a blogpost, he shows that the P/E ratios of Nvidia, Microsoft, and eight other tech companies are higher than those of the leading firms during the dot-com era (see Figure 3). We all remember how the dot-com bubble ended, and Sløk is therefore right to sound the alarm over the apparent market mania driven by the “Magnificent 7”, all of which are heavily invested in the AI industry.

Big Tech does not always build these data centers and operate them itself; many data centers are built by construction companies, purchased by data center operators, and then leased to (say) OpenAI, Meta, or Amazon (see here). Wall Street private equity firms such as Blackstone and KKR are investing billions of dollars to buy up these data center operators, using commercial mortgage-backed securities as a funding source. Data center real estate is a new, hyped-up asset class that is beginning to dominate financial portfolios. Blackstone calls data centers one of its “highest conviction investments.” Wall Street loves data center lease contracts, which offer stable, predictable long-term income paid by highly rated clients like AWS, Microsoft and Google. Some Cassandras are warning of a potential oversupply of data centers, but given that “the future will be based on GenAI”, what could possibly go wrong?

FIGURE 3


Source: Torsten Sløk (2025), https://www.apolloacademy.com/…

In a rare moment of frankness, OpenAI CEO Sam Altman had it right. “Are we in a phase where investors as a whole are overexcited about AI?” Altman said during a dinner interview with reporters in San Francisco in August. “My opinion is yes.” He also compared today’s AI investment frenzy to the dot-com bubble of the late 1990s. “Someone’s gonna get burned there, I think,” Altman said. “Someone is going to lose a phenomenal amount of money – we don’t know who …”, but (going by what happened in earlier bubbles) it will most likely not be Altman himself.

The question, therefore, is how long investors will continue to prop up the sky-high valuations of the key firms in the GenAI race. Earnings of the AI industry continue to pale in comparison to the tens of billions of U.S. dollars being spent on data center growth. According to an upbeat S&P Global research note published in June 2025, the GenAI market is projected to generate $85 billion in revenue by 2029. Yet Alphabet, Microsoft, Amazon, and Meta together will spend nearly $400 billion on capital expenditures in 2025 alone. As things stand, the AI industry’s combined revenue is little more than that of the smart-watch industry (Zitron 2025).

So, what if GenAI just is not profitable? This question is pertinent in view of the rapidly diminishing returns on the stratospheric capital expenditures on GenAI and data centers, and the disappointing experience of the 95% of firms whose AI pilot projects failed. One of the largest hedge funds in the world, Florida-based Elliott, told clients that AI is overhyped and that Nvidia is in a bubble, adding that many AI products are “never going to be cost-efficient, never going to actually work right, will take up too much energy, or will prove to be untrustworthy.” “There are few real uses,” it said, other than “summarizing notes of meetings, generating reports and helping with computer coding”. It added that it was “skeptical” that Big Tech companies would keep buying the chipmaker’s graphics processing units in such high volumes.

Locking billions of U.S. dollars into AI-focused data centers without a clear exit strategy for these investments, should the AI craze end, means that systemic risk is building in finance and in the wider economy. With data-center investments driving U.S. economic growth, the American economy has become dependent on a handful of corporations that have not yet managed to generate a single dollar of profit on the ‘compute’ these data centers perform.

America’s high-stakes geopolitical bet gone wrong

The AI boom (bubble) has developed with the support of both major political parties in the U.S. The vision of American firms pushing the AI frontier and reaching AGI first is widely shared — in fact, there is a bipartisan consensus on how important it is that the U.S. win the global AI race. America’s industrial capability is critically dependent on a number of potential adversary nation-states, including China. In this context, America’s lead in GenAI is considered a potentially very powerful geopolitical lever: if America manages to get to AGI first, so the analysis goes, it can build up an overwhelming long-term advantage over China in particular (see Farrell).

That is the reason why Silicon Valley, Wall Street and the Trump administration are doubling down on the “AGI First” strategy. But astute observers highlight the costs and risks of this strategy. Prominently, Eric Schmidt and Selina Xu worry, in the New York Times of August 19, 2025, that “Silicon Valley has grown so enamored with accomplishing this goal [of AGI] that it’s alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists. In being solely fixated on this objective, our nation risks falling behind China, which is far less concerned with creating A.I. powerful enough to surpass humans and much more focused on using the technology we have now.”

Schmidt and Xu are rightly worried. Perhaps the plight of the U.S. economy is best captured by OpenAI’s Sam Altman, who fantasizes about putting his data centers in space: “Like, maybe we build a big Dyson sphere around the solar system and say, ‘Hey, it actually makes no sense to put these on Earth.’” For as long as such ‘hallucinations’ about using solar-collecting satellites to harvest (unlimited) star power continue to convince gullible financial investors, the government, and the users of the “magic” of AI and of the AI industry, the U.S. economy is surely doomed.
