When everyone is digging for gold, sell shovels. This old maxim reflects the same logic as the current boom in artificial intelligence: the profits go to those who provide the chips and infrastructure, not the money-losing labs that build the models.
Nvidia’s record profits last year, which pushed the chipmaker’s market value past $5 trillion, show just how profitable the business has become.
But what happens if the world sells the wrong kind of shovel? Analysts expect the global build-out of AI infrastructure – from data centers and chips to power and cooling – to cost several trillion dollars over the coming years. Much of this hardware is designed to run large language models (LLMs) such as OpenAI’s ChatGPT or Anthropic’s Claude.
Therein lies the problem: if development shifts toward smaller or more efficient AI, or if new technologies reduce demand for high-end chips and the vast amounts of power they require, much of this heavy infrastructure could end up underutilized or even unprofitable. The world is investing trillions of dollars in a single version of AI and risks enshrining that choice in the global economy. It is, in effect, putting all its eggs in one basket.
Evidence of this risk is already emerging. Training the most advanced models is now astronomically expensive, yet recent generations have delivered smaller improvements at much higher cost. GPT-5, for example, was trained on hundreds of thousands of Nvidia chips but offered only modest performance gains.
If returns to scale continue to flatten, the world risks locking itself into an AI system that may never recoup the cost of the hardware it depends on. Yet funding and research are increasingly concentrated on LLMs, nearly all of which rely on the same underlying transformer architecture.
Smaller models and alternative technologies attract far less investment. So even though the field would benefit from diversity, it is narrowing. Research in areas such as liquid neural networks or neuro-symbolic AI may slow as funding and talent continue to flow into transformer-based models.
History could repeat itself. In the late 19th century, American railroads laid far more track than traffic could ever fill. It was often more profitable to build new lines than to operate them – a boom that ultimately ended in bust.
A century later, telecoms groups spent billions installing fiber-optic cables to meet predictions of explosive Internet growth — predictions that turned out to be far too optimistic. Much of this capacity sat idle for years, leading to bankruptcies.
Is the AI boom another classic case of speculative overbuilding? Some cracks are already visible. When Google launched Gemini 3 last month – a model widely seen as outperforming OpenAI’s – it had trained it on its own tensor processing units rather than Nvidia’s chips, sending Nvidia’s shares tumbling.
This highlighted how quickly the assumptions behind AI infrastructure can change. Earlier this year, Chinese startup DeepSeek achieved near-frontier performance with its R1 model while using far fewer of Nvidia’s expensive, power-hungry processors.
If other companies adopt different hardware or more efficient models, much of the current AI infrastructure could prove unprofitable. Data centers, chip factories and electrical systems built for current LLMs would be difficult to reuse. The trillions invested may not disappear, but the returns might.
The risk lies not only in overbuilding, but also in allowing too much power and money to concentrate in too few companies. Since late 2022, the AI rally has pushed tech valuations to record levels – a trend that the European Central Bank says is now fueled as much by “FOMO” as by fundamentals.
These gains are very concentrated. Eight of the ten largest stocks in the S&P 500 are technology companies, together accounting for more than a third of the entire U.S. market. This exposes investors to heavy losses if the market deteriorates.
The risk is systemic. So much capital and market value is tied to a handful of companies – and a single AI model – that any shock would ripple through the global economy.
This dependency is clearest at OpenAI. The maker of ChatGPT has struck deals that could see it spend more than $1 trillion on computing power, funded largely by other big tech groups. These partnerships create a circular web of financial ties that risks locking the AI industry into a single set of technologies and suppliers.
At some point, all this staggering spending will have to prove its worth. For now, the outcome remains uncertain. An MIT study suggests that 95% of corporate generative-AI pilots generate no measurable returns, and yet the money keeps flowing in; no company wants to be left out of what could be the next industrial revolution. But few have demonstrated lasting productivity gains from generative AI, and most deployments remain experiments, even as development costs rise.
The world needs to hedge its bets on AI. Governments, investors and businesses should avoid tying their AI strategies to the same few vendors or technologies. Priority should go to adaptable systems that can evolve with the technology – for instance, modular data centers that can be repurposed if demand for AI infrastructure falls. At the same time, it would be prudent to invest in other foundational technologies with long-term potential.
Investors, meanwhile, should look beyond the Magnificent Seven – Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla – and back smaller research labs developing different approaches. Open-source AI, in particular, deserves far more attention. By sharing code and training data, these models reduce development costs, accelerate innovation and broaden access to generative tools.
China is backing open models that anyone can use or adapt, while Europe’s Mistral is a leader in open-weight systems. Yet this approach still needs much stronger support from governments, investors and the technology industry. Mistral, valued at nearly 12 billion euros ($14 billion), remains Europe’s best hope of competing with its U.S. rivals, but it still operates at a fraction of their scale.
For now, most of the money is still flowing to the same few companies and the same type of infrastructure. In the rush to sell shovels for the AI gold rush, we may be forging the wrong kind – and setting the stage for another costly correction.
Amit Joshi is Professor of AI, Analytics and Marketing Strategy at IMD