One year ago, I published a widely discussed article addressing the potential bubble surrounding artificial intelligence in the technology markets. In that piece, I identified three major indicators that pointed towards the existence of an AI bubble: the escalating prices for AI chips, the questionable accounting practices of large language model (LLM) providers, and the frenzy of venture capital (VC) investment in the AI sector. Fast forward to today: as we near the end of the third quarter, my earlier concerns appear unfounded, and the anticipated AI crash has yet to materialize.
On the contrary, the ongoing AI rally seems unstoppable.
Notably, the dominance of Nvidia as the world’s most valuable company has only strengthened, with its market capitalization soaring by over 30% since the beginning of the year, reaching an astounding $4.3 trillion. Similarly, the pre-IPO valuation of OpenAI has skyrocketed from $157 billion to an estimated $500 billion within a mere twelve months. These staggering amounts highlight a gold rush in the AI domain that surpasses anything the business world has encountered previously.
The staggering investments shaping AI infrastructure
Recent headlines reveal ambitious financial commitments in the AI sector. For instance, the Stargate Project aims to channel $500 billion, primarily from giants like Oracle, SoftBank, and OpenAI, into AI data centers across the United States. Additionally, Nvidia plans to invest $100 billion in OpenAI, beginning with an initial cash infusion of $10 billion. In exchange, Nvidia will receive 2% of OpenAI’s non-voting shares, thereby solidifying its valuation at $500 billion. Furthermore, OpenAI has announced intentions to invest over $1 trillion in the coming years.
The colossal figures associated with these recent AI developments have led to significant stock price surges for involved companies, particularly Nvidia and Oracle, as their stocks continue to reach new heights. In the current market environment, it seems that merely increasing investments in AI infrastructure is viewed as a clear indicator of success.
Market dynamics and critical inquiries
Despite this bullish sentiment, few investors seem to be raising questions about the underlying mechanics of these AI deals. As I reflect on these developments, I find myself pondering several observations and uncertainties. For instance, Nvidia has managed to maintain stable prices for its GPUs despite increasing competition, with the cost of a server equipped with 72 Blackwell chips hovering around $3 million, or approximately $40,000 per GPU. This price stability suggests strong demand, yet the fluctuating cloud service rates for utilizing B200 GPUs indicate a competitive market landscape among Nvidia’s clients.
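A quick back-of-the-envelope check of that per-GPU figure, using only the numbers cited above (actual pricing varies by configuration and vendor discounts):

```python
# Implied price per Blackwell GPU, based on the figures cited in the article.
server_price = 3_000_000   # reported price of a 72-GPU Blackwell server, USD
gpus_per_server = 72       # chips per rack-scale system

price_per_gpu = server_price / gpus_per_server
print(f"Implied price per GPU: ${price_per_gpu:,.0f}")  # ≈ $41,667
```

The round "$40,000 per GPU" figure is thus a slight understatement of the implied unit price.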
Moreover, Nvidia’s impressive profit margins have not only remained intact but have also increased recently, with an operating margin of 61% reported in the last quarter. This raises questions about the sustainability of such margins in the face of a competitive market. Some critics suggest that this success may stem from a combination of technological superiority and intricate financial strategies, which some deem a form of round-trip business logic.
The intricate web of financial engineering
To illustrate this, consider Nvidia’s investment in CoreWeave, which has been purchasing Nvidia hardware at scale and leasing the GPUs to major players like OpenAI and Microsoft. Recently, Nvidia pledged to acquire up to $6.3 billion in unsold cloud capacity from CoreWeave, creating a financial safety net for the company. This arrangement not only ensures high-priced sales for Nvidia but also stabilizes CoreWeave’s sales beyond its two primary clients.
In another intriguing move, Nvidia recently rented 18,000 GPUs from Lambda for $1.5 billion, many of which it had previously sold to the company. This transaction allows Nvidia to generate revenue now while incurring future rental costs. Furthermore, Nvidia’s investment in Lambda enhances its partnership with another cloud provider that utilizes its chips.
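To see why such a sale-and-rent-back arrangement flatters near-term results, here is an illustrative sketch: revenue from the hardware sale lands immediately, while the rental obligation is spread over future periods. The per-GPU sale price and the lease term below are assumptions, not disclosed deal terms.

```python
# Hypothetical sale-and-rent-back cash flows (illustrative only;
# the actual contract terms between Nvidia and Lambda are not public).
gpus_sold = 18_000
assumed_price_per_gpu = 40_000        # assumption, consistent with Blackwell-era pricing
sale_revenue = gpus_sold * assumed_price_per_gpu   # booked up front

rental_commitment = 1_500_000_000     # reported $1.5B total rental cost
assumed_term_years = 4                # assumption: multi-year lease

annual_rental_cost = rental_commitment / assumed_term_years
print(f"Revenue booked today: ${sale_revenue / 1e6:,.0f}M")
print(f"Rental cost per year: ${annual_rental_cost / 1e6:,.0f}M "
      f"over {assumed_term_years} years")
```

The asymmetry is the point: the sale strengthens current-period revenue, while the offsetting rental expense only shows up in later quarters.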
Exploring financing structures
Recently, Nvidia and OpenAI announced a significant letter of intent, wherein OpenAI plans to construct its own data centers and deploy at least 10 GW of Nvidia systems. To put this into perspective, this capacity could power multiple large cities, while Microsoft’s entire Azure cloud has a capacity of only about 5 GW. The projected expenses for these OpenAI data centers range from $500 billion to $600 billion, with Nvidia’s equipment costs estimated at $350 billion to $450 billion.
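Taking the midpoints of the ranges cited above, the implied cost per gigawatt of the build-out, and Nvidia's share of it, look like this:

```python
# Implied cost per gigawatt of the planned OpenAI build-out,
# using midpoints of the ranges cited in the article.
capacity_gw = 10
total_cost_range = (500e9, 600e9)        # total data-center cost, USD
nvidia_equipment_range = (350e9, 450e9)  # Nvidia hardware share, USD

total_mid = sum(total_cost_range) / 2
equip_mid = sum(nvidia_equipment_range) / 2

print(f"Total cost per GW:     ${total_mid / capacity_gw / 1e9:.0f}B")  # $55B/GW
print(f"Nvidia share per GW:   ${equip_mid / capacity_gw / 1e9:.0f}B")  # $40B/GW
print(f"Nvidia share of total: {equip_mid / total_mid:.0%}")            # ~73%
```

Roughly three quarters of every dollar spent on these data centers would flow to a single supplier, which helps explain why the announcement moved Nvidia's stock.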
Nvidia’s strategy to invest up to $100 billion in OpenAI, alongside the delivery of chips, may involve leasing arrangements that reduce OpenAI’s financial burden. Given OpenAI’s anticipated cash burn due to high operational costs, this leasing strategy could prove essential for the company’s survival and growth.
However, the concept of GPU-backed financing raises concerns among market observers. Financing markets have emerged for so-called neoclouds like CoreWeave and Lambda, with over $10 billion in GPU-backed loans facilitated by major private credit institutions. This casts doubt on the long-term value of Nvidia’s GPUs as collateral in these financing arrangements.
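The collateral question can be made concrete with a simple depreciation sketch. The initial collateral value, useful life, and straight-line schedule below are assumptions for illustration, not disclosed figures; the point is only that rapidly depreciating hardware makes the loan-to-value ratio deteriorate fast.

```python
# Illustrative GPU collateral value under straight-line depreciation.
# All inputs except the ~$10B loan figure are assumptions.
loan_amount = 10e9          # ~$10B of GPU-backed loans cited in the article
collateral_value = 14e9     # assumed initial hardware value backing the loans
useful_life_years = 4       # assumed depreciation horizon for AI GPUs

for year in range(useful_life_years + 1):
    value = collateral_value * (1 - year / useful_life_years)
    if value > 0:
        print(f"Year {year}: collateral ${value / 1e9:.1f}B, "
              f"loan-to-value {loan_amount / value:.0%}")
    else:
        print(f"Year {year}: collateral fully depreciated")
```

Under these assumptions the loans are over-collateralized at origination but underwater within two years, unless the GPUs keep earning enough rental income to service the debt before they age out.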
Despite the apparent logic behind the leasing deal with Nvidia, I remain skeptical about OpenAI’s long-term profitability and its ability to sustain such financial structures. The risks of a market crash loom large, underscored by Sam Altman’s own acknowledgment of a prevailing AI hype that could lead to significant investor losses.
In conclusion, while the AI investment landscape continues to flourish, it is imperative for investors to approach this space with caution. Understanding the financial dynamics at play and preparing for potential volatility will be crucial in navigating the future of AI.