- Meta explores new hardware paths as cloud providers race to secure capacity
- Google positions its TPUs as a credible option for large-scale deployments
- Data center operators face rising component costs across multiple hardware categories
Meta is reportedly in advanced discussions to secure large quantities of custom AI hardware from Google for future development work.
Negotiations revolve around leasing Google Cloud Tensor Processing Units (TPUs) in 2026 and transitioning to direct purchases in 2027.
This is a shift for both companies, as Google has historically limited its TPUs to internal workloads, while Meta has relied on a broad mix of CPUs and GPUs from multiple vendors.
Meta is also exploring other hardware options, including reported interest in Rivos’ RISC-V processors, suggesting a broader effort to diversify its compute base.
The possibility of a multi-billion-dollar deal triggered an immediate market reaction: Alphabet’s valuation rose sharply, putting it near the $4 trillion mark, while Meta’s stock also climbed following the reports.
Nvidia’s stock fell several percentage points as investors speculated about the long-term effect of major cloud providers’ spending shifting toward alternative architectures.
Estimates from Google Cloud executives suggest that a successful deal could allow Google to capture a significant share of Nvidia’s data center revenue, which exceeded $50 billion in a single quarter this year.
The scale of demand for AI tools has created intense competition for supply, raising questions about how new hardware partnerships could influence the stability of the sector.
Even if the transaction proceeds as planned, it will enter a market that remains constrained by limited manufacturing capacity and aggressive deployment timelines.
Data center operators continue to report shortages of GPUs and memory modules, with prices expected to rise through next year.
The rapid expansion of AI infrastructure has strained supply chains for every major component, and current trends suggest that supply pressures could intensify as companies race to secure long-term hardware commitments.
These factors create uncertainty about the real impact of the deal, as the broader supply environment may limit production volume, regardless of financial investment.
Analysts caution that the future performance of any of these architectures remains uncertain.
Google maintains an annual release schedule for its TPUs, while Nvidia continues to iterate on its own designs at a comparable pace.
The competitive landscape could shift again before Meta receives its first major hardware shipment.
There is also the question of whether alternative designs can deliver longer-term operational value than existing GPUs.
The rapid evolution of AI workloads means that device suitability can change significantly, and this dynamic highlights why companies continue to diversify their compute strategies and explore multiple architectures.
Via Tom’s Hardware