Nvidia’s Multibillion-Dollar Deal With Meta Marks a Shift in Computing

For decades, Nvidia has been defined by one product: GPUs. But a series of recent moves suggests the chip giant is hedging its bets—and CPUs are becoming part of the story.

The multi-year partnership expands an already cozy relationship. By the end of 2024, Meta had purchased 350,000 H100 chips from Nvidia, and by the end of 2025, the company expected to have access to 1.3 million GPUs.

Now, Meta will deploy millions of Nvidia Blackwell and Rubin GPUs alongside a large-scale deployment of Nvidia CPUs.

Why CPUs Matter Now

The shift reflects a fundamental change in how AI software works. Agentic AI puts new demands on general-purpose CPU architectures, says Ben Bajarin, CEO of Creative Strategies.

In traditional cloud computing, CPUs handled most workloads. But AI training required powerful GPUs. Now, with agentic AI becoming mainstream, CPUs are making a comeback—not to replace GPUs, but to handle the coordination and data management that AI agents need.

A recent SemiAnalysis report noted that Microsoft data centers serving OpenAI now need tens of thousands of CPUs just to process and manage the petabytes of data generated by GPUs.

The Bigger Picture

This deal comes as AI giants are racing to diversify their compute power. OpenAI is working with Broadcom to create custom chips. Google uses its own TPUs. Anthropic relies on a mix of Nvidia GPUs, Google TPUs, and Amazon chips.

Meta itself plans to dramatically increase AI infrastructure spending to between $115 billion and $135 billion this year, up from $72.2 billion last year.
