Broadcom has recently partnered with OpenAI on a $10 billion AI data center hardware supply agreement, a move highlighting its strategic push into AI technology. The deal includes custom AI accelerators tailored for OpenAI's workloads, along with other core hardware, and could involve millions of chips—enough to power multiple hyperscale AI clusters.
CEO Hock Tan confirmed on the company's earnings call that one of its qualified customers has moved from prototype to mass production, signaling commercial adoption of Broadcom's XPU-based AI racks. These modular units integrate custom accelerators, networking chips, and reference architectures, enabling large-scale AI infrastructure deployment. Deliveries are expected by Q3 2026, aligning with reports that OpenAI's custom chips could go live by late 2026 or early 2027.
The chips are reportedly built on TSMC's 3nm-class N3 process and use a systolic array architecture optimized for matrix computations, paired with high-bandwidth memory (HBM). Analysts note that a $10 billion hardware commitment is a bold move for OpenAI, comparable to the AI-related capital expenditures of leading cloud providers. The shift toward custom infrastructure may also give OpenAI leverage in negotiations with GPU vendors such as AMD and Nvidia.
The collaboration coincides with Broadcom's strong quarterly results, including a 63% increase in AI revenue and a 36% rise in its share price. By focusing on AI chip development and hyperscale partnerships, Broadcom strengthens its technological leadership and may accelerate revenue and margin growth. While the company benefits from strong financials and promising AI initiatives, it faces risks related to dependence on a few hyperscale customers and geopolitical uncertainties.
With this historic hardware deal, Broadcom and OpenAI are set to reshape the AI compute landscape, potentially triggering a new wave of innovation and competition in large-scale AI deployments.