News

Broadcom’s $100B custom silicon bet

The company is experiencing robust custom chip demand.

Jon Peddie

Broadcom just put out a jaw-dropping number—$100 billion in AI chip revenue by 2027—and the math actually checks out. The company co-designs custom AI chips for Google, Meta, Anthropic, OpenAI, Fujitsu, and ByteDance, giving hyperscalers a powerful alternative to relying solely on Nvidia. With AI revenue already doubling year over year to $8.4 billion, Broadcom is quietly becoming one of the most important companies in AI infrastructure—designing the silicon that powers the entire ecosystem.

Broadcom CEO

Broadcom CEO Hock Tan projects AI chip revenue exceeding $100 billion by 2027, citing direct line of sight across six hyperscaler customers. Q1 FY2025 results validated the trajectory—AI revenue doubled year over year to $8.4 billion, total revenue rose 29% to $19.3 billion, and Q2 guidance of $22 billion exceeded analyst consensus by $1.5 billion. AI networking alone targets 40% of total AI revenue as Broadcom gains switching and interconnect share.

Broadcom’s model differs structurally from Nvidia’s. Rather than selling merchant silicon, Broadcom co-designs custom accelerators and TPUs with hyperscalers, then coordinates fabrication through TSMC. Current customers include Google, Meta, Anthropic, OpenAI, Fujitsu, and ByteDance. Tan confirmed 1 GW of Anthropic TPU deliveries in 2026, scaling to 3 GW in 2027. OpenAI’s first-generation XPU ships in 2027 with 1 GW-plus capacity. Meta’s MTIA custom accelerator roadmap remains active—Broadcom confirmed active shipments despite analyst speculation of program slowdown.

The $100 billion figure encompasses custom accelerators, networking silicon, DSPs, DPUs, and Ethernet switching—the full silicon stack that connects and runs GPU clusters, not just the accelerators themselves. Hyperscalers drive the demand logic: With $630 billion-plus in combined AI infrastructure CapEx planned for 2025, diversifying away from single-vendor GPU dependency reduces both supply risk and pricing exposure—a lesson the server CPU market taught a decade ago.

What do we think?

Broadcom’s $100B projection is not aspirational—it maps directly to named customers, confirmed gigawatt commitments, and a supply chain already secured through TSMC. The strategic implication is significant: Custom silicon is transitioning from a cost-control hedge into a primary AI compute architecture for hyperscalers. Broadcom sits at the center of that transition, designing the chips that reduce hyperscaler dependency on Nvidia, even as Nvidia simultaneously scales to meet demand that neither company can fully address alone.

Broadcom’s $100B custom silicon roadmap marks an inflection point in AI semiconductor market structure. The inflection is architectural—hyperscalers are no longer supplementing GPU clusters with custom silicon; they are building parallel compute stacks designed from the ground up for specific LLM and inference workloads. That shift permanently fragments the AI silicon market, creating sustained demand for custom accelerator design, advanced packaging, and high-speed networking silicon alongside merchant GPUs. For semiconductor vendors, the competitive battleground is expanding beyond chip performance to full-stack silicon architecture and deeper foundry partnerships.

