News

Why, with 137 competitors already in the field, would OpenAI go custom?

Latest AIP partnerships announced

Jon Peddie

OpenAI is assembling a portfolio of accelerators: a 6-GW AMD Instinct deployment (MI450 from 2H26), an LOI for 10 GW of Nvidia systems, and a Broadcom partnership to co-develop 10 GW of custom AIPs plus OpenAI-designed racks launching late next year. Motives include supplier independence and architectures tailored to new LLM methods. Data centers now scale by megawatts, not cores. With its most recent deal, OpenAI says it is designing and building its own chips with Broadcom's help.

AI generated image of smoking plant to suggest OpenAI's ambitions to design chips and build data centers.

Based on recent announcements, OpenAI will have a zoo of AI Processors (AIPs):

AMD: OpenAI and AMD announced a multi-year partnership on Oct 6, 2025, to deploy 6 GW of AMD Instinct GPUs, starting with 1 GW of MI450 systems in 2H 2026.

Nvidia: OpenAI and Nvidia announced a letter of intent on September 22, 2025, for a landmark partnership to deploy at least 10 GW of Nvidia systems; Nvidia also stated it intends to invest up to $100 billion as those systems are deployed.

Broadcom: OpenAI and Broadcom announced on Monday that they're jointly developing and deploying 10 gigawatts of custom artificial intelligence accelerators as part of a broader industry effort to scale AI infrastructure.

It’s this latest announcement that is particularly interesting. OpenAI and Broadcom have been collaborating for the last 18 months; now they are going public with a plan to develop and deploy racks of OpenAI-designed chips beginning late next year.

There are (at least) two reasons for OpenAI to engage Broadcom for a custom AIP:

  1. Independence from the market leaders, Nvidia and AMD, and possibly the freedom to establish AI datacenters in China.
  2. New tricks. OpenAI may have developed new LLM algorithms that require processing data differently than conventional GPU-based AIPs do.

The answer is most likely a combination of the two. So now the industry will hold its breath and wait for OpenAI's announcement of the custom AIP's features. That will likely come alongside the announcement of a new transformer, ML, LLM, thingy that will save the world and eliminate the need for all human labor.

And since datacenters are no longer sized by processor core count or memory capacity, but rather by power consumption, these announcements are also a boon and a challenge for the localities where the power-hungry datacenters get built.
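To see what sizing by power implies, here is a back-of-the-envelope sketch translating the announced gigawatt commitments into rough accelerator counts. The per-accelerator wattage is an illustrative assumption (a hypothetical round number, not a vendor spec), and real deployments lose capacity to cooling, networking, and storage overhead.

```python
# Back-of-the-envelope sizing: how many accelerators fit a power budget.
# The 1.5 kW per-accelerator figure below is an illustrative assumption,
# including a share of rack overhead; it is not an official spec.

def accelerators_for(power_gw: float, watts_per_accelerator: float) -> int:
    """Estimate accelerator count for a power budget, ignoring
    facility overhead (cooling, networking) for simplicity."""
    return int(power_gw * 1e9 / watts_per_accelerator)

deals = [
    ("AMD Instinct", 6),          # 6-GW multi-year deployment
    ("Nvidia systems", 10),       # at least 10 GW under the LOI
    ("Broadcom custom AIPs", 10), # 10-GW joint development
]

for name, gw in deals:
    count = accelerators_for(gw, 1500)  # assumed ~1.5 kW per accelerator
    print(f"{name}: {gw} GW ≈ {count:,} accelerators")
```

Even with generous overhead assumptions, each 10-GW commitment works out to millions of accelerators, which is why power, not chip count, has become the headline unit.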

Hyperscalers have driven strong demand for Broadcom’s custom AI chips—marketed as XPUs—making the company a major winner in the generative-AI cycle.

In the context of the Broadcom and OpenAI announcement, OpenAI President Greg Brockman said the company was able to use OpenAI's own technology to design the chips. "We've been able to get massive area reductions," he said on a podcast discussing the deal. "You take components that humans have already optimized and just pour compute into it, and the model comes out with its own optimizations."

What do we think?

In the process of creating the 2025 AI Processor Market Report, we’ve been struck by the number and diversity of chips being designed to take advantage of the exploding AI market. Undoubtedly, these chips will fundamentally change the AI industry as it develops, but they’re also going to complicate the market. Not all these strategies will succeed. OpenAI has boundless confidence in its methods, and it has been playing a winning hand, but how is it going to manage all its cards?

If you’re interested in AI processors, we have a fabulous 355-page market report on various designs, market segments, and all the suppliers.