News

Meta plays hybrid AI

While investors read tea leaves, hoping for insight.

Jon Peddie

AMD got a lift after a report suggested Meta plans to buy its MI455X AI boards, signaling deeper ties between the two companies. Meta appears to favor AMD over Google’s TPUs for upcoming deployments, even as it pauses parts of its in-house chip effort. The move reflects a balanced strategy: AMD handles many inference workloads, Nvidia still powers large-scale training, and Meta continues developing its own chips while exploring other options to diversify its AI hardware stack.

AMD got a boost after a report from Haitong International Securities analyst Jeff Pu claiming that Meta Platforms intends to purchase MI455X AI boards from AMD. According to the report, Meta Platforms will switch to AMD MI455X chips as the social media and AI company scales back its in-house ASIC plans.

Meta Platforms is one of the AI companies developing its own custom AIP to reduce its reliance on Nvidia. However, Pu suggests that effort has hit a snag, prompting Meta's move to acquire GPUs from AMD. The same report also noted that Meta Platforms chose AMD's MI455X over Google's TPUs for its upcoming deployments, contradicting earlier rumors that Meta Platforms would source its new AI processors from Alphabet.

It makes sense that Meta Platforms would choose AMD as its main source for AI chips. The two companies already have a close relationship: Meta was AMD's top AI accelerator customer in 2025, accounting for roughly 42% of its AI GPU sales. Ongoing support for AMD's AI chips from a major player like Meta Platforms is a positive sign for AMD stock.

But…

Meta is not abandoning its own AI processors or Nvidia entirely, even as it is significantly increasing its reliance on AMD for AI accelerators. 

Rather, Meta is taking a hybrid approach, with AMD becoming a major, if not primary, partner for specific AI inference workloads, while Nvidia continues to supply high-end hardware for training. 

Meta will likely switch to AMD for inference: it has adopted AMD's Instinct MI300X accelerators to handle all of its live AI traffic (e.g., sticker generation, image editing, and assistant operations), owing to their high memory capacity and cost efficiency.

The company will continue to rely on Nvidia: Meta remains a large customer of Nvidia, with plans to operate over 1.3 million GPUs by the end of 2025, a mix that still includes significant amounts of Nvidia hardware.

Meta is still developing its own custom AIP (MTIA) as a hedge, and the chip is already being deployed for training workloads.

And recent reports suggest that Meta is considering purchasing custom tensor processing units (TPUs) from Google by 2027 to further diversify its infrastructure and reduce its reliance on Nvidia and AMD. 

What do we think?

Meta is a key partner for AMD, with reports in early 2025 suggesting that Meta is a top consumer of AMD AI accelerators.

Meta’s strategy is to create a more diverse, open-source-friendly hardware stack that is not entirely dependent on Nvidia’s CUDA ecosystem.

While AMD has seen success in Meta’s inference (running models) workloads, Nvidia is still widely used for heavy training. 

AMD, Nvidia, Meta, and Google are but four of the 143 companies building AI processors, and most of them are vying for the attention of the big hyperscalers. If you want to know more about the AIP suppliers, go here and here.
