TechWatch

Meta’s AI chips epitomize hybrid hyperscalers

MTIA targets inference at scale, agnostic to workload.

Jon Peddie
Various AI processors in Meta's hyperscale data centers

Meta’s homegrown AI chip program just got a lot more serious. The MTIA family now spans six generations, from MTIA 100 through MTIA 500, with four successive chips either already deployed or scheduled for 2026 and 2027. Built in partnership with Broadcom, optimized inference-first, and architected around modular chiplets, MTIA delivers a 4.5× gain in HBM bandwidth and a 25× increase in compute FLOPS from MTIA 300 to MTIA 500 in under two years. Meta’s hyperscale data centers also run AI processors from AMD, Broadcom, and Nvidia. Meta’s AI infrastructure serves billions of users daily across personalized recommendations, feed ranking, and generative AI assistants. Keeping those
...
