News

AMD and Samsung partnership on AI memory

Can’t be too rich, too thin, or have too much memory.

Jon Peddie

AMD and Samsung just made it official—their partnership is expanding well beyond the current HBM3E supply agreement into a full-stack strategic alignment covering HBM4, DDR5, and potentially Samsung foundry services for future AMD silicon. The timing is pointed: the announcement lands in the shadow of GTC 2026, where Nvidia and Samsung unveiled their own foundry deal for Groq chips. The memory wars are heating up, and AMD is making sure it has a seat at the table.

AMD and Samsung signed an MOU expanding their strategic collaboration across AI memory and computing. The agreement aligns Samsung as the primary HBM4 supplier for the Instinct MI455X GPU—AMD’s next-generation AI accelerator built on the CDNA5 architecture—and commits Samsung to delivering DDR5 solutions optimized for 6th Gen Epyc CPUs, code-named Venice. Both product lines feed directly into the AMD Helios rack-scale platform, AMD’s answer to Nvidia’s NVL72 architecture.

Samsung already holds primary HBM3E partner status for AMD, supplying memory for the Instinct MI350X and MI355X. The MOU extends that relationship into HBM4, the next-generation standard that substantially raises bandwidth and capacity over HBM3E. AMD and Samsung will optimize DDR5 specifically for Venice Epyc deployments, targeting data center customers building on Helios rack infrastructure.

The MOU also opens the door to a foundry partnership discussion—potentially positioning Samsung as a fabrication partner for future AMD products. That thread remains speculative, but it signals that AMD may be evaluating Samsung’s foundry as a complement or alternative to TSMC for specific product lines.

Timing amplifies the strategic weight. At GTC 2026, Nvidia and Samsung confirmed a foundry deal to manufacture Groq chips on Samsung’s LP30 process. SK Hynix supplies HBM to Nvidia. Samsung, by deepening its AMD relationship, pursues a parallel track—cementing itself as a critical supplier across both GPU camps, rather than aligning exclusively with either.

What do we think?

Samsung is executing a deliberate two-camp strategy, supplying HBM and potentially foundry services to AMD while manufacturing Groq chips for Nvidia. That hedged positioning reduces Samsung’s revenue concentration risk and raises its strategic indispensability. For AMD, locking in HBM4 supply ahead of the MI455X launch addresses the single biggest constraint facing any GPU vendor in 2026—memory allocation. AMD CEO Lisa Su’s public endorsement signals this is not a contingency agreement; it is a primary supply commitment.

The AMD-Samsung MOU marks an inflection point in AI infrastructure supply chain strategy. Memory is no longer a commodity—it is a competitive weapon, and allocation determines who ships on schedule. Samsung’s parallel alignment with both AMD and Nvidia reflects a structural shift in how memory vendors exercise market power: not by picking winners, but by becoming indispensable to all of them. That dynamic will define data center GPU availability for the next two product generations, reshaping competitive timelines across the entire AI accelerator market.

We now have 106 companies offering 181 AI processor chips, plus 24 companies offering AI processor IP for chips that haven’t been announced yet. We’re tracking all this as well as the tokens and TOPS performance, unit shipments, and market size.

YOU LIKE THIS KIND OF STUFF? WE HAVE LOTS MORE. TELL YOUR FRIENDS, WE LOVE MEETING NEW PEOPLE.