Back to the future—full-circle on GPU construction

Out, in, and out again—it’s never boring in semi land

Posted by Jon Peddie, 02.25.22

AMD and Intel have been leading the adoption of chiplets for GPUs, and Nvidia recently published another paper on how it plans to do the same. The era of monster chips may be over. AMD will also migrate its APUs to chiplets, with the GPU and CPU separated. That is ironic: part of the motivation for Hector Ruiz and Dave Orton in merging AMD and ATI was the integration of the CPU and GPU. And before Intel introduced its integrated CPU-GPU, it used an AMD GPU in a multi-chip package.

AMD’s patent for an APU of chiplets (Source: AMD)


The benefits of chiplets (higher yield, asynchronous development, power management, memory access, etc.) are apparent. But, ironically, we've come full circle.

AMD has been putting chiplets in our desktops for the last five years. The concept of a chiplet strategy moving across corporate boundaries is new, but the semiconductor companies see the logical benefits.

Nvidia holds the title of the giant chip with its gigantic A100 Ampere, weighing in at 826 mm² in a 7 nm process and containing 54.2 billion transistors. As impressive as that sounds, it's also scary: that's 54.2 billion potential failure points, which makes getting all those transistors running at all, let alone to spec, a risky proposition. Semiconductor companies mitigate the probability factor by building in redundancies, which saves a lot of chips. The second line of defense is binning: using a part in a lower-class category to salvage the bits that do work.

MCM-GPU: Aggregating GPU modules and DRAM on a single package (Source ISCA/Nvidia)


Chiplets offset the issue of astronomical potential failure points simply by having fewer of them per die. That means yield is higher and the ROI better, as testing and retesting expenses are reduced. But chiplets bring new problems with them, such as inter-chip communication. That's where the fancy packaging techniques AMD and Intel have developed, and Nvidia will soon announce, come in. Intel has two new technologies: EMIB (embedded multi-die interconnect bridge) and Foveros. (Also see: Intel brings a bonanza to ISSCC 2022.)
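The yield argument can be made concrete with the classic Poisson yield model, where the chance a die has zero defects is e^(−D·A) for defect density D and die area A. The sketch below is illustrative only: the defect density is an assumed round number, not a figure from any foundry, and real yield models (and real chiplet partitions) are more involved.

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_mm2: float) -> float:
    """Poisson yield model: P(zero defects on a die) = e^(-D * A)."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

# Assumed, illustrative defect density: 0.1 defects/cm^2 = 0.001 defects/mm^2.
D = 0.001

# One monolithic ~826 mm^2 die (the A100's size) vs. one of four ~207 mm^2
# chiplets covering the same total silicon area.
mono = poisson_yield(826, D)
chiplet = poisson_yield(826 / 4, D)

print(f"monolithic die yield: {mono:.1%}")   # well under half the dies survive
print(f"single chiplet yield: {chiplet:.1%}")  # most chiplets survive
```

Under these assumed numbers, the monolithic die yields roughly 44% while each quarter-size chiplet yields roughly 81%, which is the whole economic case in two lines: a defect kills a small chiplet, not an entire giant die.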

There’s no winner-loser scenario in this; it’s just sound engineering and manufacturing. It’s not much different from moving to a new process node: not easy, but manageable, and with significant benefits, albeit some startup costs.

So we went from small chips to larger ones, to multi-chip, and then to gigantic chips, and now back to clusters of smaller chips on tiny process nodes. The semiconductor business has to be one of the most interesting, challenging, and exciting industries there is.