
2020’s GPU Compute add-in boards

FLOPS, TOPS, and pops compute accelerators

Jon Peddie

Nvidia has been the leader and dominant supplier in the GPU-compute AIB market for many years, so it wasn’t a surprise when the company announced its A100 AIB in September. Nvidia usually introduces new GPUs around the supercomputing conference held in mid-November.

Intel’s announcement of its four-processor XG310 GPU-compute AIB did surprise us. And although AMD’s introduction of the MI100 wasn’t expected, it wasn’t surprising, either.

The supercomputing conference has long been a focal point for new compute-accelerator announcements. The AI-training gold rush has attracted several start-ups hoping to get a piece of the GPU pie, and there have been some impressive offerings, including FPGAs. But it’s tough to match the compute density and FLOPS per dollar of a GPU, which enjoys economies of scale the alternatives can’t. Nvidia has always known this, so has AMD, and now Intel does as well.
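To make the FLOPS-per-dollar argument concrete, here is a minimal sketch of the metric. Both entries are hypothetical placeholders (none of the vendors had announced pricing), so only the shape of the comparison matters:

```python
# Back-of-the-envelope FLOPS-per-dollar comparison.
# The TFLOPS and price figures below are invented placeholders,
# since no vendor pricing had been announced.

def gflops_per_dollar(peak_tflops: float, price_usd: float) -> float:
    """Convert peak TFLOPS and board price into GFLOPS per dollar."""
    return peak_tflops * 1000.0 / price_usd

accelerators = {
    # name: (peak TFLOPS, hypothetical price in USD)
    "volume GPU AIB":   (10.0, 5_000.0),
    "FPGA accelerator": (4.0, 8_000.0),
}

for name, (tflops, price) in accelerators.items():
    print(f"{name}: {gflops_per_dollar(tflops, price):.2f} GFLOPS/$")
```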

The three AIBs have distinctly different designs. AMD has adapted its GPU architecture and rebranded it as CDNA (Compute DNA for the data center); the MI100 offers 7,680 cores and claims the highest peak rate, 11.5 TFLOPS. Intel has adapted an entry-level mobile dGPU and ganged four of them together to increase performance, for a total core count of just 3,072. For now, at least, Intel is focusing on the streaming market for cloud-based servers. Nvidia has introduced a broad-function GPU-based compute accelerator: a 6,912-core GPU with 432 of the matrix-math engines for AI it brands Tensor cores, plus a ray-tracing engine it brands RTX. Nvidia also has the most on-board memory, 40GB of HBM2, while AMD has 32GB of HBM2 and Intel offers 8GB of LPDDR4. The following table summarizes the three entries. None of the companies has announced prices for its boards.

                      AMD MI100   Intel XG310   Nvidia A100
Peak TFLOPS           11.5        2.38          9.7
GPU cores             7,680       3,072         6,912
Memory (GB)           32          8             40
Memory bus (bits)     4096        96            5120
Memory type           HBM2        LPDDR4        HBM2
Matrix/Tensor cores   Unknown     0             432
RT cores              80          0             Unknown
Power (W)             300         300           250
Comparison of GPU-compute AIBs
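The peak-TFLOPS figures fall out of simple arithmetic: cores × clock × two floating-point operations per clock (a fused multiply-add), with FP64 running at half the FP32 rate on both the MI100 and the A100. A minimal sketch, assuming boost clocks of roughly 1.5 GHz for the MI100 and 1.41 GHz for the A100 (the clocks are our assumption, not vendor figures quoted here), reproduces the table’s numbers:

```python
# Peak FLOPS = cores x clock (GHz) x 2 ops per clock (fused multiply-add).
# The boost clocks below are assumptions for illustration.

def peak_tflops(cores: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical peak TFLOPS at a given clock."""
    return cores * clock_ghz * ops_per_clock / 1000.0

# AMD MI100: 7,680 cores at ~1.502 GHz; FP64 runs at half the FP32 rate.
mi100_fp64 = peak_tflops(7680, 1.502) / 2   # ~11.5 TFLOPS

# Nvidia A100: 6,912 cores at ~1.41 GHz; FP64 likewise at half rate.
a100_fp64 = peak_tflops(6912, 1.41) / 2     # ~9.7 TFLOPS

print(f"MI100: {mi100_fp64:.1f} TFLOPS FP64, A100: {a100_fp64:.1f} TFLOPS FP64")
```

Intel’s 2.38 TFLOPS, by the same arithmetic, appears to be an FP32 figure for a single one of the four GPUs (768 cores × ~1.55 GHz × 2), one more reason the three boards are hard to compare head to head.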

 

Intel’s entry isn’t in the same class as AMD’s and Nvidia’s; the company is just getting started. Since Intel seems to be selling the AIB to only two Chinese game-streaming providers, it’s curious that it announced the board at all. Relying on four GPUs behind a PCIe switch, and on LPDDR4 memory, are drawbacks, and a lower price won’t be enough to compensate for them (not that we know what any of these boards sell for). Nonetheless, Intel plans to be a contender in the GPU-compute segment, pointing to the design win of its Ponte Vecchio AIB for the Aurora supercomputer as an example. That machine won’t arrive until next year, however.

So now there are three contenders for the not-very-large GPU-compute accelerator market. AMD and Intel would seem to have an advantage because they sell CPUs into the same market. Nvidia, however, has the reputation, the brand, and the technology, and it may soon have its own Arm-based CPU offering. The market will likely segment along traditional lines, with Intel at the bottom and Nvidia at the top in performance, and just the opposite in volume.
