You can’t be too thin, too rich, or have too much memory, says Nvidia

How does 100TB sound to you?

Jon Peddie

Nvidia made significant announcements at Computex 2023, including the DGX GH200, a GPU-accelerated supercomputer whose massive shared-memory system is designed for demanding AI workloads. The system uses NVLink to interconnect its GPUs, giving each one high-speed access to the combined memory pool. To address the challenges posed by trillion-parameter AI models, Nvidia paired its Grace Hopper processors with the NVLink Switch System, giving the DGX GH200 access to 144TB of memory. That makes it the first supercomputer to break the 100TB barrier for GPU-accessible memory over NVLink, in effect a data center-size GPU. The system's Grace Hopper chips offer impressive memory …

TechWatch is the front line of JPR's information-gathering service, comprising current stories of interest to the graphics industry across the core areas of graphics hardware and software, workstations, gaming, and design.
