CUDA

Nvidia’s massive Ampere GPU

Nvidia has been pushing the envelope on graphics chips since it integrated the geometry processor and pixel shader into one chip and called it a GPU, 21 years ago. The potential for GPUs vastly expanded in 2003, when a branch of development spun out of graphics applications that took advantage of the GPU's parallel processing for pure computation using … Read more
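The GPGPU idea this excerpt alludes to, using the GPU's many parallel lanes for general-purpose computation, can be sketched with a toy example. The Python below is purely illustrative (not any vendor API): it emulates the data-parallel model in which, conceptually, each GPU thread computes one output element.

```python
# Emulate the data-parallel GPGPU model: on a GPU, each index i would be
# handled by its own hardware thread; here the "threads" run in a loop.
def vector_add(a, b):
    out = [0.0] * len(a)
    for i in range(len(a)):      # one GPU thread per element, conceptually
        out[i] = a[i] + b[i]
    return out

print(vector_add([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [5.0, 7.0, 9.0]
```

The point of the model is that the per-element work is independent, which is exactly what lets thousands of GPU cores execute it simultaneously.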

AI accelerators and open software transform the computing landscape

Three years ago there were maybe six or fewer AI accelerators; today there are more than two dozen, and more are coming. One of the first commercially available AI training accelerators was the GPU, and the undisputed leader of that segment was Nvidia. Nvidia was already preeminent in machine learning (ML) and deep learning (DL) applications, and adding neural net acceleration was … Read more

Khronos standards for machine learning

At this year’s SIGGRAPH in Vancouver, the Khronos Group launched its new interoperability standard for neural networks. The Neural Network Exchange Format (NNEF) is an open, implementation-independent way to describe neural networks, designed to cut through the current tangle of framework-specific formats. The standard was released in provisional form in December 2017 and, after a period of … Read more
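To give a flavor of what an implementation-independent network description looks like, here is a small sketch in the style of the NNEF 1.0 textual syntax. The graph name, layer names, and tensor shapes are invented for illustration; consult the published Khronos specification for the authoritative grammar.

```
version 1.0;

graph net( input ) -> ( output )
{
    input  = external(shape = [1, 3, 224, 224]);
    filter = variable(shape = [32, 3, 5, 5], label = 'conv1/filter');
    conv1  = conv(input, filter);
    output = relu(conv1);
}
```

Because the format is framework-neutral, a description like this can be exported from one training framework and imported by another tool or inference engine without either side knowing about the other.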

When Super gets Charged—Nvidia’s HGX-2

Nvidia just won’t quit: it has built a double-decker, rack-mount server console of Volta AIBs that can be used for AI training, among other things. The hardcore specifications:

- AIBs: 16× Tesla V100s, providing 10,240 Tensor cores plus 81,920 CUDA cores, with 500 GB of GPU memory
- Performance: 2 petaFLOPS AI | 250 teraFLOPS FP32 | 125 teraFLOPS FP64
- NVSwitch communication channel powered … Read more
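The core counts in the spec sheet are the published per-GPU Tesla V100 figures multiplied across the 16 boards. A quick sanity check (the per-GPU numbers below are the standard Tesla V100 specs, assumed here rather than taken from the excerpt):

```python
# Sanity-check the HGX-2 totals from the per-GPU Tesla V100 figures.
num_gpus = 16
tensor_cores_per_gpu = 640    # Tesla V100 spec
cuda_cores_per_gpu = 5120     # Tesla V100 spec

total_tensor_cores = num_gpus * tensor_cores_per_gpu
total_cuda_cores = num_gpus * cuda_cores_per_gpu

print(total_tensor_cores)  # 10240, matching the quoted Tensor core count
print(total_cuda_cores)    # 81920, matching the quoted CUDA core count
```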