News

Intel AI at SC23

Gaudi, Max, and Xeon will find your tumors and write your term papers.

Jon Peddie

At the SC supercomputing conference this year (SC23) in Denver, Intel showed high-performance computing acceleration solutions for AI workloads, including the GPU Max Series, Gaudi2 AI accelerators, and Xeon processors. The company highlighted the Aurora generative AI project, which ran a 1 trillion-parameter GPT-3 LLM on the Aurora supercomputer using Intel's Max Series GPUs. It also showcased accelerated scientific applications and outlined plans for Gaudi3 AI accelerators and Falcon Shores.

What do we think? The Aurora supercomputer is one of the wonders of the world and will deliver astonishing results for years to come. It is a major showcase for Intel’s GPUs and positions the company as a world-class AI GPU accelerator supplier. Aurora looks to be the gift that keeps on giving for Intel.

Intel advances scientific research and performance for new wave of supercomputers

During SC23 in Denver, Intel presented high-performance computing (HPC) acceleration solutions for AI workloads on Intel Data Center GPU Max Series, Intel Gaudi2 AI accelerators, and Intel Xeon processors. Intel, in collaboration with Argonne National Laboratory, shared updates on the Aurora generative AI (GenAI) project, including claims of a 1 trillion-parameter GPT-3 LLM running on the Aurora supercomputer. That achievement, said Intel, was made possible by the Max Series GPU and the system capabilities of the Aurora supercomputer. Intel and Argonne also showcased the acceleration of scientific applications from the Aurora Early Science Program (ESP) and the Exascale Computing Project, and outlined the road map for Intel's Gaudi3 AI accelerators and Falcon Shores.

Intel's 4th generation Xeon processor at SC23. (Source: Intel)

Intel says generative AI for science, along with the latest performance and benchmark results, underscores the company’s ability to deliver tailored solutions to meet the specific needs of HPC and AI customers.

The company claims its software-defined approach, built on oneAPI and AI-enhanced HPC toolkits, helps developers port their code across architectures to accelerate scientific research. Additionally, says Intel, Max Series GPUs and CPUs will be deployed in multiple supercomputers that are coming online.
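The portability claim is easier to picture with a concrete, if hypothetical, example. The sketch below is not Intel sample code; it assumes the intel_extension_for_pytorch package from Intel's oneAPI AI toolkit and the "xpu" device string it registers for Intel GPUs, and it simply falls back to the CPU when no Intel GPU is present.

```python
# Minimal sketch of device-portable PyTorch code on Intel's oneAPI AI stack.
# Assumption: intel_extension_for_pytorch is installed and exposes Intel GPUs
# as the "xpu" device; otherwise the same script runs unchanged on the CPU.
import torch

try:
    import intel_extension_for_pytorch as ipex  # registers the "xpu" device
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
except ImportError:
    ipex = None
    device = "cpu"

model = torch.nn.Linear(1024, 1024).to(device).eval()
if ipex is not None:
    model = ipex.optimize(model)  # apply Intel-specific kernel optimizations

x = torch.randn(64, 1024, device=device)
with torch.no_grad():
    y = model(x)
print(f"Ran on {device}, output shape {tuple(y.shape)}")
```

The point of the pattern is that the model code itself never changes; only the device selection does, which is the kind of portability Intel is claiming for its toolkits.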

Intel's Gaudi2 AI accelerator. (Source: Intel)

Intel also reported the latest results from Argonne National Laboratory's Aurora supercomputer, which Intel processors power. The Aurora GenAI project is a collaboration among Argonne, Intel, and Hewlett Packard Enterprise (HPE). It has been commissioned to create state-of-the-art foundational AI models for scientists. The models will be trained on scientific texts, code, and science datasets at scales of more than 1 trillion parameters from diverse scientific domains. Using the foundational technologies of Nvidia's Megatron language model (named after the evil robot in Transformers) and Microsoft's DeepSpeed library, the GenAI project will serve multiple scientific disciplines, including biology, cancer research, climate science, cosmology, and materials science.
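For context on what those foundational technologies look like in practice, the sketch below shows the generic DeepSpeed training pattern: the library wraps a PyTorch model in a distributed engine that handles data parallelism, ZeRO sharding, and mixed precision. The toy model, batch size, and configuration values are illustrative placeholders, not details of the Aurora GenAI setup.

```python
# Rough sketch of the DeepSpeed training pattern. Launch with the DeepSpeed
# launcher (e.g., `deepspeed train_sketch.py`), which sets up the ranks across
# nodes. Model and config values here are placeholders, not Aurora settings.
import torch
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},  # shard optimizer state across ranks
    "bf16": {"enabled": True},
}

# deepspeed.initialize returns an engine that replaces the usual
# loss.backward()/optimizer.step() calls in the training loop.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# One toy training step with a synthetic batch and a dummy loss.
x = torch.randn(8, 1024, device=engine.device, dtype=torch.bfloat16)
loss = engine(x).float().pow(2).mean()
engine.backward(loss)
engine.step()
```

Scaling this pattern to trillion-parameter models is where Megatron-style model parallelism and the node counts discussed below come in.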

The Intel Max Series GPU and the Aurora supercomputer's system capabilities, says Intel, can handle 1 trillion-parameter models with just 64 nodes, far fewer than would typically be required. Argonne National Laboratory ran four instances on 256 nodes, demonstrating the ability to run multiple instances in parallel on Aurora. That, says Argonne, paves the way to scale the training of trillion-parameter models more quickly, with trillions of tokens, on more than 10,000 nodes.