Nvidia is doing to HPC/AI what they did to gaming

Jon Peddie

Revolutionizing it

I could watch Jensen Huang read the damn phonebook and be enthralled. He’d make me willingly, and enthusiastically believe that not only did Nvidia invent the phone and the phone book, but all the people in it as well. And, as soon as the presentation was over, I’d rush out and buy a phone book, and a phone—but only if it had a little green eye on it.

At SC17, Captain Huang, fashion darling that he is in his latest black leather jacket, armed only with a hand mic, paced the stage and showed us the powerful but power-frugal DGX-1 replacing six racks of conventional servers while delivering the same or more TFLOPS.

Nvidia is bringing supercomputing to the masses. Not the common folks like you and me, but the masses of scientists, engineers, mathematicians, and researchers. Those smart guys struggle with funding and cycle allocations to solve big problems. Imagine how fast science and society could progress if researchers could get enough machine time on a super-powerful machine. They could either get the job done in their lifetime, or expand the resolution and accuracy parameters to get even better answers.

What if they could just tap into a supercomputer without even knowing where it was? Well, they can now, because Nvidia’s Voltas and/or DGXs have been installed in all the major clouds worldwide. Amazon Web Services led the way in late October with an announcement that it would offer Volta in its cloud. Since then, Microsoft has announced it will offer it in the Azure cloud. In fact, Nvidia says every major cloud is on board: Alibaba, Baidu, Oracle, and Tencent Cloud have also announced cloud services based on Volta. Not only that, but you can get access to a Volta for as little as $3 an hour, less than a café latte.
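To put that $3-an-hour figure in rough perspective, here is a back-of-the-envelope sketch in Python. The hourly rate comes from the article; the job length and the on-premises hardware price are illustrative assumptions, not figures from Nvidia.

```python
# Rough cost comparison for renting a cloud GPU vs. buying dedicated hardware.
# Only HOURLY_RATE is from the article; the other numbers are assumptions.
HOURLY_RATE = 3.00          # cloud Volta instance, per the article
JOB_HOURS = 40              # assumed length of one training/analysis run
ON_PREM_COST = 150_000.00   # assumed price of a dedicated GPU appliance

cloud_cost = HOURLY_RATE * JOB_HOURS
runs_before_breakeven = ON_PREM_COST / cloud_cost

print(f"One {JOB_HOURS}-hour run in the cloud: ${cloud_cost:,.2f}")
print(f"Runs before buying your own hardware breaks even: {runs_before_breakeven:,.0f}")
```

Under those assumptions, a researcher could rent time for hundreds of runs before owning the machine starts to pay off, which is the point of the cloud pitch.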

And if you don’t want to use the cloud and would rather own your own supercomputer, then you can call Dell EMC, Hewlett-Packard Enterprise, IBM, Lenovo, or Huawei Technologies, and probably MicroStar.

“We’re in every cloud, every single server, every data center,” said Huang. That’s one hellofa statement.

One of the major applications that will exploit all these supercomputers is deep learning and AI. AI, as you know, was just invented last week, and in that short amount of time dozens of specialized training programs have been developed by researchers, some of which can only be loaded and run by advanced researchers because the programs are so complicated. That’s a speed bump on the road to progress, so Nvidia sought a way to accelerate the process. After all, what good is a supercomputer if it’s just idling while the researcher compiles, links, and does other mundane setup operations instead of crunching and then analyzing numbers?

The answer is to stuff it. Yes, stuff it in a container. Nvidia took it upon themselves to collect the most popular, and a few esoteric AI and DL programs and bind them up in a simple to use container, with common I/O and file links. Not trivial or fast work, the effort has been a major investment using expensive engineer time, and a gift to the world from Nvidia. Just as an FYI, I recently participated in a discussion where we were told about a customer who spent six-months getting the software set up that took less than a day to run—that’s a terrible duty-cycle. Containers will take that six months and cut it down to six hours or less. Now extrapolate that – if a researcher could only run two or three analysis a year, now (assuming they have the dataset) they could run two or three a week. What would that do for society?
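For a flavor of what "ready to run" means inside one of these prebuilt containers, here is a minimal sketch of the sanity check a researcher might start with. PyTorch is used purely as an illustrative framework; none of this code comes from Nvidia’s container documentation.

```python
# Minimal sketch: inside a prebuilt deep-learning container, the framework,
# CUDA libraries, and drivers are already wired together, so "setup" shrinks
# to a quick check that the GPU is visible before real work begins.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Found GPU: {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")
    print("No GPU visible; falling back to CPU")

# A trivial matrix multiply to confirm the accelerator actually does the work.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
print("Result checksum:", c.sum().item())
```

The point is that none of the compile-and-link drudgery the previous paragraph describes survives; the container either reports a working GPU or it doesn’t.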

Jensen explains how containers will speed things up (Nvidia)

Nvidia is offering an end-to-end solution for researchers and scientists. They can take their datasets, stuff them into a container on a DGX or a server full of Voltas, crunch the data, and then render a beautiful visualization on an Nvidia graphics board. No other company can, or does, offer this range of capability to the scientific community, and at prices that are a fraction of what a conventional solution would have cost less than two years ago.

Go get a phone book and you can learn all about it.