Posted by Jon Peddie, 06.23.20
[Image: The Krell supercomputer (1956)]
There are 500 of them! Five hundred. Do I really give a hoot? If I told you there are five hundred top baseball players, do you think you'd give a… after I named number five or six? Hell, I'd bail out after number three. I mean really, what's a supercomputer ever done for me?
But because we’re an industry obsessed with numbers, and bigger is always, ALWAYS better, then we have to have contests and benchmarks, and winners and losers. So here are the top five winners in the supercomputer race. (Try to stay awake reading it.)
The Arm-based Fugaku was a big surprise. It got a benchmark score of 415.5 petaflops based on the Linpack (HPL) test, a measure of a system's floating-point computing power. The new system is installed at RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.
Fugaku, which means Mount Fuji in Japanese, pushed aside the Summit system, an IBM-built supercomputer sucking up megawatts at Oak Ridge National Laboratory (ORNL) in Tennessee, knocking it into second place.
Summit, which can only produce a measly 148.8 petaflops on HPL, has 4,356 nodes, each with two 22-core Power9 CPUs and six Nvidia Tesla V100 GPUs.
Fugaku, powered by Fujitsu's 48-core Arm A64FX SoC, has a peak performance of over 1,000 petaflops (1 exaflop), and no GPUs. What! No GPUs? How the hell can it work?
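For the curious, those node specs are enough for a back-of-envelope sanity check on Summit's number. A minimal sketch, assuming the commonly quoted ~7.8 teraflops of double-precision peak per Tesla V100 (a figure not given in this article) and ignoring the comparatively small CPU contribution:

```python
# Rough peak (Rpeak) estimate for Summit from its node specs.
# Assumption (not from the article): a Tesla V100 peaks at ~7.8 FP64 teraflops;
# the two Power9 CPUs per node contribute little, so they're ignored here.

NODES = 4356              # Summit nodes, per the TOP500 listing
GPUS_PER_NODE = 6         # Tesla V100s per node
V100_FP64_TFLOPS = 7.8    # assumed FP64 peak per GPU, in teraflops

peak_pflops = NODES * GPUS_PER_NODE * V100_FP64_TFLOPS / 1000  # TF -> PF
hpl_pflops = 148.8        # Summit's measured HPL (Rmax) score

print(f"GPU-only peak:  ~{peak_pflops:.0f} petaflops")
print(f"HPL efficiency: ~{hpl_pflops / peak_pflops:.0%}")
```

That lands around 200 petaflops of theoretical peak against 148.8 measured, roughly 70-something percent efficiency, which is about par for the course: HPL never reaches a machine's theoretical peak.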
OK (yawn), and then the number three superduper-computer is the Sierra system at Lawrence Livermore National Laboratory (LLNL) in California. It does have GPUs, four Nvidia Tesla V100s in each of its 4,320 nodes. This lackluster, multi-multi-million-dollar, US-taxpayer-funded machine can only deliver 94.6 petaflops using two Power9 CPUs per node.
We've gone this far, might as well finish. Number four, if you're still reading, is that sluggish old Sunway TaihuLight system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC). The system is powered entirely by Sunway 260-core SW26010 processors—again, no GPUs, what the hell were they thinking? Its HPL mark of 93 petaflops has remained unchanged since it was installed at the National Supercomputing Center in Wuxi, China, in June 2016. 93 petaflops, that is so 2015.
OK, one more and that’s it. Yes, folks, it’s the old Tianhe-2A (Milky Way-2A), a system developed by China’s National University of Defense Technology (NUDT). It can be cranked up to 61.4 petaflops using a hybrid design with Intel Xeon CPUs and custom-built Matrix-2000 coprocessors. It’s deployed at the National Supercomputer Center in Guangzhou, China.
Wow—that was pretty exciting. Wasn’t it?
Now, what do these machines do to justify the time and money spent on them?
Probably the closest thing we mere mortals can relate to is that supercomputers are used to provide weather forecasts. That's not because our government wants to help farmers or you on your commute, but because it wants to give the military an accurate forecast. The same way we got GPS.
Supercomputers are also used to help create new medicines, and in protein and virus research. Viruses like the one behind COVID-19. They can simulate things and events that are too dangerous, too fast, or too small to test directly, like what happens 5 picoseconds after a nuclear bomb is set off, or how fast a protein folds and eats whatever is in its grasp. New materials are examined, strategic scenarios tested, financial-trading and currency-exchange forecasts run, and pandemic models crunched.
You know, the kind of stuff you and I do every day in Excel—or maybe not.
Someone said that god lives in supercomputers and he and they are the only things that can see the truth. Hmm, maybe I said that.
And then there’s the national pride thing. No better way to spend a nation’s wealth than a good old race to be number one. If nation X has a faster supercomputer and can run nuclear explosion simulations faster than we can, why then they could…
But we could also crunch the reams of data spewing out of CERN faster too, and who knows what might come out of that? Maybe a new sub-sub subatomic particle named after some dead genius. Or maybe a cure for HIV, or Parkinson’s.
So yeah, we average Janes and Joes can't relate to something being computed hundreds of quadrillions of times in one second, but as mind-bending as those numbers are, they still, believe it or not, aren't good enough. So next year we'll be talking about, but not really understanding, the exascale supercomputers just commissioned.
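If you want to put Fugaku's 415.5 petaflops at human scale, a hedged bit of arithmetic helps. A quick sketch, assuming roughly 7.8 billion people on Earth (a 2020 estimate, not a figure from this article):

```python
# What does 415.5 petaflops mean at human scale?
FUGAKU_PFLOPS = 415.5
ops_per_second = FUGAKU_PFLOPS * 1e15   # 1 petaflop = 10^15 operations/second
world_population = 7.8e9                # assumed 2020 world population

per_person = ops_per_second / world_population
print(f"~{per_person / 1e6:.0f} million calculations per second, per human")
```

In other words, Fugaku does the work of every person on the planet doing tens of millions of calculations every second. And an exaflop machine would more than double that.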
We can't stop any more than we can stop building skyscrapers, better cars, bigger TVs, or rifle-shot drugs that kill cancer as an outpatient treatment.
Onward, may the FLOPS be with you.