The suppliers had a great year, and so did the customers
GPUs have found their way into just about every corner of science, industry, design, automotive, and entertainment. They are the heart of the PC, the soul of the SoC, and the essence of AI and deep learning. It is truly amazing how many problems can be solved with a SIMD architecture. And yet, for all its power and performance, there are those who would deny the GPU its rightful place and accomplishments out of jealousy, failure, and spite. But denied it cannot be: too many of the things our lives today depend on, need, and want are delivered by a GPU not to pay it respect.
The purveyors of GPUs and GPU designs are few today, just nine: Qualcomm, Intel, Nvidia, and AMD design and manufacture GPUs, while ARM, Imagination, VeriSilicon, DMP, and ThinkSilicon offer GPU designs.
The GPU has evolved since its introduction in the late 1990s from a simple programmable geometry processor to an elaborate sea of 32-bit floating-point processors running at multi-gigahertz speeds. The software supporting and exploiting the GPU, the programming tools, APIs, drivers, applications, and operating systems, has also evolved in function, efficiency, and, unavoidably, complexity. The manufacturing of GPUs approaches science fiction, with features below 10nm next year. They're on a glide path to 3nm, and some think even 1nm. Moore's law is far from dead, but it is getting trickier to tease out of the genie's bottle as we drive into atomic-scale realms that can only be modeled, not seen.
The notion of smaller platforms (e.g., mobile devices) or integrated graphics (e.g., a CPU with a built-in GPU) "catching up" to desktop PC GPUs is absurd: Moore's law works for all silicon devices. Intel's best integrated GPU today is capable of producing 1152 GFLOPS, roughly equivalent to a 2010 desktop PC discrete GPU (about 1300 GFLOPS).
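For readers curious where peak-GFLOPS figures like these come from, the following sketch shows the standard back-of-the-envelope derivation: shader ALU count, times two FLOPs per clock (a fused multiply-add counts as two operations), times clock speed. The specific ALU counts and clocks below are illustrative assumptions chosen to land near the figures quoted above, not specifications taken from this article.

```python
# Sketch of how theoretical peak FP32 GFLOPS figures are typically derived.
# The shader counts and clock speeds here are illustrative assumptions,
# not figures from any specific product datasheet.

def theoretical_gflops(fp32_alus: int, clock_ghz: float) -> float:
    """Peak GFLOPS = ALUs x 2 FLOPs per clock (one fused multiply-add) x clock (GHz)."""
    return fp32_alus * 2 * clock_ghz

# A hypothetical integrated GPU: 576 FP32 ALUs at 1.0 GHz
integrated = theoretical_gflops(576, 1.0)    # -> 1152.0 GFLOPS

# A hypothetical 2010-era discrete GPU: 480 FP32 ALUs at 1.35 GHz
discrete = theoretical_gflops(480, 1.35)     # -> 1296.0 GFLOPS

print(f"integrated: {integrated:.0f} GFLOPS, discrete: {discrete:.0f} GFLOPS")
```

Real products rarely sustain these peaks, since memory bandwidth and scheduling overhead intervene, but the formula is why shader count and clock speed dominate marketing comparisons.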
We acquire 90% of the information we digest through our eyes. Naturally, we need abundant information-generating devices to feed our unquenchable appetite for information. The machines we're building have a similar demand for information, though not always visual information. In some cases, such as robots and autonomous vehicles, that's exactly what they need. The GPU can not only generate pixels, it can also process photons captured by sensors.
Scalability is the other big advantage the GPU has over most processors. To date there doesn't seem to be an upper limit on how far a GPU can scale. The current crop of high-end GPUs has in excess of 3,000 32-bit floating-point processors, and the next generation will cross 5,000. That same design, if done properly, can be made with as few as two, or even one, SIMD processor. The scalability of the GPU underscores the notion that one size does not fit all, nor does it have to. For example, Nvidia adopted a single GPU architecture for the Logan design and used it in the Tegra K1. AMD used their Radeon GPUs in their APUs.
It's probably safe to say the GPU is the most versatile, scalable, manufacturable, and powerful processor ever built. Nvidia, which claims to have invented the term GPU (they didn't, 3Dlabs did in 1993), built their first device with programmable transform and lighting capability in 1999, at 220nm. Since then the GPU, from all suppliers, has ridden the Moore's law curve into ever smaller feature sizes, and in the process delivered exponentially greater performance. Today's high-end GPUs have over 15 billion transistors. The next generation is expected to feature as much as 32 GB of second-generation high-bandwidth memory (HBM2) VRAM and exceed 18 billion transistors.
GPUs have been one of the best, most significant developments in our lifetime.
A holiday gift
This is the last issue of the year, but we're planning great stuff for subscribers in 2017. First up, we're putting together a retrospective of GPU developments over the past year, which we'll offer to you free of charge. We're also spending the holidays readying new reports for 2017, along with a brand-new look. Stay tuned; we can't wait to tell you about it.