Posted: Jon Peddie 09.16.18
The GPU was originally developed to accelerate 3D games and rendering. Accelerating a game’s 3D models involved geometry processing, matrix math, and sorting; rendering involved polishing pixels and hiding some of them. Two distinct, dissimilar tasks, but both were served admirably by a high-speed parallel processor with a SIMD (single instruction, multiple data) architecture. Those processors were used for shading operations and became known as shaders. The early GPUs were applied to graphics add-in boards (AIBs) and served their users very well.
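To make the SIMD idea concrete, here is a minimal sketch in CUDA notation (which post-dates those early shaders, but expresses the same model): one instruction stream, a simple brightness adjustment, executes in lockstep across thousands of pixels, with each thread supplying different data. The kernel name and parameters are illustrative only, not any particular GPU’s shading hardware.

```cuda
#include <cuda_runtime.h>

// Illustrative SIMD-style "shader": every thread executes the same
// instruction stream, each on a different pixel (the "multiple data").
__global__ void brighten(unsigned char* pixels, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's pixel index
    if (i < n) {
        float v = pixels[i] * gain;                 // same math, different data
        pixels[i] = v > 255.0f ? 255 : (unsigned char)v;
    }
}

// Launch with one thread per pixel, in blocks of 256 threads:
//   brighten<<<(n + 255) / 256, 256>>>(d_pixels, n, 1.2f);
```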
It didn’t take long for the mass-produced GPU, which enjoyed the same economy of scale as the ubiquitous x86 processor, to be recognized as a highly cost-effective processor with massive compute density. It was soon applied as a compute accelerator, and apart from an awkward programming interface that only a coder could love, it exceeded the expectations of both users and suppliers. GPUs ultimately found their way into the top 10 of the TOP500 supercomputer list, year after year.
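Today that compute-accelerator pattern looks roughly like the sketch below: the CPU copies data into the GPU’s memory, launches a kernel across thousands of threads, and copies the results back. This uses the modern CUDA runtime API for illustration; the early, “awkward” interfaces instead forced such computations through graphics APIs as textures and pixel shaders.

```cuda
#include <cuda_runtime.h>

// SAXPY (y = a*x + y), the classic accelerator "hello world".
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

void run_saxpy(int n, float a, const float* h_x, float* h_y)
{
    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));             // allocate device memory
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y); // offload the math
    cudaMemcpy(h_y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_x);
    cudaFree(d_y);
}
```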
GPUs were also applied to image-processing workloads in high-end, ultra-high-resolution cameras, robotic cameras, and smartphone cameras. That in turn led to the application of GPUs in machine learning and AI, for both training and inference.
And it didn’t stop there. GPUs were placed in datacenter servers for bursty projects such as film rendering as a service from the merchant cloud providers. That led to the idea of making a remote GPU a virtual GPU, bringing the power of a big (and usually expensive) GPU to users who need that power only occasionally, or who simply don’t have the budget or space for a powerful local GPU.
GPUs then found their way into x86 CPUs, as well as ARM-based SoCs, in the form of shared-memory integrated GPUs.
As laptops became notebooks, thin and light, the space, power, and heat dissipation needed for a powerful GPU became problematic. Experiments were tried with extending PCIe, the high-speed interconnect used by GPUs (introduced by Intel in 2004 and used for connecting peripherals such as AIBs in desktops), outside the chassis, but the complexities of cabling, connectors, and line drivers proved too expensive and too cumbersome to be practical for thin-and-light notebooks.
Then USB-C/Thunderbolt was introduced and changed the equation. PCIe signals could now be tunneled across a low-cost, high-bandwidth cable and connector (Thunderbolt 3 carries up to 40 Gbit/s, enough for a PCIe 3.0 x4 link), making the external AIB/GPU a practical docking option for thin-and-light notebooks.
The GPU has been used in so many configurations and applications that it has become necessary to use a prefix to designate which type of GPU and application one is referring to, and so we have the following (a short code sketch after the list shows how software can tell them apart):
- dGPU — the basic, discrete (stand-alone) processor, which has always had its own private high-speed (GDDR) memory. dGPUs are applied to AIBs and to system boards in notebooks.
- iGPU — a scaled-down version with fewer shaders (processors) than a discrete GPU, which shares local (DDR) RAM with the CPU.
- vGPU — an AIB with a powerful dGPU, located remotely in the cloud or in a campus server.
- eGPU — an AIB with a dGPU located in a stand-alone cabinet (typically called a breadbox) and used as an external booster and docking station for a notebook.
[Diagram: schematic layouts of the dGPU, iGPU, vGPU, and eGPU configurations.]
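Software can discover which of these GPU flavors it is talking to at run time. Below is a minimal sketch using the CUDA runtime’s device-query API; the `integrated` property distinguishes an iGPU sharing DDR with the CPU from a dGPU with private GDDR (a vGPU or eGPU simply shows up as a discrete device). The printed labels are our own shorthand, not CUDA terminology.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);                  // how many CUDA-capable GPUs?
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("GPU %d: %s, %s, %zu MB memory\n",
               i, p.name,
               p.integrated ? "iGPU (shared DDR)" : "dGPU (private GDDR)",
               p.totalGlobalMem >> 20);
    }
    return 0;
}
```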
GPUs are in PCs in the form of dGPUs and iGPUs, and often both are present in the same PC at the same time.
GPUs are in smartphones and tablets as part of an SoC.
GPUs are in today’s game consoles, and they are being integrated into automobiles for entertainment systems, customizable dashboards, and the exciting world of autonomous driving.
GPUs power supercomputers, servers, cameras, scientific instruments, airplane and ship cockpits, robots, TVs, digital cinema projectors, visualization, simulation, VR and AR systems, and various toys and home security devices.
And it all started because there was a need and demand for faster, more realistic games. But the GPU market is far from a game: it is a mission-critical market with high demands, high stakes, and extraordinary development and advancement that has exceeded Moore’s law by orders of magnitude.
Look around: how many GPUs do you think are in your life? Probably more than you’d imagine.