This week the world was introduced to the next, and possibly most significant, inflection point in the PC industry since dual-core—the embedding of graphics processors into the CPU.
The integration of peripheral components into the CPU has been an inevitable march, so it should surprise no one that graphics is being integrated into the main processor.
However, over the past eight years the graphics processing unit (GPU) has grown more complex than the CPU, exceeding it in transistor count and matching or exceeding it in die size. Many people thought that the two could never cohabitate, given the GPU's complexity, its power and cooling demands, and its faster design cadence relative to the CPU. And true, the first concepts and test implementations did compromise GPU functionality in order to accomplish integration. However, with four times the number of transistors possible in the same space as the previous manufacturing node, the compute density demanded by GPUs suddenly becomes not just feasible but practical—although not necessarily easy.
Two routes: EPG and HPU
The leading companies integrating graphics within the CPU are AMD and Intel. VIA, IBM, and other x86-compatible suppliers haven't given any indication of their efforts or interest in such integration.
In the PC and adjacent markets (embedded, POS, industrial, etc.), AMD and Intel are taking different routes. AMD is pursuing massive GPU integration; Intel is pursuing an evolutionary graphics integration.
Heterogeneous Processor Unit—HPU. AMD is betting on being its own best competition in the low-end and midrange discrete GPU market. Leveraging its compute-density success with discrete GPUs, AMD will transfer that technology to the CPU in what it calls an Accelerated Processor Unit—APU; what we have designated an HPU.
Embedded Processor Graphics—EPG. Intel believes consumers will not make a conscious graphics purchasing choice in the market below the Performance segment, and that Performance buyers will choose the best CPU plus an add-in board (AIB). For the non-graphics functions that a GPU could aid in consumer systems (e.g., transcoding), Intel will embed function-specific blocks in its Sandy Bridge processor, due to debut in the first quarter of 2011. Traditionally in electronics and semiconductors, a dedicated function, known as a state machine, can offer a faster, less expensive solution than a programmable device. Therefore, Intel will not provide a massively parallel processor in its CPUs (as AMD is doing) and instead will provide its traditional "good-enough" graphics approach with embedded processor graphics—an EPG. This philosophy has served Intel well in the past, and so it will refer to its chips as "Processor Graphics," or PG. Intel will not recognize the terms EPG and HPU, or AMD's marketing term "APU."
As for GPU-compute functions, Intel believes the people who employ such systems represent a small market and will use a high-end AIB rather than a low-end or midrange GPU embedded in a processor. For less demanding compute functions that might benefit from a parallel processor architecture—such as encryption, video and photo filtering, or even physics—Intel will develop specialized, API-callable function blocks. Intel thinks this will save silicon area, power, and cost while offering equal or superior performance.
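The trade-off Intel is betting on can be sketched in software terms. In this toy Python analogy (ours, not Intel's API), the same pixel-brightening job is done two ways: a general-purpose per-element loop, standing in for a programmable parallel kernel, and a single call to a dedicated lookup primitive, standing in for a fixed-function block. The function names and the brightness amount are our illustrative assumptions.

```python
# Toy analogy for fixed-function vs. general-purpose processing (our
# illustration; not Intel's actual hardware or API).
data = bytes(range(256)) * 4000  # stand-in for a frame of 8-bit pixels

def brighten_loop(pixels, amount=40):
    """General-purpose path: a per-pixel loop, like a programmable kernel."""
    return bytes(min(p + amount, 255) for p in pixels)

# "Fixed-function" path: precompute the operation once, then apply it with a
# single dedicated table-lookup primitive (bytes.translate), analogous to a
# hardwired state machine doing one job very fast.
_TABLE = bytes(min(i + 40, 255) for i in range(256))

def brighten_table(pixels):
    return pixels.translate(_TABLE)

# Both paths produce identical output; the dedicated path is far faster
# because the per-element work was baked in ahead of time.
assert brighten_loop(data) == brighten_table(data)
```

The dedicated path wins on speed and (in hardware) on silicon and power, but it can only ever do the one job it was built for—which is exactly the flexibility argument AMD makes for a programmable parallel array.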
These divergent paths further illustrate the inflection point the industry is about to experience. Whereas AMD sees the world as needing a powerful SIMD parallel processor array, Intel sees the normal, historical path of natural evolution of functions—just as the numerical processor, the cache controller, the memory controller, and soon the transcoder have been assimilated into the CPU, so will other acceleration functions, but in a semi-dedicated fashion rather than as a complex general-purpose parallel processor.
And thus the battle begins
An HPU is a processor capable of performing massively parallel operations while simultaneously performing complex high-speed scalar operations on multiple cores.
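That two-sided definition maps onto a recognizable workload split. The Python sketch below (our illustration; the function names and numbers are invented for the example) shows a data-parallel stage, where every element is independent and maps naturally onto a SIMD array, feeding a scalar stage whose branch-heavy, data-dependent control flow suits fast scalar cores—the two kinds of work an HPU is meant to handle side by side.

```python
# Toy illustration of the parallel-plus-scalar split an HPU targets
# (our sketch, not AMD's API).

def parallel_stage(samples):
    """Data-parallel: each element is independent of the others,
    so this maps naturally onto a wide SIMD processor array."""
    return [s * s for s in samples]

def scalar_stage(energies, threshold=100):
    """Scalar: sequential, data-dependent control flow with an early
    exit—work that suits fast general-purpose cores."""
    total = 0
    for e in energies:
        if e > threshold:
            break  # decision depends on values seen so far
        total += e
    return total

# Squares are [9, 25, 49, 121, 169]; the scalar stage sums 9+25+49
# and stops at 121, which exceeds the threshold.
result = scalar_stage(parallel_stage([3, 5, 7, 11, 13]))
assert result == 83
```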
The HPU could truly revolutionize the PC and associated industries, and many new applications will come forth to take advantage of this new low-cost, powerful, multi-faceted compute engine.
The adaptation, understanding, and implementation of GPU-compute applications, both new and retrofitted, are just beginning to take off. The primary beneficiaries of this new wave of computational capability have so far been the suppliers of large-scale discrete GPUs. Lightweight consumer applications that benefit from parallel processing—transcoding, image processing and enhancement, and of course games—will see immediate benefit from HPUs, and that will have a snowball effect on the market and its suppliers.
However, with two competing solutions, EPG and HPU, software developers face the dilemma of having to support both approaches, and given the costs of software development and maintenance, many will choose not to. The HPU will have to offer the ISV community significant and compelling reasons to gain acceptance.
A new report
We've just generated a market study on the EPG and HPU and their impact on IGPs and discrete GPUs, titled Opportunities, Threats, and Changes Created by the EPG & HPU—Tension at the Inflection Point. We have more information about it on our website.