Intel’s 9th gen has the data center covered

With 25 new products coming, Intel is going for breadth

Jon Peddie

Intel used the announcement of its new Cascade Lake-based Xeon Platinum 9200 to show off everything it is bringing to the data center: multiple SKUs and a range of supporting products intended to give customers the ability to build custom server applications covering HPC, AI, edge, memory, and super-sized particle-chasing servers. Intel has been transforming itself from a PC-centric company into a data-centric company, and we think it's mission accomplished.

Intel sees the data center as a $200 billion silicon market opportunity with a 9% CAGR.

Bob Swan, CEO of Intel, said onstage that Intel is targeting a total available market of $300 billion for data-centric products, far beyond the size of the market for personal computer chips that Intel has traditionally played in.

From that top-line view, the company spent two days sharing its vision and developments with the analyst community, using the new 9200 as the keystone of its presentation.

The 9200 is a multi-chip version of the 14-nm 8200, with 56 cores running at up to 3.8 GHz using Turbo Boost, 40 lanes of PCIe Gen3, a 77-MB cache, and a TDP of 400 W. Making massive monolithic processors is far more costly than making many smaller ones, as AMD has proved, and Intel has adopted the same philosophy with the 9200.
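The economics behind that philosophy can be sketched with the classic Poisson defect-yield model: the fraction of defect-free dies falls exponentially with die area, so two half-size dies in a package beat one monolith. The defect density and die areas below are made-up illustrative numbers, not Intel figures:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability a die has zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only -- not Intel's actual defect density or die sizes.
d0 = 0.2          # defects per cm^2 (assumed)
big_die = 7.0     # one hypothetical monolithic 7 cm^2 die
small_die = 3.5   # two 3.5 cm^2 dies packaged together, as in the 9200's approach

print(f"monolithic die yield: {poisson_yield(d0, big_die):.1%}")
print(f"per-small-die yield:  {poisson_yield(d0, small_die):.1%}")
```

Because good small dies can be tested and binned before packaging, effective cost tracks the much higher per-die yield of the smaller chip.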

Intel’s Platinum 9200 consists of two die in a BGA package


Data can travel from either CPU to the other, or to memory, in a single hop. Each CPU runs 12 channels of DDR4 at 2933 MT/s.

Intel used a refined 14-nm process to squeeze a little more clock speed out of the cores and added DL Boost, a set of new instructions that speed up machine-learning inference.
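The heart of DL Boost is the AVX-512 VNNI instruction VPDPBUSD, which fuses an unsigned-8-bit-by-signed-8-bit multiply with a 32-bit accumulate, collapsing what previously took three instructions into one. A pure-Python sketch of the arithmetic (the function name and toy data are ours, for illustration):

```python
def vnni_style_dot(activations, weights):
    """Sketch of the arithmetic behind AVX-512 VNNI's VPDPBUSD:
    unsigned 8-bit activations times signed 8-bit weights, accumulated
    into a signed 32-bit sum. Real hardware performs four such
    multiply-adds per 32-bit lane in a single instruction."""
    acc = 0
    for a, w in zip(activations, weights):
        assert 0 <= a <= 255 and -128 <= w <= 127, "values must fit uint8/int8"
        acc += a * w
    # Clamp to the signed 32-bit range (the saturating variant,
    # VPDPBUSDS, does this in hardware).
    return max(-2**31, min(2**31 - 1, acc))

acts = [17, 200, 3, 99]   # quantized activations (uint8, made-up values)
wts  = [-5, 12, 7, -1]    # quantized weights (int8, made-up values)
print(vnni_style_dot(acts, wts))  # → 2237
```

Quantizing a trained model to int8 and running it through this path is where the claimed inference speedups come from.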

That alone could have been a major announcement, but Intel knows a processor is only as good as the data it can process. To ensure the beast gets well fed, the company also introduced its persistent DIMM-based memory, which it calls Optane DC (for data center). Optane DC is cache coherent with the processor, has two backup modules and ECC, offers an expected lifetime of over five years at 100% read-write cycles, and can provide up to 36 TB of memory when combined with traditional DRAM. However, both the Xeon and Intel's new Agilex FPGAs link to Optane memory via Intel's proprietary DDR-T protocol. That proprietary approach is not in keeping with Intel's declaration of openness, or with its recent announcement of CXL.
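In its App Direct mode, software reaches Optane DC with ordinary loads and stores through a memory map rather than block I/O (in practice via Intel's PMDK libraries). A rough sketch of that access pattern, using an ordinary file as a stand-in for a DAX-mounted persistent-memory namespace:

```python
import mmap
import os
import struct

PATH = "pmem_demo.bin"  # stand-in for a file on a DAX-mounted Optane namespace
SIZE = 4096

# Create and zero the backing file (a real deployment would use PMDK/libpmem).
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), SIZE)      # byte-addressable view, like App Direct
    struct.pack_into("<Q", pm, 0, 12345)  # a plain store: no read()/write() syscalls
    pm.flush()                            # analogous to flushing CPU caches to media
    value, = struct.unpack_from("<Q", pm, 0)
    pm.close()

print(value)  # → 12345
os.remove(PATH)
```

The point of the persistence is that, unlike DRAM, the stored value would survive a power cycle without ever being serialized through a storage stack.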

Intel’s Optane DC persistent memory provides a new capability for data center performance


Intel’s vice president of memory and storage, Alper Ilkbahar, says data is not only growing, it’s also getting warmer: the closer data sits to the CPU, the “warmer” it is, and the more useful, because it can be accessed faster. Data in the processor’s cache is as hot as it gets. The next level down is DIMM-based DDR4 (and soon DDR5), one more step down is Optane DC, and beyond that are SSDs, then HDDs, then tape, then punch cards, and finally stone tablets.
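Those tiers can be put in rough numbers. The latencies below are ballpark, order-of-magnitude figures from public sources, not Intel specifications, but they show the gap Optane DC is built to fill between DRAM and SSD:

```python
# Rough order-of-magnitude access latencies for the hot-to-cold hierarchy
# described above. Illustrative ballpark figures only, not Intel specs.
tiers = [
    ("CPU cache (L1-L3)",       1e-9),  # about a nanosecond
    ("DRAM (DDR4 DIMM)",        1e-7),  # ~100 ns
    ("Optane DC persistent",    3e-7),  # a few hundred ns
    ("NVMe SSD",                1e-4),  # ~100 microseconds
    ("HDD",                     1e-2),  # ~10 ms seek
    ("Tape",                    1e+1),  # seconds or more to mount and wind
]

for name, seconds in tiers:
    print(f"{name:<24} ~{seconds:.0e} s")
```

Each step down the hierarchy costs roughly two to five orders of magnitude in latency, which is why keeping hot data close to the CPU matters so much.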

The Intel SSD D5-P4326 (QLC 3D NAND) is an addition to the industry’s first-to-market PCIe QLC SSDs. The drive uses an innovative “ruler” form factor


Next, you also must get the data to that memory from other processors, and to do that Intel is introducing three tiers of NICs, topped by the 800 series, capable of 100 GbE. Not only does it have crazy-high speed, it can also provide private channels for high-priority data through Intel’s new Application Device Queues (ADQ) technology.

The Intel Ethernet 800 Series controllers can support speeds of up to 100 Gbps, which is 4× to 10× more server network bandwidth than many companies have deployed


The 700 series offers 40 GbE and the 500 series delivers 10 GbE. Who needs Mellanox? (That’s the Israel-based company Nvidia snatched out of Intel’s grasp for its Ethernet and InfiniBand technology.) Intel has shipped over 1.3 billion Ethernet ports. Before you send or receive that data you might want to ensure its quality, which means making sure it hasn’t been corrupted by viruses or hack attacks. Check, says Intel, showing its SGX Card, which brings Software Guard Extensions hardening to the data center. SGX has 17 new architecture instructions that applications can use to set aside private regions of code and data, preventing direct attacks on executing code or on data stored in memory.

For specialized applications like AI, Intel has all those bases covered too. Intel says the workload between training and inference was about 50-50 in 2018 and will stay at that ratio until at least 2022. The company thinks the market was worth $3.8 billion in 2018 and will grow to between $8 billion and $10 billion by 2022.

Intel’s estimation of the split in the AI market


For the training side, Intel offers Xeon, which the company claims is the number-one workhorse for AI training today; for more specialized needs it offers Nervana, as well as a new product from its FPGA group called Agilex. Inference offerings include the ever-popular Mobileye processors for automotive and other vehicles, as well as the Nervana and Movidius inference processors.

Agilex has a central core of programmable logic surrounded by small special-function blocks, and it can be used for both training and inference. Strangely, Intel did not include its Xe dGPU processors, nor did it include them among the processors associated with Agilex. Agilex has a cache-coherent processor bus linking it to Xeon CPUs and Optane DC.

Intel’s new Agilex FPGA with CXL, DDR5, and PCIe 5.0 interconnects


Agilex is Intel’s first fully in-house-designed FPGA and uses Intel’s 10-nm process. The new range of products is set to roll out for sampling later this year, and offers a mix of analog, digital, memory, custom I/O, and eASIC variations within a single platform.

Intel acquired eASIC last year. The team has developed FPGA-like design tools to produce structured ASICs. A structured ASIC is an intermediate step between an FPGA and a full ASIC, allowing quicker turnaround and lower production cost. Intel has been using eASIC technology in its custom Xeons since 2015.

The Intel Agilex FPGAs take advantage of Intel’s heterogeneous 3D packaging technology, which provides FPGAs with application-specific optimization.

Intel has learned what’s important to data center operators by working with its customers’ customers. The contribution we found most exciting is the one Intel has made to the next-gen Large Hadron Collider (LHC), considered one of the most significant scientific instruments of our time and credited with finding the Higgs boson, the so-called God particle. LHC collision data was being produced at approximately 25 petabytes per year. As of 2017, the LHC Computing Grid is the world’s largest computing grid, comprising over 170 computing facilities in a worldwide network across 42 countries.