AI is reshaping what CPUs, NPUs, and edge SoCs actually do—and how much they matter. Intel is pushing deterministic, AI-ready edge processors with 180 TOPS in a single SoC. Meta just signed a multibillion-dollar deal for Amazon’s Graviton CPUs to run AI agents, signaling that GPUs no longer own the entire AI infrastructure stack. CPU demand is so hot that Intel sold binned edge-die chips that would normally have been scrapped. The AI silicon story is getting more complex, more distributed, and more interesting by the quarter.

Intel has advanced two separate but related processor families targeting the industrial edge, and the distinction between them matters.
The Intel Core Series 2 targets deterministic edge deployments—manufacturing lines, industrial controllers, and infrastructure systems where timing predictability outweighs raw throughput. Intel Time Coordinated Computing and time-sensitive networking deliver time-aware, repeatable execution cycles that general-purpose processors cannot guarantee. Against AMD’s Ryzen 9700X at equivalent power, the Core Series 2 delivers 2.5× better deterministic scheduling performance, 3.8× more predictable performance under load, and 4.4× lower maximum PCIe latency. Those aren’t abstract benchmarks; three Intel customers quantify the gains directly. Neurocle cuts inference latency 1.4× on deep learning inspection models for manufacturing defect detection. Saimos achieves a 2.3× thread-per-channel efficiency gain, fitting more camera analytics on the same hardware budget. Codesys increases virtual PLC density 1.6×, shrinking cabinet size and wiring complexity for industrial designs. Ten-year availability and backward compatibility close the argument for embedded customers that can’t retool platforms every product cycle.
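The time-aware execution model Intel describes can be sketched in miniature: schedule work against absolute deadlines on a fixed time grid so timing error never accumulates, then measure per-cycle overshoot as jitter. This is an illustrative Python sketch of the concept only; the 1 ms cycle time and the busy-wait are assumptions, not Intel TCC internals (a production controller would use something like POSIX `clock_nanosleep` with `TIMER_ABSTIME` on a real-time kernel):

```python
import time

CYCLE_NS = 1_000_000  # 1 ms control cycle (illustrative, not an Intel TCC figure)

def run_cycles(task, n_cycles):
    """Run `task` on a fixed time grid and record per-cycle overshoot (ns)."""
    jitter = []
    next_deadline = time.monotonic_ns() + CYCLE_NS
    for _ in range(n_cycles):
        task()
        # Wait until the absolute deadline rather than sleeping a relative
        # interval, so timing error does not accumulate across cycles.
        while (now := time.monotonic_ns()) < next_deadline:
            pass  # busy-wait; a real controller would block until the deadline
        jitter.append(now - next_deadline)
        next_deadline += CYCLE_NS
    return jitter

lat = run_cycles(lambda: None, 100)
print(max(lat))  # worst-case overshoot past the deadline, in ns
```

The worst-case value in `lat`, not the average, is the figure that matters here, which is why Intel quotes maximum PCIe latency: deterministic platforms compress the tail.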
The Intel Core Ultra Series 3 for Edge addresses a different problem entirely. Physical AI and agentic AI workloads—robotics, computer vision, generative inference, patient monitoring, loss prevention—need acceleration that fits a low-power envelope without a discrete GPU. The Core Ultra Series 3 delivers up to 180 TOPS of integrated AI acceleration from a single SoC combining up to 16 CPU cores, a dedicated NPU for low-power inference, and up to 12 Xe GPU cores for video analytics and high-throughput AI. That architecture runs Vision-Language models and Vision-Language-Action models in real-world edge conditions where cloud connectivity is unreliable or unacceptable.
The NPU tier framework emerging across the laptop and edge SoC market now defines three distinct performance bands. Below 40 platform TOPS sits the N1 Basic band: Intel Meteor Lake at 34 TOPS, Arrow Lake at 36 TOPS, and AMD Hawk Point at 39 TOPS all carry dedicated NPUs that nonetheless fall below the Copilot+ certification threshold. The 40–99 TOPS Copilot+ band covers AMD Ryzen AI 300 at 73–81 platform TOPS, AMD Ryzen AI 400 at an estimated 85–90 TOPS, and Qualcomm Snapdragon X Elite at 75 TOPS. Above 100 TOPS sits the N3 Advanced band: Intel Lunar Lake reaches 120 platform TOPS today, and Intel Panther Lake targets approximately 180 platform TOPS from H1 2026. A key insight: iGPU tier and NPU tier do not move together. Arrow Lake dominates commercial laptop volumes in 2025–2026 but sits in the N1 Basic band, while Lunar Lake reaches N3 Advanced despite occupying a midrange iGPU tier.
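The three bands reduce to a simple threshold lookup. A minimal Python helper, using only the TOPS figures quoted above (the parenthetical band descriptions are paraphrased from this framework, not official Microsoft or Intel terminology):

```python
def npu_band(platform_tops: float) -> str:
    """Classify a platform NPU into the three bands described above."""
    if platform_tops < 40:
        return "N1 Basic (below the Copilot+ threshold)"
    if platform_tops < 100:
        return "Copilot+ (40-99 platform TOPS)"
    return "N3 Advanced (100+ platform TOPS)"

# Example platforms and platform-TOPS figures from the text above.
platforms = {
    "Intel Meteor Lake": 34,
    "Intel Arrow Lake": 36,
    "AMD Hawk Point": 39,
    "Qualcomm Snapdragon X Elite": 75,
    "Intel Lunar Lake": 120,
    "Intel Panther Lake (target)": 180,
}
for name, tops in platforms.items():
    print(f"{name}: {npu_band(tops)}")
```

Note that the classifier takes only an NPU figure as input; as the text observes, iGPU tier is an independent axis and cannot be inferred from it.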
Meanwhile, CPU demand outside the edge is running at scarcity levels. Meta signed a multibillion-dollar, multi-year agreement to deploy tens of millions of AWS Graviton5 CPU cores for AI agent workloads, joining prior agreements with Nvidia, AMD, and Arm. The driver is agentic AI: agents performing multi-step tasks require CPU-class processing to orchestrate, schedule, and feed GPU execution. CPUs also anchor the post-training phase of large language model development, where pretrained models get fine-tuned toward specific objectives. Intel confirmed the demand environment directly: In Q1 2026, the company sold de-spec’d (binned) edge die and previously shelved legacy SKUs that had no market, turning wafer scrap into incremental revenue. Customers absorbed everything available. Intel does not expect the same finished-goods opportunity in Q2, meaning forward volume growth depends entirely on increasing fab output.
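The CPU-orchestrates, GPU-executes split behind agentic demand can be made concrete with a sketch. Everything here is hypothetical scaffolding (`gpu_infer` stands in for an accelerator-backed inference call, `tool_call` for CPU-bound agent work such as search, database lookups, or running code); the point is the shape of the workload: one inference round-trip, then CPU-side fan-out of multi-step tasks:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: in a real stack, gpu_infer() would call an
# accelerator-backed serving endpoint, and tool_call() would perform
# CPU-bound agent work.
def gpu_infer(prompt: str) -> str:
    return f"plan for: {prompt}"

def tool_call(step: str) -> str:
    return f"done: {step}"

def run_agent(goal: str):
    """Multi-step agent loop: the CPU plans, schedules, and fans out work;
    only the inference call touches the accelerator."""
    plan = gpu_infer(goal)                    # one GPU round-trip
    steps = [f"step-{i}" for i in range(3)]   # CPU decomposes the plan
    with ThreadPoolExecutor() as pool:        # CPU-side concurrent fan-out
        results = list(pool.map(tool_call, steps))
    return plan, results

plan, results = run_agent("restock inventory")
```

In this shape the accelerator is idle while the CPU fans out and collects tool results, which is exactly why agent fleets pull on CPU cores rather than only on GPUs.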
However, Intel was careful to manage investor and customer expectations, saying, “But what we were able to do in the first quarter was go through finished goods inventory and find opportunities to sell product we didn’t think we would be able to move. It was either de-spec’d product or it was legacy product we had shelved, and then worked with customers and found opportunities for them to leverage that technology in their system. So, that helped out a lot. I’m not sure we have that benefit in the second quarter. So, obviously, we will scrutinize our finished goods inventory to see if we can find some opportunities. But for the most part, what we’re relying on from a volume growth perspective, Q2 versus Q1, is going to be increasing supply.”
What do we think?
The CPU renaissance in AI is real and accelerating. Meta choosing AWS Graviton5 over its own infrastructure for agent workloads signals that no single architecture handles every AI task efficiently. Intel’s edge SoC strategy—pairing deterministic scheduling with integrated NPU acceleration—addresses a genuine gap that discrete GPU deployments cannot fill economically. The binned-die sellout is the clearest demand signal of the cycle: When customers buy scrap chips, supply is the constraint, not demand.
The convergence of agentic AI demand, edge NPU deployment, and CPU scarcity marks a genuine inflection point in how AI silicon gets allocated. For a decade, GPU allocation determined AI capability. That model is fragmenting. Agents need CPUs. Edge deployments need deterministic SoCs. Inference at scale needs memory bandwidth. The inflection point is not a single product or platform—it is the disaggregation of AI compute into at least four distinct workload types, each pulling on different silicon, different vendors, and different supply chains simultaneously.

Charge! To the Edge!
Intel is among the 141 AI processor companies looking for customers in the AI market. Together, those companies offer 202 AIP solutions, and, as Intel’s sell-through indicates, the demand is there. You can follow this enormous and changing market with our AI Processor Tracker Service or one of the other reports in our AIP library.
IF YOU FOUND THIS INTERESTING, TELL YOUR FRIENDS. IF IT MADE YOUR HEAD HURT, TELL YOUR ENEMIES.