Intel and Google have deepened their partnership around Xeon CPUs and custom IPUs, reinforcing the idea that CPUs remain central to modern AI infrastructure. We think hyperscalers want closer influence over silicon design, but suppliers still want products they can sell beyond a single customer.

Intel’s Lip-Bu Tan and his new bestie, Google’s Sundar Pichai.
Intel and Google have expanded their AI infrastructure partnership, with Google continuing to deploy Intel Xeon processors across cloud infrastructure while the two companies deepen co-development of custom infrastructure processing units, or IPUs. Intel said the multiyear collaboration will span multiple Xeon generations and is aimed at improving performance, efficiency, and total cost of ownership across Google’s infrastructure.
Intel and Google are making the case that AI infrastructure is becoming more heterogeneous. In that model, accelerators still matter, but so do CPUs for orchestration, data processing, and system-level control. IPUs then take on networking, storage, and security off-load, reducing the burden on the host CPU.
That matters because CPUs are very much back in the AI conversation. As AI shifts from pure model training toward more complex agentic workflows, demand for general-purpose compute is rising again.
The announcement lands in the shadow of Arm’s AGI CPU launch in March. Arm positioned that product as a purpose-built CPU for agentic AI infrastructure, claiming more than 2× performance per rack versus x86, and said Meta is its lead partner and co-developer. Arm also said more than 50 companies are supporting its broader move into silicon products.
Intel, for its part, is not letting Arm’s framing dominate the conversation. In comments reported by The Register, Intel Data Center Group Chief Kevork Kechichian questioned whether this supposedly new class of CPU is what hyperscalers or enterprises actually need, and argued that by Arm’s own logic, Intel already ships products with similar characteristics, including high core counts and no SMT.
With this deal, Intel is asserting that the next phase of AI infrastructure will still leave room for x86, especially where buyers value continuity over a clean-sheet architectural shift. Google’s continued use of Xeon 6 in its C4 and N4 instances helps make that case.
What Google gets is influence over future Xeon generations and deeper collaboration around custom IPUs, which is valuable to a