News

Semidynamics and SiPearl Aim for Sovereign AI at Rack Scale

Europe wants its own AI stack

Jon Peddie and David Harold

Semidynamics and SiPearl have announced a strategic cooperation to develop a European rack-scale AI compute platform for large-scale inference. The system will combine SiPearl’s Arm-based CPU technology with Semidynamics’ RISC-V-based GPU/AI inference ASIC in an OCP-aligned rack design. The target is Europe’s emerging AI Factory and Giga Factory infrastructure, where sovereignty, power efficiency, and control of the compute stack are becoming procurement issues.

Semidynamics and SiPearl CEOs seal the deal with a handshake
Philippe Notton, SiPearl’s CEO and founder (left), and Roger Espasa, Semidynamics’ CEO. (Source: Semidynamics)

Semidynamics (Barcelona, Spain) and SiPearl (Maisons-Laffitte, France) have formed a strategic partnership to develop a European rack-scale AI compute platform for cloud and enterprise deployments.

SiPearl will provide Arm-based CPU technology for host compute, orchestration, and data plane management. Semidynamics will provide its RISC-V-based GPU/AI inference ASIC as the main acceleration engine. The two companies will also coordinate reference architecture work, marketing, sales activity, and tender responses.

The proposed system is aimed at European AI infrastructure programs, including AI Factory and Giga Factory initiatives. It will be based on Open Compute Project standards. OCP gives the design a more familiar path into real data centers, where operators are unlikely to welcome anything that looks like a science project in the rack. The companies say the platform is intended to deliver rack-scale density comparable to leading global AI systems.

Target workloads include LLM deployment, retrieval-augmented generation pipelines, enterprise automation, industrial analytics, and public sector applications where data sovereignty is a hard requirement. That is where AI becomes an operating cost: training is the headline expense, but it is inference that defines the economics of deployment.

In the first version, SiPearl supplies the CPU technology and platform support, while Semidynamics supplies the accelerator and leads enclosure and rack integration. A later phase will move toward chiplet-level integration, a story the companies are holding back for now.

SiPearl brings the CPU side of the European sovereignty story. The company emerged from the European Processor Initiative and is developing energy-efficient processors for sovereign HPC, AI, and data center use. Its first-generation Rhea1 CPU uses 80 Arm Neoverse V1 cores and is expected to support European exascale systems.

Semidynamics brings the piece that does the heavy lifting. The company has built its story around memory-centric architecture, arguing that inference performance depends less on peak arithmetic and more on keeping data moving. Its Gazzillion memory subsystem is designed to tolerate long-latency memory access and reduce the penalty of waiting on data.

That is not a side issue. Large-scale inference is increasingly a memory problem. Longer context windows, larger KV caches, and production LLM serving all stress memory capacity, bandwidth, and system utilization. A faster accelerator does not help much if the system is stalled.
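The scale of the problem is easy to see with back-of-envelope arithmetic. The sketch below estimates KV cache size for a transformer serving workload; the model parameters (80 layers, 8 grouped-query KV heads, head dimension 128) are hypothetical figures in the range of current 70B-class open models, not specifications of any system mentioned here.

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, dtype_bytes: int = 2) -> int:
    """Estimate KV cache footprint for one serving batch.

    Factor of 2 covers the separate key and value tensors stored
    per layer, per KV head, per token. dtype_bytes=2 assumes fp16/bf16.
    """
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical 70B-class config: 80 layers, 8 KV heads (GQA),
# head_dim 128, 32k-token context, batch of 8 concurrent requests.
total = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                       seq_len=32_768, batch=8)
print(f"{total / 2**30:.0f} GiB")  # → 80 GiB
```

Eighty gibibytes of cache for eight concurrent 32k-token requests, before weights and activations, is why accelerator utilization in production serving is gated by memory capacity and bandwidth rather than peak FLOPS.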

Philippe Notton, SiPearl’s CEO and founder, framed the collaboration as evidence that work from the European Processor Initiative and the EU sovereign ecosystem is beginning to produce practical platforms. Roger Espasa, Semidynamics’ CEO, said the combination gives Europe “a credible path” toward sovereign rack-scale AI infrastructure built around European-controlled compute.

What do we think?

For once, the sovereignty pitch has some machinery attached.

SiPearl handles the CPU and host side. Semidynamics supplies the accelerator. OCP gives the rack design a route into data centers without asking customers to accept an exotic deployment model. That does not make the system easy to build, but it does make the pitch more coherent than the usual European autonomy language.

Inference is where AI infrastructure moves from ambition to electricity bills. Customers need systems that can serve models efficiently, feed memory properly, and avoid turning racks into expensive heaters. Semidynamics’ memory-centric message fits that shift. The company has been talking about the memory wall for some time; inference gives that argument a large and impatient market.

Semidynamics is also on a news roll. Its recent 3nm tape-out with TSMC marked a shift toward a more explicit silicon strategy, not just a RISC-V IP story. The strategic investment from SK Hynix adds weight, because memory companies understand where AI systems are hurting. SK Hynix is looking at the same bottleneck everyone else is: moving and feeding data efficiently.

That makes the SiPearl partnership interesting. Semidynamics is starting to look less like a processor IP company with accelerator ambitions and more like a systems player trying to assemble the pieces around inference. Tape-out, memory backing, and now a European rack-scale platform partner.

SiPearl benefits too. Its CPU roadmap needs visible AI infrastructure relevance beyond European HPC programs. Pairing with Semidynamics gives SiPearl a clearer role in the AI Factory conversation and a way to present its CPU as part of a larger compute platform rather than a sovereign checkbox.

The catch, as ever, is proving the stack. Nvidia’s advantage is not just silicon; it is the installed base, libraries, systems, networking, and developer gravity. Europe cannot policy its way around that. Still, this is more than Brussels mood music, provided the silicon and software arrive. Semidynamics and SiPearl are putting European CPU and accelerator technology into one rack-scale proposal, aimed at a market Europe is actively funding.