Ambiq’s Atomiq is an ultra-low-power system-on-chip with a built-in NPU that the company claims is the first to deliver always-on AI acceleration on subthreshold-voltage logic. Designed to run real-time voice, vision, and sensor models on-device, Atomiq is built on Ambiq’s SPOT platform and integrates Arm’s Ethos-U85 NPU. Ambiq plans expansion from sensor control to full AI inference at the edge, targeting wearables, AR glasses, and industrial monitoring.

At CES 2026, Ambiq unveiled Atomiq, a system-on-chip for edge AI whose pitch is energy efficiency: less power for a given amount of compute. Built on Ambiq’s Subthreshold Power Optimized Technology (SPOT), the chip integrates an Arm Ethos-U85 NPU and promises over 200 GOPS of performance within a strict power budget. That combination could unlock new classes of battery-powered AI devices.
Atomiq is Ambiq’s first chip to combine its sub- and near-threshold logic designs with dedicated neural acceleration. This architectural approach is meant to sustain always-on AI tasks such as speech recognition, computer vision, and sensor fusion. The NPU supports on-the-fly weight decompression and sparsity, which lets larger models fit within the chip’s constrained memory and power envelope.
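To see why decompression and sparsity matter at this scale, here is a back-of-the-envelope sketch (not Ambiq or Arm tooling) of how pruning zero weights and compressing the rest stretch a fixed on-chip memory budget. The parameter counts, sparsity level, and compression ratio are hypothetical placeholders, not published Atomiq figures.

```python
# Illustrative only: effective weight-storage footprint after sparsity
# (dropping zero weights) and lossless compression of the remainder.

def effective_footprint_mb(params_millions: float,
                           bits_per_weight: int = 8,
                           sparsity: float = 0.0,
                           compression_ratio: float = 1.0) -> float:
    """Approximate on-chip storage (MB) needed for a model's weights."""
    dense_bytes = params_millions * 1e6 * bits_per_weight / 8
    nonzero_bytes = dense_bytes * (1.0 - sparsity)
    return nonzero_bytes / compression_ratio / 1e6  # bytes -> MB

# A dense 8-bit, 10M-parameter model vs. a 50%-sparse model with 2x compression.
print(effective_footprint_mb(10))                                       # ~10.0 MB
print(effective_footprint_mb(10, sparsity=0.5, compression_ratio=2.0))  # ~2.5 MB
```

The same memory that holds one dense model can, in this toy accounting, hold a model roughly four times larger once sparsity and compression are exploited, which is the kind of headroom Ambiq is pointing to.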
How does the chip stay within that budget? A wide-range dynamic voltage and frequency scaling (DVFS) mechanism lets it dial performance up or down as workloads demand, while the Helia software stack, ADKs, and neuralSPOT SDK provide turnkey support for real-world deployment.
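A rough first-order model (not Ambiq’s actual power model) shows why wide-range DVFS pays off: dynamic power scales roughly with C·V²·f, so lowering the voltage along with the clock cuts energy per inference, not just instantaneous power. All workload sizes, frequencies, and voltages below are hypothetical.

```python
# First-order DVFS illustration: energy per cycle ~ C_eff * V^2,
# latency = cycles / f. Slower and lower-voltage costs time but saves energy.

def inference_cost(ops: float, ops_per_cycle: float,
                   freq_mhz: float, v: float,
                   c_eff_nf: float = 1.0):
    """Return (latency_ms, energy_uJ) for a workload under this toy model."""
    cycles = ops / ops_per_cycle
    latency_ms = cycles / (freq_mhz * 1e6) * 1e3
    energy_uj = c_eff_nf * 1e-9 * v * v * cycles * 1e6
    return latency_ms, energy_uj

# Same hypothetical 2-MOP workload at a fast/high-voltage operating point
# vs. a slow/low-voltage one.
print(inference_cost(2e6, ops_per_cycle=128, freq_mhz=250, v=0.9))  # ~0.06 ms, ~12.7 uJ
print(inference_cost(2e6, ops_per_cycle=128, freq_mhz=50, v=0.5))   # ~0.31 ms, ~3.9 uJ
```

For always-on workloads that only need to finish before the next sensor frame arrives, the slower, lower-voltage operating point is the obvious win, and that is precisely the regime Atomiq targets.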
Ambiq’s CTO and founder, Scott Hanson, sees Atomiq as a shift from the Apollo generation’s sensor-centric MCUs toward a platform for true ambient intelligence. “We enable significantly larger AI models at the edge,” he says, “with industry-leading energy efficiency.”
What do we think?
Ambiq is playing to its strengths. The company has long focused on ultra-low-power SoCs, powering over 280 million devices. With Atomiq, it’s not trying to compete on peak TOPS, but on usable, persistent AI in tight power envelopes, something that’s been elusive as edge AI models grow.
Atomiq’s reliance on the Ethos-U85 NPU and its support for compressed models and sparsity show that Ambiq understands the trade-offs: You won’t run a Llama model on this chip, but you can keep a smart camera or a voice assistant running continuously on a coin cell.
The inclusion of dynamic power scaling and the growing Helia software ecosystem give Atomiq a solid foundation for adoption. And with CES demos from partners Bravechip and Ronds showing real-world use in smart rings and industrial sensors, this isn’t vaporware.
It’s notable that Ambiq is doubling down on its SPOT platform even as other players like Qualcomm push high-performance NPUs into wearables, and start-ups like Axelera and Innatera offer more exotic architectures (DIMC, spiking neurons). Ambiq’s approach is conservative but proven. And the addition of on-device AI acceleration via a mainstream NPU brings it into competition with the likes of Syntiant, Edge Impulse, and Arm Cortex-M AI extensions.
The roadmap’s mention of a 12 nm SPOT update in March will be worth watching. If Ambiq can scale this architecture while maintaining efficiency, it could own the sweet spot: AI just good enough for always-on, edge-native intelligence.