Synaptics develops and sells touchscreens, fingerprint sensors, and other technologies for human-computer interaction. Its products are used in smartphones, laptops, smart home devices, and more—and we covered the company at CES and Embedded World in 2024.
Ahead of CES 2025, Synaptics announced a collaboration with Google to accelerate the development of edge AI solutions for the IoT. The partnership integrates Google’s ML core with Synaptics’ Astra AI-Native hardware and open-source software to enable multimodal processing for context-aware devices. Targeting applications in wearables, appliances, entertainment, automotive, and industrial systems, the collaboration leverages Synaptics’ expertise in low-power compute silicon and AI hardware alongside Google’s MLIR-compliant machine learning core. The combined platform aims to deliver efficient, scalable AI solutions, redefining user interaction and advancing innovation in edge IoT devices. We met at CES with Synaptics’ PR manager, Patrick Mannion, to get the latest on the company.

I’m very interested in how the project with Google is divided. Is most of the ownership with Synaptics? Has Google done anything to make it run particularly well on your solutions? What is the nature of the relationship?
Patrick Mannion: What we’re doing is taking their open ML core and adding our own hardware engine as an enhancement. We’re making that open core run better for the set of operators that we support in the engine. There’s a lot of collaboration, especially around the compiler, which is open and based on frameworks like MLIR. Both teams work together to make all of it run seamlessly. Their portion of it is open, while ours is proprietary. Essentially, we take their core and optimize it to perform better on our SoCs.
And in terms of ML models, is this focused predominantly on vision and sensor applications, or do you see it being broader than that?
Mannion: Definitely vision to start, but I expect it to become multimodal over time: vision, audio, and even time-series data. There’s no real limit to the types of models it can run. Ultimately, the open ML core should, in theory, run anything, depending on the size of the model and the capabilities of the system. You’ll still be able to run everything, but if the model is supported by our engine, it’ll be much more efficient.
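(Editor’s note: the fallback behavior Mannion describes, where operators supported by Synaptics’ engine run there and everything else still runs on the open ML core, can be sketched roughly as follows. This is a hypothetical Python illustration; the operator set, names, and partitioning function are ours, not Synaptics’ or Google’s actual API.)

# Hypothetical sketch of operator fallback: ops the proprietary engine
# supports run there, everything else falls back to the open ML core.
ENGINE_SUPPORTED_OPS = {"conv2d", "depthwise_conv2d", "matmul", "relu", "add"}

def partition_graph(ops):
    """Split a model's operator list between the engine and the open core."""
    placement = {"engine": [], "open_core": []}
    for op in ops:
        target = "engine" if op in ENGINE_SUPPORTED_OPS else "open_core"
        placement[target].append(op)
    return placement

if __name__ == "__main__":
    # A toy vision model: most ops land on the engine, the rest still run.
    model_ops = ["conv2d", "relu", "conv2d", "softmax", "argmax"]
    print(partition_graph(model_ops))
    # {'engine': ['conv2d', 'relu', 'conv2d'], 'open_core': ['softmax', 'argmax']}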
From the customer’s point of view, are you providing a set of pre-tested models, or even building samples yourself?
Mannion: All of the above. With an open core and open-source tooling, we expect an ecosystem to evolve around it. Both Google and we are interested in supporting that. For Google, this is a way to have their technology reach a broader range of products beyond the standard Google offerings.

Can you describe the core? What is it based on?
Mannion: It’s RISC-V-based. It’s their subsystem, and we collaborate by adding our enhancements. It’s their architecture, but we differentiate by optimizing its performance on our SoC. Think of it like any open standard where companies differentiate by how they implement it.
What else is going on for Synaptics?
Mannion: We’ve made a lot of progress with ecosystem partners. For example, one demo we have is a higher-end processor running a small language model. If this processor were in an appliance, you wouldn’t need a manual. You could just talk to it, ask about error codes, and it would respond directly without needing cloud connectivity. (Editor’s note: We were then shown a demo of this working.)
It’s tailored to specific products: you can extract the Q&A from a manual, turn it into a model, and run it directly on the device. It’s completely on-device, so it doesn’t hallucinate or require constant Internet access. If it can’t answer, it can default to suggesting a Google search.
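(Editor’s note: the appliance Q&A flow described above, answering locally from content extracted from the manual and falling back to suggesting a web search when confidence is low, can be sketched as follows. This is a hypothetical Python illustration using simple string matching rather than the small language model in Synaptics’ demo; all names and data are ours.)

# Hypothetical on-device manual Q&A with a search-suggestion fallback.
from difflib import SequenceMatcher

# Q&A pairs that would be extracted from the product manual (illustrative).
MANUAL_QA = {
    "what does error code e3 mean": "E3 indicates a blocked drain filter. Clean the filter and restart.",
    "how do i start a quick wash cycle": "Press Power, turn the dial to Quick Wash, then press Start.",
}

def answer(question, threshold=0.6):
    """Return the best-matching manual answer, or suggest a web search."""
    q = question.lower()
    best_q = max(MANUAL_QA, key=lambda k: SequenceMatcher(None, q, k).ratio())
    if SequenceMatcher(None, q, best_q).ratio() >= threshold:
        return MANUAL_QA[best_q]
    return "I'm not sure. You could try a Google search for that question."

if __name__ == "__main__":
    print(answer("What does error code E3 mean?"))
    print(answer("Can I wash running shoes?"))  # falls back to a search suggestion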
That sounds like a practical edge use case. Are there broader AI implications for the edge?
Mannion: Absolutely. It helps reduce the digital divide. People don’t have to fiddle with phones or cloud connections; it just works. Beyond privacy and security benefits, edge computing reduces cloud costs and improves efficiency. We also see hybrid applications emerging, where enterprises want to move some workloads to the edge to reduce reliance on the cloud.
And how do you see the market for edge AI developing?
Mannion: The market is evolving. Besides established applications and AI attach rates, people are future-proofing their products. They might not have a use case immediately but want the capability available. There are also new applications, like body-worn electronics, that weren’t previously captured in reports.
Is the focus more on consumer or enterprise?
Mannion: It’s a mix. Consumers will likely drive adoption, especially in wearables and home automation. But enterprise use cases, like defense and law enforcement, are also emerging. AI models that interact with the real world, rather than just predicting text, will drive intelligence forward.
How are you handling security for connected devices?
Mannion: We’re ensuring PSA Certified Level 3 compliance and focusing on secure devices. While these devices will be connected, the models will run locally to reduce latency, energy costs, and dependence on the cloud.
Your philosophy seems to be that processing should happen at the edge. Do you see resistance to that?
Mannion: Not really. IoT devices already have processors. It’s just about making them more intelligent. Devices will naturally evolve to become smarter over time, whether for better functionality, user experience, or robustness between the cloud and edge.
How is the business performing?
Mannion: We launched Astra in Nuremberg [Germany] in April, and in the past eight months, we’ve seen solid traction across a range of applications. Consumer-focused areas like home automation, wearables, and appliances are doing well. Industrial applications are promising too, but we need to broaden our scope and scale our efforts. The technology is there; it’s about reaching the market and uncovering opportunities. The company has about 1,500 employees overall, with around 200 focused on processors. It’s a nimble group, but we aim to scale efficiently.
This is your first foray into RISC-V, right? Your processors have previously been Arm-based?
Mannion: For this AI platform, the core is RISC-V-based, but our processors are still Arm-based. It’s about choosing the right core for the right application.