News

UNISOC and Imagination strategic cooperation on AI

Imagination Technologies announced that the fabless semiconductor company UNISOC has licensed Imagination’s latest generation of neural-network accelerator (NNA)

Jon Peddie

Imagination Technologies announced that the fabless semiconductor company UNISOC has licensed Imagination’s latest generation of neural-network accelerator (NNA), the IMG Series3NX, for use in future SoCs targeting mid- to high-range mobile devices, TV, and other markets.

UNISOC previously integrated Imagination’s Series2NX AI core into its mobile platform. The combination of Imagination’s NNA technology and UNISOC’s chip and system design capabilities has already resulted in benchmark-leading performance.

The company says the IMG Series3NX is the fastest, most power-efficient embedded solution for hardware acceleration of neural networks on the market. Building on the success of its predecessor, the Series3NX provides a level of scalability that enables SoC manufacturers to optimize compute power and performance across a range of embedded markets such as automotive, mobile, smart surveillance, and IoT edge devices. Thanks to architectural enhancements, including lossless weight compression, the Series3NX architecture benefits from a 40% boost in performance in the same silicon area over the previous generation, giving SoC manufacturers a nearly 60% improvement in performance efficiency and a 35% reduction in bandwidth.
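Imagination does not describe its compression scheme, but the principle behind the bandwidth saving is straightforward: pruned and quantized networks contain many zero weights, and a lossless encoding of those zero runs means fewer bytes crossing the memory bus per inference. A toy run-length sketch of the idea (this is an illustration only, not Imagination’s actual scheme, and the sparsity level is an assumption):

```python
import random

def rle_zero_encode(weights: list[int]) -> list[tuple[int, int]]:
    """Toy lossless encoder: collapse runs of zero weights into (0, run_length)
    pairs. Nonzero weights pass through as (value, 1). Decoding expands each
    pair back to value * run_length, so no information is lost."""
    encoded, i = [], 0
    while i < len(weights):
        if weights[i] == 0:
            run = 0
            while i < len(weights) and weights[i] == 0:
                run += 1
                i += 1
            encoded.append((0, run))
        else:
            encoded.append((weights[i], 1))
            i += 1
    return encoded

# Quantized 8-bit weights with roughly two-thirds zeros -- an assumed
# sparsity level, typical of a pruned network.
random.seed(0)
weights = [random.choice([0, 0, random.randint(-128, 127)]) for _ in range(1000)]
encoded = rle_zero_encode(weights)
print(f"raw entries: {len(weights)}, encoded entries: {len(encoded)}")
```

The sparser the weights, the larger the saving; real hardware schemes are more sophisticated, but the bandwidth win comes from the same observation.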

Imagination says the PowerVR Series3NX represents an entirely new type of architecture, designed from the ground up to provide hardware acceleration for the next generation of artificial intelligence within embedded platforms. Its highly tuned tensor-processing and convolution engines, combined with an optimized core memory infrastructure, deliver outstanding peak performance. At over 4.1 tera operations per second (TOPS), the Series3NX offers the highest inference/mm2 available on the market, enabling the creation of compact and cost-effective inferencing solutions.

PowerVR Series3NX, says the company, is designed with scalability in mind and is licensable as IP cores that hit different performance points and feature sets to address multiple markets and applications.

As a fully flexible solution, the Series3NX supports neural-network bit depths from 16-bit down to 4-bit, reducing bandwidth and increasing performance without compromising inference accuracy. The processor can perform 2048 MACs/clock at 8-bit precision or 1024 MACs/clock at 16-bit precision.
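The headline 4.1 TOPS figure can be sanity-checked from the MAC rate: each multiply-accumulate counts as two operations (a multiply and an add), so 2048 MACs/clock at an assumed 1 GHz clock (the announcement quotes MAC rates but no frequency) works out to 4.096 TOPS. A minimal sketch of that arithmetic:

```python
def peak_tops(macs_per_clock: int, clock_hz: float) -> float:
    """Theoretical peak throughput in TOPS. Each multiply-accumulate (MAC)
    counts as two operations: one multiply and one add."""
    return macs_per_clock * 2 * clock_hz / 1e12

# Assumed 1 GHz clock -- not stated in the announcement.
CLOCK_HZ = 1e9

print(peak_tops(2048, CLOCK_HZ))  # 8-bit mode:  ~4.1 TOPS
print(peak_tops(1024, CLOCK_HZ))  # 16-bit mode: ~2.0 TOPS
```

Halving the bit depth doubles the MAC rate through the same datapath, which is why the 8-bit mode runs at exactly twice the 16-bit rate.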

The Series3NX, says Imagination, delivers class-leading neural-network acceleration with the industry’s highest inference/mW and the lowest power consumption.


What do we think?

AI is on the edge. It’s one thing to throw massive processors in the cloud at AI training, but when it comes time to use an edge device like a smartphone, notebook, or embedded system, you need speed and low power consumption to distinguish those cats from dogs and turtles. 

It’s also one thing to design a CNN tensor processor and another to actually implement it in silicon, fire it up, and have it recognize faces, wine-bottle labels, and license plates whizzing by at 80 km/h.

AI inferencing is a gigantic, ongoing market. Once you’ve trained your AI to distinguish an English Springer Spaniel from a Labrador Retriever, you’re done with the training hardware. But if you’re a dog fancier, you’re never done, so AI inferencing goes on forever.

Qualcomm and 5G