News

Arm and Nvidia spot a squirrel

Peter McGuinness

Breathless reporting makes Nvidia's and Arm's collaboration on Xavier sound like earth-shattering news. 

Nvidia and Arm recently announced that they would be working together to make Nvidia’s Deep Learning Accelerator (NVDLA) accessible to programmers using Arm’s Trillium platform.

Project Trillium was announced by Arm in February of this year as a way for developers to get direct access from their applications to Arm’s machine learning IP offerings. The announcement made it clear that these include the inference acceleration and other compute libraries available for the CPU and the GPU, as well as Arm’s existing object detection engine and its not-yet-released machine learning processor. At the same time, the Trillium announcement included third-party software and hardware IP in the mix, as shown in the diagram Arm supplied at the time.

[Diagram: Arm’s Project Trillium platform overview]

Nvidia’s DLA is included on its Xavier chip as part of Nvidia’s automotive-focused “Drive” line of products. Nvidia also made a version of the DLA available as an open source project (you can find it on GitHub), giving anyone who is interested the ability to take the Verilog description and make their own product out of it. At the time of the announcement at last year’s GTC conference, CEO Jen-Hsun Huang claimed that the move would make it easy for hundreds of chip companies to integrate accelerated deep learning into their products, and that Nvidia would provide everything they needed to develop applications using Nvidia’s development kits, including access to TensorRT, Nvidia’s optimization toolkit and inference runtime engine.

At last week’s announcement, Arm seemed to support that vision, saying: “The collaboration will make it simple for IoT chip companies to integrate AI into their designs and help put intelligent, affordable products into the hands of billions of consumers worldwide.”

As of now, the only known implementation of NVDLA is on Nvidia’s Xavier, but the intention of both Arm and Nvidia, based on their announcements so far, seems to be to promote the architecture as a de facto standard for network inferencing hardware accelerators.

What do we think?

Last week’s announcement caused some confusion, with some commentators wrongly assuming that Arm’s hardware accelerator, announced as part of Project Trillium, would in fact be the NVDLA. We quickly confirmed with Arm that this is not the case: Arm restated its intention to introduce a fully home-grown IP core for deep learning inferencing sometime later this year. The collaboration itself makes perfect sense for both Arm and Nvidia, but it’s important not to be swayed by the breathless claims that this will put ML acceleration in the hands of billions of users. (Nvidia’s marketing department must be an exhausting place to work, where everything is “unprecedented” and “supercharged.” Arm seems to have caught the bug in this case and, as a result, the joint PR reads like a fifteen-year-old’s tweet. I kept looking for an announcement of the new NVOMG accelerator.)

In reality, the only place NVDLA exists is on Xavier, and there will be few or no applications for that outside automotive. It’s possible that some chip vendors will pick up the open source version, but that is unlikely: as anyone who has tried it will tell you, there is a lot more to producing a successful SoC than grabbing some Verilog from the internet.

So, what exactly is the sense in two companies with very different objectives collaborating on this announcement? With no significant, practical outcome possible from it, the only level at which this works is as a statement of intent: Nvidia is late to realize that owning a path to edge inferencing is important to its strategy of being the machine learning vendor of choice, and Arm is late to act on the need for an edge accelerator. In both cases, a splashy announcement of a collaboration that requires nothing of either company is just the thing to fill the awkward silence while they both get their act together.

That leaves the question of what the real outcome will be: we already know that Arm will deliver its own IP core, but what about Nvidia? Is this a hint that they will be getting into the IoT market? That would be a squirrel worth seeing!

You can read more about this in the current issue of the VPU report.