TechWatch

LG AI Research partners with FuriosaAI 

Promises 2.25× better LLM inference performance per watt than GPU-based systems.

Jon Peddie
(Image: FuriosaAI)

FuriosaAI reports that its RNGD accelerator (pronounced "Renegade"), built on the company's Tensor Contraction Processor architecture, passed LG AI Research's performance tests for LLM inference using EXAONE models. Following validation, the companies partnered to deploy RNGD Servers, each integrating eight accelerators in a 4U chassis, to LG's electronics, finance, telecom, and biotech units. LG confirmed that RNGD met its performance targets, delivered 2.25× better energy efficiency than GPU-based systems, and achieved 60 tokens/second on EXAONE 3.5 (32B parameters, 4K context). Both firms continue to optimize hardware, software, and infrastructure to scale advanced AI deployments, including ChatEXAONE services.

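As a rough illustration of how a tokens-per-second figure like the one above is measured, the sketch below times greedy decoding for an EXAONE 3.5 checkpoint with Hugging Face Transformers on whatever hardware is available. It is a minimal sketch, not LG's or FuriosaAI's benchmark setup; the model ID, prompt, and generation settings are assumptions for illustration only.

import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed public checkpoint name; not the exact configuration LG benchmarked.
MODEL_ID = "LGAI-EXAONE/EXAONE-3.5-32B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Summarize the benefits of energy-efficient AI inference hardware."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Time the generation step, then divide new tokens by elapsed wall-clock time.
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens} new tokens in {elapsed:.2f} s -> {new_tokens / elapsed:.1f} tokens/s")

Vendors typically pair a throughput measurement like this with measured power draw to report efficiency in tokens per second per watt, which is the basis of comparisons such as the 2.25× figure cited above.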