Nvidia Rubin CPX and disaggregated long-context inference
Massive context and the inference dichotomy.
Arc Pro B60 outpaces Nvidia in Llama 8B per-dollar performance.
…And a new Mali at last.
Data center GPUs were up an average of 5% from last quarter.
Workstation benchmarking done right.
Arm introduces NSS to compete with Nvidia’s DLSS.
Accelerating the physical AI transition.
YARC—Yet Another RISC-V Competitor.
SDK 2.1.0 adds support for XeSS Frame Generation and Xe Low Latency.
Funding fuels rollout of power-efficient inference NPUs, as investor interest in GPU alternatives accelerates.
Workstation AIB outperforms the RTX 4060.
Promises 2.25× higher LLM inference performance compared to GPU-based systems.