
David Kirk in Tokyo.
I remember back in 2001, when Nvidia surprised us with a graphics controller built around a programmable processing pipeline, a chip they called a GPU, a graphics processing unit, and nothing in PC gaming was ever quite the same again.
It happened at Macworld Tokyo, of all places. Steve Jobs took the stage and introduced David Kirk, Nvidia’s chief scientist and vice president of architecture, and what followed was one of those rare moments where you could actually feel an industry pivoting in real time. Kirk walked the audience through something called the nFinite FX engine, and the words “programmable shaders” entered our vocabulary for the first time. Before that day, graphics chips worked through fixed-function pipelines—rigid, predetermined, inflexible. What Kirk was describing was something fundamentally different: a chip that could be told how to render, not just what to render.
Then he showed us Pixar’s Luxo Jr. demo. That little desk lamp and its tiny companion, rendered with soft shadows and warm, natural illumination, right there on a PC. The audience went quiet in the way audiences do when they realize they’re watching something they’ll be describing to people years later.
And then John Carmack walked out.
The id Software co-founder gave us our first look at Doom 3, running a unified, real-time, per-pixel lighting engine—the first of its kind—powered entirely by GeForce 3 hardware. The darkness in those corridors felt different. Alive, somehow. We’d never seen anything like it on a consumer machine.
The add-in board itself was remarkable by any measure. The NV20 processor was built on a 150 nm process, packed 57 million transistors, and drove 64 MB of DDR memory across a 128-bit interface. It supported Vertex Shader 1.1 and Pixel Shader 1.1—specifications that would define the next generation of game development and give developers tools they’d been dreaming about for years.
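For readers who never saw them, those early shader interfaces were startlingly low-level: a DirectX 8 vertex shader was a short assembly program run once per vertex, and a pixel shader ran once per fragment. As a rough illustration (a generic sketch, not code from any particular game), a minimal vs.1.1/ps.1.1 pair looked something like this:

```
; vs.1.1 — transform each vertex by a matrix stored in constants c0–c3
vs.1.1
dp4 oPos.x, v0, c0   ; clip-space x = position dotted with matrix row 0
dp4 oPos.y, v0, c1
dp4 oPos.z, v0, c2
dp4 oPos.w, v0, c3
mov oD0, v5          ; pass the per-vertex diffuse color through

; ps.1.1 — modulate the texture sample by the interpolated color
ps.1.1
tex t0               ; sample texture stage 0
mul r0, t0, v0       ; final color = texel × diffuse
```

Everything richer was built from handfuls of instructions like these; Pixel Shader 1.1 allowed only four texture and eight arithmetic instructions per pass, which makes the era's water and reflection effects all the more impressive.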
And they used those tools. Bethesda’s The Elder Scrolls III: Morrowind shipped with real-time water effects that made the marshes of Vvardenfell feel genuinely alive. Max Payne gave us bullet time with high-fidelity textures and real-time reflections that made every slow-motion dive through a doorway feel cinematic. AquaNox took us underwater with complex lighting that no fixed-function pipeline could have conjured.
Twenty-five years later, every shader running on every GPU in the world—from a college student’s gaming rig to the data centers training AI models—traces a direct line back to that stage in Tokyo.
I was there for it, in the way that enthusiasts are “there” for things—reading every report, watching every demo clip over a dial-up connection. It felt important then. It feels even more important now.
Looking back now, on its 25th anniversary, I see the GeForce 3 as more than a product launch. It marked the moment when the GPU became programmable in a way that reshaped the trajectory of graphics. The fixed-function era began to fade, and modern 3D pipelines took root. Today’s rendering techniques—physically based shading, advanced global illumination, real-time ray tracing—trace their lineage back to that inflection point in 2001, when the term “GPU” started to mean something new.
I have a list of books written by friends and me on the history of the GPU, from 3dfx to today. If you’re interested, drop me a note at [email protected] and I’ll share it with you.