News

Will AI completely take over computer graphics?

Quite probably.

Jon Peddie

More than 115 AI processor companies, backed by $13 billion in venture capital, are developing unique chips, but only a few will likely survive long term. AI is transforming industries like computer graphics, creating job shifts while automating repetitive tasks. Fast processors enable realistic AI-driven characters, freeing creators to focus on design. The entire computing stack—from chips to applications—is now influenced by AI, which accelerates development, enhances realism, and redefines creative work, blending technology with imagination in new and disruptive ways.

In JPR’s ongoing AI research, we have identified over 115 firms that are building or planning to develop unique AI processors. Those companies employ over 40,000 people and have secured venture capital funding (excluding publicly traded companies) exceeding $13 billion. That’s a huge influx of highly skilled individuals supported by a considerable amount of money, poised to make a significant impact.

AI jobs

So, first off: AI is generating thousands of jobs and creating entire new industries. 

Today and every day, we hear about new ways AI is being used in the computer graphics industry, including entertainment, game development, media production, defense, safety, and nearly every sector of content creation. In fact, AI is fundamentally transforming these industries. 

Coders, artists, and writers are nervous because they worry about AI taking their jobs. They are right—AI is already replacing some jobs. It will take roles that are robotic and don’t require creativity or imagination, and it should handle those mind-numbing, dehumanizing tasks. On the other hand, there’s an endless demand for imaginative and creative people who can develop tools and processors to support frontline creators. We can’t fool ourselves into thinking they are the same people. 

There is hardship ahead as people face layoffs. Some will retrain, while others will start new ventures. This is a normal yet still sad and painful part of disruption, whether it involves the introduction of electricity, Moore’s Law, or the Internet. Change often brings chaos and difficulties—you can’t just dismiss that and say, “Oh well.”

Perhaps some clever individuals could devise ways to utilize AI to facilitate the transition, including the development of AI retraining and the reallocation of resources; that would be truly disruptive. 

In the meantime, and for the foreseeable future, AI will continue to be utilized in CG across all areas, from airplane design to games, movies, medical diagnostics, and robotics. 

The application of AI to various tasks in the pipelines of industries that employ CG capabilities will produce better and faster results, continuing to surprise and amaze us. As futurist Arthur C. Clarke said, "Any sufficiently advanced technology is indistinguishable from magic." Deepfakes, exoplanet discoveries, digital humans, and tumor detection were inconceivable five or 10 years ago; they would have been considered magic if demonstrated at that time. 

Video game developers across Asia have rapidly integrated AI into their characters, aiming to create more natural interactions and foster deeper player engagement. Characters react in real time to unfolding events, guided by their individual personalities, resulting in more personalized and engaging experiences. Non-player characters (NPCs), whose actions and dialog were once scripted in advance, can now be powered by AI for more adaptive behavior and responses.
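The shift from pre-scripted to adaptive NPCs can be sketched in a few lines. Below is a minimal, hypothetical Python illustration, not any game engine's actual API: the names, the rule table, and the events are all invented for this example. A scripted NPC returns the same canned line regardless of context, while an adaptive NPC selects its reaction from the unfolding event and its individual personality (a real system would query an AI model here rather than a hand-written table).

```python
# Illustrative sketch only: contrasts a scripted NPC with a
# personality-driven one. All names and data are hypothetical.

# Pre-authored dialog: one fixed line per event, decided in advance.
SCRIPT = {"player_approaches": "Welcome, traveler."}

def scripted_npc(event: str) -> str:
    """Scripted behavior: the same response every time."""
    return SCRIPT.get(event, "...")

# Adaptive behavior: the response depends on both the event and the
# character's personality. In practice this table would be replaced
# by a call to an AI model conditioned on a personality profile.
REACTIONS = {
    ("player_approaches", "gruff"):    "What do you want?",
    ("player_approaches", "friendly"): "Good to see you again!",
    ("explosion_nearby", "gruff"):     "Take cover, fool!",
    ("explosion_nearby", "friendly"):  "Are you hurt? Follow me!",
}

def adaptive_npc(event: str, personality: str) -> str:
    """Adaptive behavior: reacts in real time to unfolding events,
    guided by the character's individual personality."""
    return REACTIONS.get((event, personality), "...")

if __name__ == "__main__":
    print(scripted_npc("player_approaches"))
    print(adaptive_npc("explosion_nearby", "friendly"))
```

The design point is the extra input: once personality (or any learned state) conditions the response, two characters in the same scene no longer behave identically, which is what produces the more personalized engagement described above.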

However, all these examples and applications share a common construct: a process stack that begins with the processor and its memory, proceeds to an API or driver, then an OS, a network, and finally an application supported by a toolset. Whether it's an XR headset, a smartwatch, a car, a missile, or a character in a movie, they all share a similar stack construct. And the irony is that AI touches every element in the stack. AI is used to design and test the processors; to create and test the driver or API and the OS; and, most certainly, the application and its tools, such as compilers and datasets.

Looking at just the processor part of the stack, the work being done by heterogeneous SoCs with CPUs, GPUs, NPUs, TPUs, and memory processor units (MPUs) is astounding in every aspect. They are built from near-atomic-scale semiconductor features that run blazingly fast at room temperature, and the many engines or co-processors in them operate asynchronously and simultaneously, yet manage to deliver a result in nanoseconds. An analogy would be getting on a bus or a subway at the north end of town and arriving at your destination at the southeast corner in less time than it took for the doors to close, and doing that ten thousand million times a second. It's the amazing speed of the processors and memory that makes AI work, examining all the data and test vectors and forming summaries. 

The tens of thousands of lines of code, in various formats and structures, that must be processed to make a character's face form a smile, with the correct lighting, physical movement, and a dozen other features and functions, require a blazingly fast processor so that the character's facial movements happen in a realistic time frame. AI is used to ensure the facial movements are believable; it does that by digesting millions of examples and summarizing them. And that's just for a smile. Extrapolate the workload to a crowd running through the street, jumping, singing, fighting, and smiling. And all of that is being done on some processors somewhere.

Here is the sparkling nugget of hope for us humans. So far, computers have not been particularly brilliant. They can paint a picture according to prompts fed to them by people working with some kind of vision in their heads, but I can guarantee that the work the computer spits out falls far short of the image in the creator's head. The good creator takes the best result, hones it, makes it better, and fits it in with all the other ideas coming in for a particular work. It's not all that different from what we do now; it just happens a little further down the road toward completion.

And what about the bad creator? Or, more accurately, the mediocre creator? They let the AI travel even further down the road to creation and intervene less. The result is an uninteresting mush of collected data that looks like every other piece of work generated by a bunch of computers. Its sole purpose is to fill a hole where creative work should go. 

AI won’t take all jobs, but creative content will require fewer creatives, and they’ll need more skills. Same as it ever was. The more interesting question is what kinds of new media we might expect in the future. 

Magic. CG has always been magic. It always will be, and now it has a new assistant that makes the magic even easier. Now, instead of spending hours writing scripts of code and debugging them, you can simply type in: character smiles and runs down the street – replication with randomness 100 times – return. The AI engines handle all interpretation, code generation, testing, and execution, as well as screen painting, frame generation, and, of course, lighting and coloring. Magic? Damn right it is. Job killer? For some, yes, but who taught the AI what running was, what a smile is, and how to light it all? That’s a new job, a different job. Maybe it’s your job. 
