What’s in a name?
The Holy Grail in computer graphics is the suspension of disbelief—to tell such a convincing story with pixels that the viewer not only totally believes it, but thinks he or she is in it, a participant, voluntary or not.
We’ve had such experiences in the cinema for a long time. A story is told, and we become so engrossed in it that at a shocking moment, like the alien popping out of an unexpected place or a FedEx airplane falling apart, we duck, scream, or worse. The images stay with us for decades, like the shower scene in Psycho or the little boy grabbed by the evil clown in Poltergeist. But we have been just as moved by romantic scenes and heroic scenes. Why? Because we accepted the story and the imagery; we bought into it.
We crossed the line on computer-generated special effects a decade ago. Such effects used to be called CGI (computer-generated imagery) and are now known simply as CG. We crossed that line when we could no longer distinguish the real from the CG, when actors were augmented by CG. Once past that point of suspension of disbelief, we quickly became accustomed to seeing anything happen, and even if it was a cartoon like a house floating up with balloons, we became emotionally involved and worried about the characters.
The same is true in games; the main difference is how little of our peripheral vision gets used, because most of us play games on a small (less than 30-inch), low-res (HD or less) screen. But that is changing. Console players can use 60-inch HD screens, which are rapidly being replaced by 4K screens, and PC gamers are getting 30-inch or larger 4K screens.
The magic in the movies was all about cinematography and story. The projection and sound systems have evolved to the point where we expect a theater to have a 4K Barco, Christie, or Sony 33K-lumen projector, and at least a seven-channel Dolby, if not an Atmos, sound system.
PC and console games can match the screen resolution, but not yet the sound; and sound contributes so much to the feel of the movie or game, to its believability.
When we crossed the line in the cinema, did we enter a class of artificial or virtual reality? Should those terms be limited to computer games, goggles, glasses, and CAVEs? If we accept it, dismiss our disbelief, look into the movie and not at it, haven’t we crossed into an alternate reality? Is it artificial or virtual? Do we need an operator, a prefix, to distinguish what we are experiencing, such as cinematic AR, or game VR, or portable AR? Or is that just too pedantic and boring? Some of us at JPR have resorted to the term Digital Reality.
Remember the old expression (now a cliché), “You’ll know it [art] when you see it”? I paraphrased that for CG and said, “I’ll see it when I believe it.” I totally believe it in the movies now. Here’s a nice test/exercise: this year’s Best of Show winner at SIGGRAPH is Bot & Dolly’s projection demonstration (part of Intel’s Creators Project). If you can, zoom to full screen, turn off the room lights, and let yourself believe it.
Once we have experienced the surreal special effects of a modern movie, we are tainted and spoiled; we can’t go back and get the same thrill and enjoyment we once did from low-res movies with fuzzy stereo sound. I’m not just referring to the ridiculous Transformers, Superman, or superhero movies that bash the hell out of New York (poor New York, it seems to get trashed in every movie). Think about the scenes from 300: do you really think there are that many supermen? But while you were watching it, you totally accepted those amazing bodies. CG is in just about every movie now. Now we have the phenomenon of movies that try to convince us they were made without CG even when they weren’t, like The Grand Budapest Hotel. On the opposite end of the scale, a movie that revels in its CG is Guardians of the Galaxy, my favorite movie so far this year. There is just no turning back for me.