9 outwits mechanical beast. (Source: Shane Acker)
Siggraph was, as always, a blast. Although it was
smaller in booth size, the crowds felt larger, and they were certainly
excited. This was the 32nd event and my 27th. About 29,000 people came
this year (27,800 last year) and there were 230 exhibitors (229 last year).
SGI gets Jon’s award for biggest display. We were
glad to see SGI back at the show, and the company had plenty to show—by
plenty I mean plenty of data. SGI is now the data set king; you can’t
find a data set too big for one of their machines. To prove it,
they were examining details of a mummy right down to the parts decency
normally says you shouldn’t look at, and showing every nut, clip, and
rivet of the 777: 32 Gpolys, and that’s a heap of polys.
Mummified remains of the ancient Egyptian child.
The other extreme I found this year—wait, before
I tell you, I have to tell you about Jeff Brown, Nvidia’s WS boss. He
saw me staring at a display with a confused look and said, “Hi.
What’s the one question you hate people to ask you at a show?”
It took me a second, and then I remembered: I hate people to ask me,
“What’s the best/newest/whatever thing you saw?” I hate it
because I get so loaded down in meetings at Siggraph I barely have time
to see the floor, and as I’m collecting data I haven’t had a chance
to form an opinion, which is when someone usually asks me… But,
I had an answer for Jeff this time: facial expression and natural phenomena.
There were two prime examples of facial animation.
One was Avid’s Softimage Face Robot giga-poly core engine that employs
a rigid-body dynamic system with a physics engine to animate as much
of a face as you’d like (Figure 2). Softimage paid particular attention
to the mouth and its edges and the eyes. The results were spectacular,
but it relied on motion capture to get the initial control points.
Rockfalcon is not having a good day (left).
Which leads me to the other amazing thing I saw:
Image Metrics’s camera capture system that doesn’t need mocap or control
dots stuck on the actor’s face. It does it all with clever image processing.
Go to the company’s website and click on the faces (http://www.image-metrics.com/index.htm).
Image Metrics’s camera capture system: no mocap
The company was founded in 2000 by a team of PhD
computer vision scientists from the University of Manchester in the
U.K., and their technology has been used in medical applications and
in movies (Polar Express) and games (“Grand Theft Auto:
San Andreas” and “The Getaway: Black Monday”).
The high points at Siggraph for me are the emerging
technology section, the films, the art gallery, the exhibit floor, and
the meetings, in that order. Oh, and the serendipity of bumping into
old friends. Someone told me once (either Jim Blinn or Turner Whitted)
that it was the gathering of the clan. I often meet them at the Siggraph
Pioneers’ dinner (aka the Old Farts Club).
Chiba University in Japan had a cute color enhancement
setup that could change your complexion to make you look happy, angry,
sunburned, or drunk (see Figure 4, next page). It looks like it’s targeted
at games, and given the location of the school, we can probably expect
to see this in Sony’s EyeToy soon.
Art imitating life? There was an animated networked
model of a water resources allocation algorithm, inspired by the water
crisis in the western U.S., from Arizona State University (see Figure 5, above).
As the water pumped through the various stages, it
made cymbals ring, air tubes honk, and produced other clattering, banging, but
oddly rhythmic noises. The artist, David Birchfield, said it also served
as a metaphor for the distribution of capital and cultural resources
throughout our communities. You can’t find this kind of stuff just anywhere.
Nvidia had their own show, a dome with a hi-res
Christie digital projector (designed and engineered by Elumenati using
projection technology from Elumens) that sat on the floor shooting
straight up through a fish-eye lens, resulting in a damn fine planetarium
display. In it they were showing an interactive real-time and rendered
show about the visible universe from the Hayden Planetarium that ranged
from the smallest molecules to the largest galaxies in the universe.
It was being driven by an Opteron-based HP xw9300 workstation. I tried
taking pictures of it but they didn’t turn out.
I was a little disappointed at the Digital Theater
this year: nothing really great like Geri’s Game, Giacometti,
or Bingo the Clown, although I did like Shane Acker’s film 9.
And evidently I wasn’t alone because it won best
of show. The story features hapless rag dolls who are the prey of a
mechanical beast. The movie tracks two characters, “5” and
“9,” as they scavenge the ruins of their world and attempt
to survive the ravages of the beast. After witnessing the death of his
mentor “5,” the rag doll “9” must call on all his
courage and confront this vile creature alone. Only through cunning
and the use of his primitive technology can “9” hope to destroy
the monster and steal the talisman of trapped souls it carries as a
trophy. You can get a copy of a great poster or see the trailer at http://www.shaneacker.com/.
I’ve chosen this little fellow as my Yahoo IM avatar.
In Emerging Technologies there was virtual clay
that consisted of 16n balls that could be squeezed and would stay deformed
to create the clay model you constructed; then they were refilled to restore
their shape. I’m not exactly sure what practical purpose it served
or what it has to do with pixels, but it was a definite crowd pleaser.