
On seeing more


Robert Dow

Peddie’s second law is: the more you can see, the more you can do. Looking good, feeling better.

And if you’ve looked at my blog, you will see what I’m currently experimenting with to test that theory. And a test it will be: when you have a video wall in front of you, your operational dynamics change. If all the displays are filled with individual applications, then you want the screens about 60 cm (two feet) or less away from your eyes. If you are using all six in extended-desktop mode, then you want them a meter (three feet) or more away. That’s something that is not discussed much when setting up multi-monitor systems. Put another way, if you could have a single screen that was sixty inches wide and twenty-four inches tall with a resolution of 5760 × 2160, where would you place it?
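As a back-of-the-envelope check (my own arithmetic, not from the article), here is how much of your visual field that hypothetical sixty-inch, 5760 × 2160 screen would fill at the two viewing distances mentioned:

```python
import math

def subtended_angle_deg(size_in, distance_in):
    """Visual angle (degrees) subtended by a screen dimension at a given distance."""
    return math.degrees(2 * math.atan((size_in / 2) / distance_in))

WIDTH_IN = 60.0   # the hypothetical single screen from the article
H_PIXELS = 5760

# "lean-forward" work distance (~2 ft) vs "lean-back" media distance (~3 ft)
for label, distance_in in [("lean-forward (~2 ft)", 24.0),
                           ("lean-back   (~3 ft)", 36.0)]:
    angle = subtended_angle_deg(WIDTH_IN, distance_in)
    ppd = H_PIXELS / angle  # pixels per degree of visual angle
    print(f"{label}: width subtends {angle:.0f} deg, ~{ppd:.0f} pixels/degree")
```

At the closer distance the screen wraps well past 90 degrees of your visual field, which is exactly why the placement question matters.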

Six Monitor Display - Jon Peddie Research

So the “lean-forward, lean-backward” metaphor has new meaning: for movies and games you’ll want to lean backward, but for work you’ll want to lean forward. This suggests a wall mount with a pivoting arm, or a bodacious stand with such an arm. And that suggests new market opportunities for peripheral sellers. EVGA solved the problem nicely with the swinging dual-monitor setup they call the Interview, but that may not scale easily to six monitors.

The more you can see, the more you can do … indeed. Suppose you could see infinite detail in your image? Have you ever played with fractals?

They are confounding in that they subdivide forever: you can drill in and in and in and never hit the end. But they are synthetic, procedural images and, although delightful to look at, don’t serve any practical use. But what if you could just see more of an important image, a computer-generated image used for, say, a movie or a game?
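To make that “drill in forever” property concrete, here is a minimal escape-time sketch of the Mandelbrot set (my own illustrative example; the zoom-target coordinate is just a commonly used point near the set’s boundary, not anything from the article). However small a window you sample, the escape counts keep varying, so there is always more structure below:

```python
def mandelbrot_escape(c, max_iter=100):
    """Escape-time test: iterations before z = z^2 + c diverges (|z| > 2)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter  # never escaped: treated as inside the set

# "Drilling in": sample ever-smaller neighborhoods near the boundary.
center = complex(-0.743643887, 0.131825904)  # a popular deep-zoom target
for zoom in (1e-2, 1e-4, 1e-6):
    samples = [mandelbrot_escape(center + complex(dx * zoom, 0))
               for dx in (-1, 0, 1)]
    print(f"window ±{zoom:g}: escape counts {samples}")
```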

A major Holy Grail in computer graphics is the creation of a picture that is absolutely indistinguishable from real life. And some folks will tell you we’ve done that with the advanced ray-tracing techniques that have been developed. However, ray-tracing only succeeds (in a reasonable amount of time) at generating nice solid objects like tables, cars, and buildings. Ray-tracing is like watching paint dry if you try to render an organic scene of, say, a forest or a herd of sheep on a dusty road.

Advanced lighting techniques like those developed by Luxology and StudioGPU can create very realistic images of organic scenes and solid objects, but they are not mathematically accurate. One of our friends at a CG company says, “Who cares? Does it look realistic? Is it pleasing to the eye? Are the Dutch masters mathematically accurate?” Totally valid points.

And speaking of points, that conveniently leads me to voxels and point clouds. Voxels can theoretically give us the ultimate in a realistic and mathematically correct image of any scene, organic or solid, and in our lifetime. And for the ultimate generation of a mathematically correct image, you can scan a scene, a building, or a car with a laser, produce a point cloud, and then render it. Now you are challenged by the number of points or voxels you can generate or scan. In the latest issue of Tech Watch you can read about Ultimate Detail, an upstart little company in Australia, which thinks it has come up with a solution for managing voxels in a timely manner to give you, well, ultimate detail.

The problem with voxels, points, ray-tracing, and CG in general is subdivision. A pixel, especially a screen pixel, is woefully inadequate for representing the real world. The real world is much finer grained than 100 DPI, or even 100,000 DPI. So if you construct an image through any of the previously mentioned methods, you have to be able to subdivide the final pixels, the ones that will get thrown at your cones and rods. Or …

You can add more pixels, and that takes us back to the giant screen sitting in our lab right now. It’s still limited to ~100 DPI; it just has more “I’s” (inches). But magic happens. It’s called subtended arc, and it’s why our mobile screens look as good as they do when they are only 160 DPI. So even though the DPI is limited, more pixels are filling in the subtended arc of our viewing position, putting more pixels right in front of our eyes, and our clever brains translate this into a “better” picture with a pseudo-higher DPI. So we are not only seeing more, we think we’re seeing better too.
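The subtended-arc effect is easy to quantify. A rough sketch with my own assumed numbers (a 160 DPI phone held about a foot away versus a 100 DPI video wall at three feet), using the small-angle approximation that one degree of arc spans roughly distance × tan(1°) on the screen:

```python
import math

def pixels_per_degree(dpi, viewing_distance_in):
    """Approximate pixels packed into one degree of visual angle
    for a screen of the given density at the given distance."""
    return dpi * viewing_distance_in * math.tan(math.radians(1))

# Assumed scenario: phone close up vs a lower-density wall farther away.
print(f"phone, 160 DPI @ 12 in:      {pixels_per_degree(160, 12):.0f} px/deg")
print(f"video wall, 100 DPI @ 36 in: {pixels_per_degree(100, 36):.0f} px/deg")
```

Despite its lower density, the wall delivers nearly twice the pixels per degree of visual angle, which is the “pseudo higher DPI” the eye reports.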

And as my CG friends said, if it looks good who cares? Remember darling, it’s not how you feel that counts, it’s how you look.