
Looking into things

Posted: 06.03.11

As I was registering at the Dimension3 stereovision conference in Paris last week, I did a double take at a large (42-inch) display next to the counter. It was showing a black-and-white image of a woman, a table behind her, and a man in a chair behind the table. It was a beautiful image with at least 1,024 shades of grey, and it looked like a window into a room; the depth was incredible. It was an Alioscopy multi-view display, and it was captivating. I know the Alioscopy people and what they make, so I had to move back and forth to convince myself that it was indeed a lenticular display. Multi-view content on a glasses-free display is clearly the future of signage. And no less than LG agrees: they just placed an order with Alioscopy for 250 units for digital signs to show off LG's new Optimus 3D smartphone. LG's flagship smartphone will be in European stores this summer. One of the applications included with the phone will be the world's first augmented reality browser, so you can see why LG would want a super 3D display to show it off.

Aside from Alioscopy's sweet design win, the point is multi-view and signage. Multi-view, the use of eight to a couple of dozen cameras to capture and create the stereo image, is the best way to minimize the zoning effects in a lenticular glasses-free display. Pierre Allio, the founder of Alioscopy, pioneered glasses-free 3D displays and multiple-view camera systems as early as 1986, and the company really has the best you'll see. But you're going to see a lot of S3D glasses-free displays, and as good as Alioscopy is, they're not going to get all the market.

The real trick to being successful in multi-view S3D signage is the content creation. Yes, the technology is there, Alioscopy's stuff will be copied and maybe even improved on, and lots of multi-view images will be taken and displayed. But if they aren't done right, they're going to turn off viewers and discredit the technology. Unfortunately, the developers of the camera rigs and displays can't control that; they are at the mercy of ad agencies and content creators, and quality varies according to imagination, taste, and restricted budgets.

There’s a rule of thumb about S3D images, and it holds true especially so in signage. The pop-out-to-pop-in ratio should never exceed 1:3. That means the scene should appear to extend into the display at least three times further than things appear to come out of it. And that’s where creativity and taste come in. If all the ad agencies can come up with for an idea is some cheesy jump-out-at-you gimmick, it’s going to be 1950 all over again, and all that technology and investment will go down the drain. Hopefully consumers will boycott the advertisers who revert to such tactics: just say no to bad ads and vote with your dollars.
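For the programmatically inclined, the rule of thumb above can be sketched as a simple depth-budget check. This is an illustrative helper, not any vendor's tool; the function name and the unit-agnostic treatment of depth are my own assumptions.

```python
# Hypothetical check of the 1:3 pop-out-to-pop-in rule of thumb.
# Depths are measured from the screen plane in any consistent unit:
# pop_out = how far the nearest object appears in front of the screen,
# pop_in  = how far the deepest object appears behind it.

def respects_depth_budget(pop_out: float, pop_in: float) -> bool:
    """Return True if pop-out stays within one third of pop-in (ratio <= 1:3)."""
    if pop_out < 0 or pop_in < 0:
        raise ValueError("depths must be non-negative")
    return pop_out * 3 <= pop_in

# A scene reaching 0.5 m out of the screen needs at least 1.5 m of
# apparent depth behind the screen to stay within the rule.
print(respects_depth_budget(0.5, 1.5))  # True
print(respects_depth_budget(1.0, 1.5))  # False: too much pop-out
```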

Seeing clearly – or at all

At the Nvidia analysts' conference in Santa Clara in March, Jen-Hsun Huang asked in exasperation, why can't we just run a video? He was commenting on the difficulty of getting a video to run seamlessly on any machine. It was almost an aside as he described some of the things GPU compute could do to make life better. He repeated it, as he often does, but still didn't get any reaction from the audience, and so he went on. But it struck me, and I commented about it to him later. I said that Nvidia should do whatever it could to make that happen.

I was at the Dimension3 conference to give a talk on the state of the industry for S3D and had two videos embedded in my PowerPoint presentation. I had problems with one of the videos in the conversion process from YouTube to H.264. After I finally got it working and sent my slides to the conference organizers, I asked them to run the slides in presentation mode and let me know if the videos played. They did, and they did. But I was still nervous about them, right up until it came time for me to speak. It was a big theater-sized room (used for showing big stereovision movies), everyone's slides were on a MacBook, and when I got to the first video (links later) all I got was a black screen. If you've never experienced the millisecond of OH S___T standing in front of an auditorium and having your presentation fail, it's something like walking out of the bathroom with no clothes on to a big surprise party. "Sorry," the organizer said, "the CODEC is missing."

I recovered (not my first time at bat) and described the video as best I could, trying not to stammer, went through a couple more slides, and got to the next one that had a video in it and, yep, same thing. The same "sorry, … no CODEC" explanation, followed by more hand-waving and stammering by me.

And while all that was going on, I could hear Huang saying: why can't videos just play?

Why can’t they?

Why do we have to have fifteen CODECs, and if we do have to have fifteen, why can't a GPU or the CPU figure out which one is needed and run it automatically? Nvidia has a software program they call Optimus that can figure out what program is running and decide if it needs a big GPU or can get by with an integrated GPU. It's the same kind of thing they and AMD do to sense a program for S3D and other functions. Why can't they sniff the header of a video and cause the right CODEC to come up automatically? You'd expect the OS to do that, but Microsoft has had 15 years since Windows 95 was introduced and hasn't seemed to be able to figure it out yet. And yeah, yeah, I know about the licensing issues: someone has to pay a few pennies, or maybe even a whole dollar, to get some of the CODECs. Big deal; just do it, will you? I thought Apple would do it, but my experience in Paris proved that not to be possible either. Obviously I have all the CODECs on my machines, because the videos ran fine.
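Sniffing a video header really is that cheap, which is what makes the situation so frustrating. Here is a minimal sketch of what "sniff the header" means: the magic numbers below are the real signatures of common container formats, but the function itself is illustrative and not any vendor's API. Note too that this identifies the container; a real player would then parse the container to learn which codec the streams inside actually use.

```python
# Illustrative header sniffing: guess a video's container format from the
# first few bytes of the file. The byte signatures are the published magic
# numbers for each container; the function name and return strings are
# made up for this example.

def sniff_container(data: bytes) -> str:
    if len(data) >= 12 and data[4:8] == b"ftyp":
        return "MP4/QuickTime family"        # ISO Base Media File Format
    if data[:4] == b"RIFF" and data[8:12] == b"AVI ":
        return "AVI"                         # RIFF container with AVI form type
    if data[:4] == b"\x1a\x45\xdf\xa3":
        return "Matroska/WebM"               # EBML header magic
    if data[:3] == b"FLV":
        return "Flash Video"
    if data[:4] == b"OggS":
        return "Ogg"
    return "unknown"

# A typical MP4 starts with a size field followed by an 'ftyp' box.
print(sniff_container(b"\x00\x00\x00\x20ftypisom" + b"\x00" * 4))
# prints "MP4/QuickTime family"
```

An OS or media framework doing this at file-open time could map the result to an installed decoder, or tell the user exactly which one is missing, instead of showing a black screen.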

Just make it happen will you?