Ever since I can remember, we have been striving to design a machine that will make life easier and help us communicate better. Word processors are better than typewriters. Email is better than physical mail. The web is better than – well, everything.
Today we are looking at what Intel calls Perceptual Computing. We will talk to, wave our hands at, stroke, punch, and tap at our machines. And why? So we can capture our thoughts as fast as we get them. And why do we want to do that? Because our thoughts—especially MY thoughts—are freakin' brilliant, that's why: and you need to hear them, the sooner the better. But how can I get my brilliant thoughts to you? We don't talk on the phone anymore. My brilliant thoughts are more than 140 characters long – way longer – way. So I need to get them in printed form, ASAP, so you can read them and benefit from them before you go and do something stupid—like read someone else's almost-brilliant thoughts.
Worse, we're putting all this effort into creating words at the speed of thought—but nobody reads anymore. All we do now is watch YouTube videos, sports events, Downton Abbey, and go to the movies. If we read anything (by the way, are you still reading this?), our national, maybe worldwide attention deficit-hyperactivity disorder (ADHD) kicks in, causing us to look away after five to seven words, and then forget the first three or four. It would make considerably more sense if we could automatically turn our ideas into drawings. Suppose you could think of an intricate mechanical part and somehow, by talking or waving, create a CAD drawing. As long as there's no long description associated with it, that might be a way to convey information.
Are we going back to the future, when humans communicated with pictograms? If a picture is worth about a thousand words, then that would indeed be effective and efficient. Of course the hard part is, and always has been, agreeing on the symbols – try getting an Arab to understand simple Chinese, or either of them to understand Mayan or Egyptian. But with computers it might be possible to avoid a new tower of babble. Might be, except our computers are already towers of babble – Apples can't talk to PCs, which can't talk to Androids, which have no idea what a Linux server is saying. And if that isn't amusing enough for you, put a lawyer in a room with a bunch of surgeons and have them explain airfoil dynamics to you. In fact, our communications are so bad that most of the time none of these people or machines or programs even recognize what the other is trying to communicate.
So now we're going to expand our vocabulary of not understanding with gestures that one machine won't recognize and that I had to learn from another machine.
My beloved Dragon speech recognition doesn't recognize me when I have a cold—and I've been talking to this damn program for 15 years. I once used it on a phone call to see if I could capture notes better than I could write them—now THAT was funny. And speaking of writing them: I have pretty decent handwriting, and I can read that of most other people, but my kids and some of our employees can't read cursive. Someone asked the other day—did we really have to learn the multiplication table and how to write cursive? Today we have computers that write in fake cursive to make a document look like it's from a real person. How are those direct-mail tools going to work if folks can't even read cursive? It's an example of solving the wrong problem: we've built a writing machine for people who can't read.
In some of Asimov's early stories, his future solved this by having reading machines—we have that today; they're called audiobooks. So the machine we should be designing is one that connects our thoughts to a video and/or an MP3 player. Now we're getting close to the Vulcan mind meld. How about a mind meld that connects via Bluetooth?