On being natural
The technology disappears when it’s working
We are advancing quickly into the next generation of computing, aided and enabled, of course, by Moore’s Law and an expanding army of clever programmers who know, or are learning, how to exploit the new, tiny, yet powerful hardware. It truly is a generational revolution.
The revolution was started by the smartphone and its volume production, which rapidly brought down the price of nanoscale sensors and forced processor makers to learn how to squeeze performance out of their designs while remaining ultra-conservative about the power consumed in doing so.
The smartphone gave us the platform, and the parts. And that fueled the imaginations of software developers, resulting in a flood of new, clever, never-thought-of-before applications and associated APIs.
The smartphone also allowed us to look at other ways to communicate with it, and with computers in general. Prior to the smartphone we were locked into the WIMP paradigm: windows, icons, menus, pointer (the “M” also stood for mouse, or the “P” implied it). Basically, we had a keyboard and a mouse. Experiments were tried with voice recognition and stylus input, but although enticing, they were crude and ultimately disappointing at the time.
The smartphone showed us how to give our computer the finger. We had our first natural UI for a computer, limited, to be sure, but totally empowering, giving us new freedom and productivity.
And then we found we could talk to our computers and phones more effectively, thanks to the unrelenting improvement in processors, memory, miniaturization, algorithm development, and AI. Now not only could we give our computer the finger, we could tell it to go to hell; it was a liberating moment.
Augmented reality, not a new technology, found its way to our smartphones very quickly as the screens got more pixels, the processors behind them got more powerful, and the camera sensors got faster with higher resolution.
By now we had four UIs, and two of them were natural: fingers and voice. We, and our machines, were evolving, and in a positive direction.
Pens, or styluses, have been part of the computer interface paradigm since the 1950s, when light pens were used to mark unidentified aircraft on a computer screen. They got better and have been used by designers and artists for several decades. A pen-like stylus brings accuracy and control to the UI—if you can draw with one, then you certainly can write with one. Combined with new AI-based OCR and clever word management from companies like MyScript, we had more capabilities. This year we witnessed the introduction of a new standard to make pens as universal and interchangeable as a mouse or keyboard. Now we had a fifth UI, and a third natural user interface (NUI): the pen, joining fingers and voice.
The sixth, and I think the best yet, is eye tracking and natural interaction. Eyefluence is the leader in this area and has developed a system that lets the user of (e.g.) AR glasses manipulate data effortlessly by simply looking at it. We have an expanded story on the company and technology.
Now, it could be argued that with eye tracking all you’ve done is substitute one input device (mouse, keyboard) for a set of glasses, and today that would be true. However, I believe AR glasses will become as unobtrusive as sun or corrective glasses are today. One company, LaForge, is developing just such glasses, as are others. So in just a few years, maybe five at most, we will have lightweight, normal-looking AR glasses with an eye-tracking interface.
What could be more natural than that, and what could make the computer disappear more conveniently?
- NEW INPUT MODES: Eyefluence, Stylus standards
- CAD NEWS: Autodesk expands Fusion, SolidWorks 2017 version, Siemens Analyst Conference, ODA announces Teigha BIM
- HARDWARE: Seeing like a human
- AR WATCH: Opinions on AR
- VR WATCH
- EDITORIAL: No more passwords?
- TECH INSIDER: March begins
- BACK PAGE: On being natural