Last week I gave some examples of wearable technology. This week the focus goes beyond wearables to other ways of interacting with cyberspace. And, as I said last week, some or all of these products may never become commercially viable, but they give you an idea of where things are headed.
The folks at Ubi announced two months ago that they had left the crowdfunding and development stage and released their “ubiquitous computer” – ubiquitous, at least, within about eight feet of the device. There’s no display screen, no mouse, no keyboard – you and Ubi interact entirely by voice.
Sweden’s ShortCutLabs have designed the Flic button as an all-purpose remote control for all the things in your house that could be controlled remotely. That includes your smartphone, your lights, and even, somehow, ordering a pizza. See their Indiegogo video at http://youtu.be/MDsjBh2xOgQ
Of course, if you want a truly universal – but still physical – remote control, you’ll have to depend on your hand. With that in mind, Onecue proposes that you control “your media and smart home devices using simple touch-free gestures”. Their pre-order video is at http://youtu.be/_1QnnRn47r0
From Berlin, Senic have offered up Flow, a gesture-based controller, as a more general replacement for the computer mouse. What’s intriguing about their work is that they have worked on dozens of interfaces to various products, and their work is open for use by other developers. Their Indiegogo video is at http://vimeo.com/112589339
Three months ago, University of Washington researchers demonstrated how hand gestures could control your smart phone.
And, even as driverless cars are being perfected, there is still interest in enhancing the blended virtual and physical experience of humans driving cars. For example, Visteon, a long-time supplier of “cockpit electronics” to the auto industry, recently announced its development of the HMeye Cockpit, which it describes as:
“an automotive cockpit concept demonstrating how drivers can select certain controls through eye movement and head direction. Hidden eye-tracking cameras capture this data to deliver an advanced human-machine interaction (HMI).”
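Visteon has not published how HMeye works internally, but one common pattern in gaze-based interfaces is dwell-time selection: a control activates once the driver’s gaze rests on it long enough. As a rough, purely illustrative sketch (every name, region, and threshold here is hypothetical, not Visteon’s):

```python
from dataclasses import dataclass

@dataclass
class Control:
    """A selectable cockpit control occupying a rectangular screen region."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

class DwellSelector:
    """Select a control when gaze stays inside it for `dwell_s` seconds."""
    def __init__(self, controls, dwell_s=0.8):
        self.controls = controls
        self.dwell_s = dwell_s   # how long the gaze must rest on one control
        self.current = None      # control the gaze is currently resting on
        self.elapsed = 0.0

    def update(self, gaze_x, gaze_y, dt):
        """Feed one gaze sample; return the control's name when selected."""
        hit = next((c for c in self.controls
                    if c.contains(gaze_x, gaze_y)), None)
        if hit is not self.current:
            # Gaze moved to a different control (or off all of them): reset.
            self.current, self.elapsed = hit, 0.0
        elif hit is not None:
            self.elapsed += dt
            if self.elapsed >= self.dwell_s:
                self.elapsed = 0.0
                return hit.name
        return None
```

In practice the camera would stream gaze samples at 30+ Hz into `update`, and a real system would add head-direction data and smoothing, which this sketch omits.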
Intel has been working more generally on smart cameras with depth sensing. Its RealSense technology will start to appear in various applications early this year, some of which Intel showed off at the CES show last week, as reported by The Verge.
Haptics – touching and feeling your connection with technology – is one of the newer frontiers of user interface research.
From the Shinoda-Makino lab at the University of Tokyo comes HaptoMime, a “Mid-air Haptic Virtual Touch Panel” that gives tactile feedback. Using ultrasound, it gives the user the sense of interacting with a floating holographic-like image. You can read more in New Scientist and see the lab’s video at https://www.youtube.com/watch?v=uARGRlpCWg8
Finally, a few weeks ago, computer scientists at the University of Bristol announced their latest advance in enhancing the real world:
“Technology has changed rapidly over the last few years with touch feedback, known as haptics, being used in entertainment, rehabilitation and even surgical training. New research, using ultrasound, has developed a virtual 3D haptic shape that can be seen and felt.”
You can see their demonstration at http://youtu.be/4O94zKHSgMU
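The Bristol group has not released code here, but the general technique behind mid-air ultrasound haptics is a phased array: each transducer’s signal is phase-shifted so that all the waves arrive in phase at a chosen focal point, where constructive interference produces enough acoustic radiation pressure to be felt on the skin. A minimal sketch of that phase calculation (the array layout and 40 kHz frequency are typical assumptions, not the researchers’ actual hardware):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
FREQ = 40_000.0          # Hz, a common ultrasonic transducer frequency

def phase_delays(transducer_positions, focal_point):
    """Phase (radians) to apply to each transducer so its wave arrives
    at focal_point in phase with every other transducer's wave."""
    k = 2 * math.pi * FREQ / SPEED_OF_SOUND   # wavenumber
    fx, fy, fz = focal_point
    delays = []
    for (x, y, z) in transducer_positions:
        d = math.sqrt((fx - x) ** 2 + (fy - y) ** 2 + (fz - z) ** 2)
        # Advancing the phase by k*d cancels the propagation delay over d.
        delays.append((k * d) % (2 * math.pi))
    return delays

# A hypothetical 4x4 array on the z = 0 plane with 1 cm pitch,
# focused 10 cm above the array's center.
array = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
phases = phase_delays(array, (0.015, 0.015, 0.10))
```

Moving the focal point around quickly, and modulating it, is what lets the system trace out a “shape” the hand can feel.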
Two months ago, these same scientists also announced a clever use of mirrors:
“In a museum, people in front of a cabinet would see the reflection of their fingers inside the cabinet overlapping the exact same point behind the glass. If this glass is at the front of a museum cabinet, every visitor would see the exhibits their reflection is touching and pop-up windows could show additional information about the pieces being touched.”
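The trick described above is plain mirror geometry: the reflection of a fingertip appears at the finger’s mirror image across the glass plane, so if an exhibit sits at that mirrored point, the system can treat it as “touched.” A small sketch of that geometry (the coordinates, tolerance, and exhibit names are invented for illustration):

```python
import math

def mirror_point(finger, glass_z=0.0):
    """Reflect a 3D point across the glass plane z = glass_z:
    a fingertip at depth d in front of the glass appears d behind it."""
    x, y, z = finger
    return (x, y, 2 * glass_z - z)

def touched_exhibit(finger, exhibits, tolerance=0.02, glass_z=0.0):
    """Return the name of the exhibit coinciding with the finger's
    reflection, within `tolerance` metres, else None."""
    reflected = mirror_point(finger, glass_z)
    for name, position in exhibits.items():
        if math.dist(reflected, position) <= tolerance:
            return name
    return None
```

A real installation would also need hand tracking and per-visitor viewpoints, but the core mapping from finger to “touched” exhibit is just this reflection.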
The mouse and keyboard are so last century!
© 2015 Norman Jacknis