Augmented Reality Rising

Last week, I gave a presentation at the Premier CIO Summit in Connecticut on the Future of User Interaction With Technology, especially the combined effects of developments in communicating without a keyboard, augmented reality (AR) and machine learning.  I’ve been interested in this for some time and have written about AR as part of the Wearables movement and what I call EyeTech.

First, it would help to distinguish these digital realities. In virtual reality, a person is placed in a completely virtual world, eyes fully covered by a VR headset – it’s 100% digital immersion. It is ideal for games, space exploration, and movies, among other uses yet to be created.

With augmented reality, there is a digital layer that is added onto the real physical world. People look through a device – a smartphone, special glasses and the like – that still lets them see the real things in front of them.

Some experts make a further distinction by talking about mixed reality in which that digital layer enables people to control things in the physical environment. But again, people can still see and navigate through that physical environment.

When augmented reality was first made possible, especially with smartphones, there were a variety of interesting but not widespread uses. A good example is the way that some locations could show the history of what happened in a building a long time ago, so-called “thick-mapping”.

There were business cards that could pop up an introduction and a variety of ancillary information that can’t fit on a card, as in this video.

There were online catalogs that enabled consumers to see how a product would fit in their homes. These videos from Augment and Ikea are good examples of what’s been done in AR.

Now, a few years later, this audience was very interested in learning about and seeing what’s going on with augmented reality. And why not? After a long time under the radar or in the shadow of virtual reality hype, there is an acceleration of interest in augmented (and mixed) reality.

Although it was easy to satirize the players in last year’s Pokémon Go craze, that phenomenon brought renewed attention to augmented reality via smart phones.

Just in the last couple of weeks, at the annual Facebook developers conference, Mark Zuckerberg said that he thinks augmented reality is going to have tremendous impact and that he wants to build the ecosystem for it. See https://www.nytimes.com/2017/04/18/technology/mark-zuckerberg-sees-augmented-reality-ecosystem-in-facebook.html

As the beginning of the article puts it:

“Facebook’s chief executive, Mark Zuckerberg, has long rued the day that Apple and Google beat him to building smartphones, which now underpin many people’s digital lives. Ever since, he has searched for the next frontier of modern computing and how to be a part of it from the start.

“Now, Mr. Zuckerberg is betting he has found it: the real world. On Tuesday, Mr. Zuckerberg introduced what he positioned as the first mainstream augmented reality platform, a way for people to view and digitally manipulate the physical world around them through the lens of their smartphone cameras.”

And shortly before that, an industry group – UI LABS and The Augmented Reality for Enterprise Alliance (AREA) – united to plot the direction and standards for augmented reality, especially now that the applications are taking off inside factories, warehouses and offices, as much as in the consumer market. See http://www.uilabs.org/press/manufacturers-unite-to-shape-the-future-of-augmented-reality/

Of course, HoloLens from Microsoft continues to provide all kinds of fascinating uses of augmented reality as these examples from a medical school or field service show.

Looking a bit further down the road, the trend that will make this all the more impactful for CIOs and other IT leaders is how advances in artificial intelligence (even affective computing), the Internet of Things and analytics will provide a much deeper digital layer that will truly augment reality. This then becomes part of a whole new way of interacting with and benefiting from technology.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

Eye Tech

Last month, I wrote about Head Tech – technology that can be worn on the head and used to control the world about us. Most of those products act as an interface between our brain waves and devices that are managed by computers reading those waves.

The other related area of Head Tech recognizes the major role of our eyes literally as windows to the world we inhabit.

Google may have officially sidelined its Glass product, but its uses live on, and similar products continue to be developed by a number of companies that want to demonstrate the potential of the idea better than Google did. There are dozens of examples, but, to start, consider these three.

Carl Zeiss’s Smart Optics subsidiary accomplished the difficult technical task of embedding the display in what looks to everyone like a pair of regular curved eyeglasses. Oh, and they could even be glasses that provide vision correction. Zeiss is continuing to perfect the display while trying to figure out the business challenge of bringing this to market.


You can see a video with the leader of the project at https://youtu.be/MSUqt8M0wdo and a report from this year’s CES at https://www.youtube.com/watch?v=YX5GWjKi7fc

Also offering something that looks like regular glasses and is not a fashion no-no is LaForge Optical’s Shima, which has an embedded chip so it can display information from any app on your smartphone. It’s in pre-order now for shipment next year, but you can see what they’re offering in this video. A more popular video provides their take on the history of eye glasses.

While Epson is not striving to devise something fashionable, it is making its augmented reality glasses much lighter. This video shows the new Moverio BT-300 which is scheduled to be released in a few months.

Epson is also tying these glasses to a variety of interesting, mostly non-consumer, applications. Last week at the Interdrone Conference, they announced a partnership with one of the leading drone companies, DJI, to better integrate the visuals coming from the unmanned aerial camera with the glasses.

DAQRI is bringing to market next month an updated version of its Smart Helmet for more dangerous industrial environments, like field engineering.  Because it is so much more than glasses, they can add all sorts of features, like thermal imaging. It is a high end, specialized device, and has a price to match.

At a fraction of that price, Metavision has developed and will release “soon” its second-generation augmented reality headset, the Meta 2. Its CEO’s TED talk will give you a good sense of Metavision’s ambitions with this product.


Without a headset, Augmenta has added recognition of gestures to the capabilities of glasses from companies like Epson. For example, you can press on an imaginary dial pad, as this little video demonstrates.

This reminds me a bit of the use of eye tracking from Tobii that I’ve included in presentations for the last couple of years. While Tobii also sells a set of glasses, their emphasis is on tracking where your eyes focus to determine your choices.

One of the nice things about Tobii’s work is that it is not limited to glasses. For example, their EyeX works with laptops as can be seen in this video. This is a natural extension of a gamer’s world.

Which gets us to a good question: even if they’re less geeky looking than Google’s product, why do we need to wear glasses at all? Among other companies and researchers, Sony has an answer for that – smart, technology-embedded contact lenses. But Sony also wants the contact lens to enable you to take photos and videos without any other equipment, as they hope to do with a new patent.

So we have HeadTech and EyeTech (not to mention the much longer established EarTech) and who knows what’s next!

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/150401402048/eye-tech]

Head Tech

The discussion about wearable technology recently has mostly been
about various devices, like watches and bands, that we wear on our
wrists to communicate, measure our health, etc.  But from a
technological perspective, if not yet a commercial viewpoint, these are
old hat.

How about some new hats?  Like the ones below …


These more interesting – and maybe a bit more eerie – wearables are what I’d call “Head Tech”.  That’s technology that we place on our heads.

Last year, following along the lines of various universities such as the University of Minnesota, the Portuguese firm Tekever demonstrated Brainflight, which enabled a person wearing an electroencephalogram (EEG) skull cap with more than a hundred electrodes to control the flight of a drone with their thoughts.  Here’s the BBC report – https://www.youtube.com/watch?v=8LuImMOZOo0

This has become a fascination of so many engineers that a few months ago the University of Florida held the first brain-controlled drone race.  Its larger goal was to popularize the use of brain-computer interfaces.

Of course, anything that gets more popular faces its critics and satirists.  So one of GE’s more memorable commercials is called BrainDrone – https://www.youtube.com/watch?v=G0-UjJpRguM

Not to be outdone, a couple of weeks ago, the Human-Oriented Robotics and Control Lab at Arizona State University unveiled a system to use that approach to control not only a single drone, but a swarm of drones.  You can see an explanation in this video – https://vimeo.com/173548439

While drones have their recreational and surveillance uses, they’re only one example.  Another piece of Head Tech gear comes from Smartstones, working with Emotiv’s less medical-looking EEG.


It enables people who are unable to speak to use their minds to communicate.  As they describe it:

“By pairing our revolutionary sensory communication app :prose with an EEG headset powered by Emotiv, we are enabling a thought-to-speech solution that is affordable, accessible and mobile for the first time ever. Users can record and command up to 24 unique phrases that can be spoken aloud in any language.”

There’s a very touching video here – https://vimeo.com/163235266

Emotiv has other ambitious plans for their product, as they relate in this video – https://vimeo.com/159560626

The geekiness of some of these may remind you of Google Glass. Unlike Google Glass, though, they offer dramatic value for people who have special but critical needs.  For that reason, I expect some version of these will be developed further and will succeed.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/148399117751/head-tech]

Intelligent Conversations: The New User Interface?

Recently there have been some interesting articles about how the graphical user interface we’ve had on our screens for many years is gradually being replaced by a new user interface – the conversation.

Earlier this month, Matt Gilligan wrote on his Medium blog:

Forget “there’s an app for that” — what’s next is “there’s a chat for that.”

And just a few days ago, WIRED magazine had an article titled “The Future of UI Design? Old-School Text Messages”.

Some of this is a result of the fact that people now use the web more often on their smartphones and tablets than on laptops and desktop computers.  With their bigger screens, the older devices have more room for a nice graphical interface than smartphones do – even the newest smartphones, which always seem to be bigger than the previous generation.

And many people communicate much of the day through conversations that are composed of text messages.  There’s a good listing of some of the more innovative text apps in “Futures of text”.

The idea of a conversational interface is also a reflection of the use of various personal assistants that you talk to, like Siri.  These, of course, have depended on developments in artificial intelligence, in particular the recognition and processing of natural (human) spoken language.  Much research is being conducted to make these better and less the target of satire – like this one from the Big Bang Theory TV series.

There’s another branch of artificial intelligence research that should be resurrected from its relative oblivion to help out – expert systems.  An expert system attempts to automate the kind of conversation – especially a dynamic, intelligent sequence of questions and answers – that would occur between a human expert and another person.  (You can learn more at Wikipedia and GovLab.)


In the late 1980s and early 1990s, expert systems were the most hyped part of the artificial intelligence community.  

As I’ve blogged before, I was one of those involved with expert systems during that period.  Then that interest in expert systems rapidly diminished with the rise of the web and in the face of various technological obstacles, like the hard work of acquiring expert knowledge.   More recently, with “big data” being collected all around us, the big focus in the artificial intelligence community has been on machine learning – having AI systems figure out what that data means.

But expert systems work didn’t disappear altogether.  Applications have been developed for medicine, finance, education and mechanical repairs, among other subjects.

It’s now worth raising the profile of this technology much higher if the conversation becomes the dominant user interface.  The reason is simple: these conversations haven’t been very smart.  Most of the apps are good at getting basic information as if you typed it into a web browser.  Beyond that?  Not so much.

There are even very funny videos of the way these work or rather don’t work well.  Take a look at “If Siri was your mom”, prepared for Mother’s Day this year with the woman who was the original voice of Siri as Mom.  

In its simplest form, an expert system may be represented as a smart decision tree based on the knowledge and research of experts.

[image: a simple decision-tree diagram]

It’s pretty easy to see how this approach could be used to make sure that the conversation – by text or voice – is useful for a person.

There is, of course, much more sophistication available in expert systems than is represented in this picture.  For example, some can handle probabilities and other forms of ambiguity.  Others can be quite elaborate and can include external data, in addition to the answers from a person – for example, his/her temperature or speed of typing or talking.
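To make that concrete, here is a minimal sketch of a decision-tree expert system driving a text conversation. It is illustrative only – the questions, answers and advice are invented for this example, not drawn from any real knowledge base – but it shows how each answer selects the next question until the system reaches a conclusion.

```python
# A minimal, hypothetical decision-tree "expert system" that drives a
# text conversation.  Each node either asks a question and routes on the
# answer, or delivers a conclusion.  Real expert systems add probabilities,
# external data and much richer knowledge bases.

class Node:
    def __init__(self, question=None, answers=None, conclusion=None):
        self.question = question      # text to ask the user
        self.answers = answers or {}  # maps an answer to the next Node
        self.conclusion = conclusion  # final advice, if this is a leaf

# Invented knowledge base: a tiny IT-helpdesk tree.
tree = Node(
    question="Is the problem with hardware or software?",
    answers={
        "hardware": Node(
            question="Does the device power on? (yes/no)",
            answers={
                "yes": Node(conclusion="Check the cable and port connections."),
                "no": Node(conclusion="Try a known-good power supply."),
            },
        ),
        "software": Node(
            question="Did the problem start after a recent update? (yes/no)",
            answers={
                "yes": Node(conclusion="Roll back the update and retest."),
                "no": Node(conclusion="Collect the error message and escalate."),
            },
        ),
    },
)

def converse(node):
    """Walk the tree, asking questions until we reach a conclusion."""
    while node.conclusion is None:
        reply = input(node.question + " ").strip().lower()
        if reply in node.answers:
            node = node.answers[reply]
        else:
            print("Please answer one of:", ", ".join(node.answers))
    print(node.conclusion)

if __name__ == "__main__":
    converse(tree)
```

The same structure could sit behind a text-message bot or a voice assistant; the probabilistic and data-driven refinements mentioned above would replace the simple exact-match routing with scoring over several possible next steps.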

The original developers of Siri have taken what they’ve learned from that work and are building their next product.  Called “Viv: The Global Brain”, it’s still pretty much in stealth mode so it’s hard to figure out how much expert system intelligence is built into it.  But a story about them on WIRED last year showed an infographic which implies that an expert system has a role in the package.  See the lower left on the second slide.

[images: the two slides of the infographic from the WIRED story]

Personally I like the shift to a conversational interface with technology since it becomes available in so many different places and ways.  But I’ll really look forward to it when those conversations become smarter.  I’ll let you know as I see new developments.

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/122879360432/intelligent-conversations-the-new-user-interface]

New Ways You Will Interact With Cyberspace?

Last week I gave some examples of wearable technology.  This week the focus goes beyond wearables to other ways of interacting with cyberspace.  And, as I said last week, some or all of these products may never become commercially viable, but they give you an idea of where things are headed.

The folks at Ubi announced two months ago that they have left the crowdfunding and development stage and released their “ubiquitous computer” (at least from within eight feet).  There’s no display screen, no mouse, no keyboard – you only interact with Ubi via your voice and its voice.


Sweden’s ShortCutLabs have designed the Flic button to be used as an all-purpose remote control for all the things in your house that could be controlled remotely.  That includes your smart phone, your lights, and even somehow ordering a pizza.  See their Indiegogo video at http://youtu.be/MDsjBh2xOgQ


Of course, if you want a really universal, but physical, remote control, then you’ll have to depend on your hand.  With that in mind, Onecue proposes that you control “your media and smart home devices using simple touch-free gestures” of your hand.  Their pre-order video is at http://youtu.be/_1QnnRn47r0

From Berlin, Senic has offered up Flow, a gesture-based and more general replacement for the computer mouse. What’s intriguing about their work is that they have built dozens of interfaces to various products and that Flow is open for use by other developers. Their Indiegogo video is at http://vimeo.com/112589339


Three months ago, University of Washington researchers demonstrated how hand gestures could control your smart phone.

And, even as driverless cars are being perfected, there is still interest in enhancing the blended virtual and physical experience of humans driving cars.  For example, Visteon, a longtime supplier of “cockpit electronics” to the auto industry, recently announced its development of the HMeye Cockpit, which it describes as:

“an automotive cockpit concept demonstrating how drivers can select certain controls through eye movement and head direction. Hidden eye-tracking cameras capture this data to deliver an advanced human-machine interaction (HMI).”

Intel has been working more generally on smart cameras with depth sensing.  Its RealSense technology will start to have various applications early this year, some of which they showed off at the CES show last week, as reported by the Verge.

Haptics – touching and feeling your connection with technology – is one of the newer frontiers of user interface research.

From the Shinoda-Makino lab at the University of Tokyo comes HaptoMime, a “Mid-air Haptic Virtual Touch Panel” that gives tactile feedback.  Using ultrasound, it gives the user the sense of interacting with a floating holographic-like image.  You can read more in New Scientist and see the lab’s video at https://www.youtube.com/watch?v=uARGRlpCWg8

Finally, a few weeks ago, computer scientists at the University of Bristol announced their latest advance in enhancing the real world:

“Technology has changed rapidly over the last few years with touch feedback, known as haptics, being used in entertainment, rehabilitation and even surgical training. New research, using ultrasound, has developed a virtual 3D haptic shape that can be seen and felt.”

You can see their demonstration at http://youtu.be/4O94zKHSgMU


These same scientists two months ago also announced a clever use of mirrors:

“In a museum, people in front of a cabinet would see the reflection of their fingers inside the cabinet overlapping the exact same point behind the glass. If this glass is at the front of a museum cabinet, every visitor would see the exhibits their reflection is touching and pop-up windows could show additional information about the pieces being touched.”

The mouse and keyboard are so last century!

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/108077927996/new-ways-you-will-interact-with-cyberspace]

Does Your Website Talk To People The Way They Think?

(This blog post is a broadening of the recent post on gamification.)

Almost all governments have some kind of website.  Aside from when these sites just don’t work because of bad links or insufficient computer resources to meet demand, they mostly feel like an electronic version of old-style government, whose employees were often accused of treating other people as “just a number”. These websites talk at people in a kind of monotone, not having a conversation or interaction.

Yet, most of us realize that people have different interests, personalities, cognitive styles and ways of interacting with others.  Thus, to be most effective,  a website should change to reflect who is interacting with it.  

Unfortunately, the only variability that exists in most websites – public or private sector – is usually based on purchasing patterns (such as the different web pages and pricing that appear on Amazon’s website, depending upon your past consumer behavior) or perhaps on providing languages other than English.

Glen Urban, who is a marketing professor at MIT’s Sloan School of Management, calls this the “empathetic Web”.  (See the article, “Morph the Web To Build Empathy, Trust and Sales” by him and his colleagues at http://sloanreview.mit.edu/article/morph-the-web-to-build-empathy-trust-and-sales/)

As their summary states:

We’ve long been able to personalize what information the Internet tells us — but now comes “Web site morphing,” and an Internet that personalizes how we like to be told. For companies, it means that communicating — and selling — will never be the same.

The authors distinguish between people on the basis of two pairs of cognitive preferences (visual vs. verbal and analytic vs. holistic).  At the very least, a website should reflect these cognitive differences.  

But it is also worth thinking about other differences. For example, many people prefer a conversational style to the completion of a long form.  The widespread use of smart phones to access the Internet has increased the need to have a more conversational style on the web since the screen is too small to do otherwise.  (That’s why games are a useful model to consider.)  
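As a rough sketch of what that could look like in practice – with made-up style names and layout choices, not the actual morphing algorithm from the MIT work – a site might map the two cognitive dimensions onto different presentations of the same content:

```python
# Hypothetical sketch of "website morphing": choose how to present the same
# content based on a visitor's inferred cognitive style.  The styles and
# layout options are invented for illustration; Urban and his colleagues
# infer the style statistically from clickstream behavior.

from dataclasses import dataclass

@dataclass
class CognitiveStyle:
    visual: bool    # True = prefers charts and images, False = prefers text
    analytic: bool  # True = prefers detail, False = prefers the big picture

def choose_presentation(style: CognitiveStyle) -> dict:
    """Map a cognitive style to layout choices for the same underlying content."""
    return {
        "content_format": "infographic" if style.visual else "narrative_text",
        "detail_level": "full_comparison_table" if style.analytic else "short_summary",
        "data_entry": "conversational_q_and_a",  # instead of one long form
    }

# Example: a visual, holistic visitor gets an infographic with a short summary.
print(choose_presentation(CognitiveStyle(visual=True, analytic=False)))
```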

As the authors note, this is not just a matter of making a website more convenient, but also is essential in building trust, which helps a private company increase sales – and is an absolute requirement for any public official.

© 2013 Norman Jacknis

[http://njacknis.tumblr.com/post/64296674666/does-your-website-talk-to-people-the-way-they-think]