Robots Just Want To Have Fun!

There are dozens of novels about dystopian robots – our future “overlords”, as they are portrayed.

In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.

While we are only beginning to understand the implications of artificial intelligence and robotics, robots are developing rapidly and going beyond those traditional tasks.

Robots are also showing off their fun and increasingly creative side.

Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.

Dancing

Last August, there was a massive robot dance in Guangzhou, China. It achieved a Guinness World Record for the “most robots dancing simultaneously”. See https://www.youtube.com/watch?v=ouZb_Yb6HPg or http://money.cnn.com/video/technology/future/2017/08/22/dancing-robots-world-record-china.cnnmoney/index.html

Not to be outdone, at the Consumer Electronics Show in Las Vegas, a strip club had a demonstration of robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc

Music

Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.

The Flow Machines project in Europe has also been using AI to create music in the style of more famous composers. See, for instance, its DeepBach, “a deep learning tool for automatic generation of chorales in Bach’s style”. https://www.youtube.com/watch?time_continue=2&v=QiBM7-5hA6o
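
DeepBach itself uses deep learning, but the underlying idea of generating music statistically can be illustrated with a far simpler, classical technique: a first-order Markov chain over notes. To be clear, this is a toy sketch, not how DeepBach works, and the note names and transition table below are invented for illustration:

```python
import random

# Toy transition table: for each note, the notes allowed to follow it.
# These choices are invented for illustration only.
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "F"],
    "E": ["D", "F", "G"],
    "F": ["E", "G", "C"],
    "G": ["E", "C", "A"],
    "A": ["G", "F", "C"],
}

def generate_melody(start="C", length=8, seed=42):
    """Random-walk a melody through the transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(generate_melody())
```

Real systems replace the hand-written table with probabilities learned from a corpus, and deep models like DeepBach condition on far more context than just the previous note.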

Singing

Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s

Also, last August, there was a song composed by AI, although sung by a human – https://www.youtube.com/watch?v=XUs6CznN8pw&feature=youtu.be

There is even AI that will generate poetry – um, song lyrics.

Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, a system for generating topical poetry on a requested subject, like this one called “Bipolar Disorder”:

Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.

Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.

Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.

An altered state of manic episodes,
A journey through the long and winding roads.

Not exactly upbeat, but you could well imagine this being a song too.
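
Systems like Hafez typically fix the rhyming line-ending words first, then generate each line to fit. Real rhyme detection uses pronunciation, not spelling; the deliberately crude sketch below just groups words by their final letters (the suffix heuristic and word list are mine, not the authors’ method):

```python
def crude_rhyme_groups(words, suffix_len=2):
    """Group words by their last few letters - a crude stand-in for
    real phonetic rhyme detection, which would use a pronouncing
    dictionary rather than spelling."""
    groups = {}
    for w in words:
        groups.setdefault(w[-suffix_len:], []).append(w)
    # Keep only suffixes shared by at least two words.
    return {s: ws for s, ws in groups.items() if len(ws) >= 2}

# Ending words from the poem above:
words = ["nation", "medication", "manic", "dynamic", "depression", "lesson"]
print(crude_rhyme_groups(words))
```

Even this toy version finds the “manic”/“dynamic” pairing; the price of using spelling instead of phonetics is that it also lumps together words that merely look alike.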

Finally, there is even the HRP-4C (Miim), which has been under development in Japan for years. Here’s her act – https://www.youtube.com/watch?v=QCuh1pPMvM4#t=3m25s

All singing, all dancing, indeed!

© 2018 Norman Jacknis, All Rights Reserved

Augmented Reality Rising

Last week, I gave a presentation at the Premier CIO Summit in Connecticut on the Future of User Interaction With Technology, especially the combined effects of developments in communicating without a keyboard, augmented reality (AR) and machine learning.  I’ve been interested in this for some time and have written about AR as part of the Wearables movement and what I call EyeTech.

First, it would help to distinguish these digital realities. In virtual reality, a person is placed in a completely virtual world, eyes fully covered by a VR headset – it’s 100% digital immersion. It is ideal for games, space exploration, and movies, among other yet-to-be-created uses.

With augmented reality, there is a digital layer that is added onto the real physical world. People look through a device – a smartphone, special glasses and the like – that still lets them see the real things in front of them.

Some experts make a further distinction by talking about mixed reality in which that digital layer enables people to control things in the physical environment. But again, people can still see and navigate through that physical environment.
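
Underneath any of these digital layers is the same basic operation: projecting a virtual object, anchored at a 3D position in the world, into the user’s current camera view. A minimal pinhole-camera sketch (the focal length and image size are illustrative, roughly a 720p phone camera):

```python
def project_point(point_xyz, focal_px=800.0, cx=640.0, cy=360.0):
    """Project a 3D point in camera coordinates (x right, y down,
    z forward, in metres) onto the image plane of a pinhole camera.
    The intrinsics are illustrative defaults, not from any real device."""
    x, y, z = point_xyz
    if z <= 0:
        return None  # behind the camera, nothing to draw
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# A virtual label anchored 2 m ahead and 0.5 m to the right
# lands right of the image centre:
print(project_point((0.5, 0.0, 2.0)))
```

Production AR systems add camera calibration, device tracking and occlusion handling on top of this, but the projection step itself is exactly this arithmetic.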

When augmented reality first became possible, especially with smartphones, there were a variety of interesting but not widespread uses. A good example is the way that some locations could show the history of what happened in a building long ago, so-called “thick-mapping”.

There were business cards that could pop up an introduction and a variety of ancillary information that couldn’t fit on a card, as in this video.

There were online catalogs that enabled consumers to see how a product would fit in their homes. These videos from Augment and Ikea are good examples of what’s been done in AR.

Now, a few years later, this audience was very interested in learning about and seeing what’s going on with augmented reality. And why not? After a long time under the radar or in the shadow of virtual reality hype, there is an acceleration of interest in augmented (and mixed) reality.

Although it was easy to satirize the players in last year’s Pokémon Go craze, that phenomenon brought renewed attention to augmented reality via smart phones.

Just in the last couple of weeks, Mark Zuckerberg at the annual Facebook developers conference stated that he thinks augmented reality is going to have tremendous impact and he wants to build the ecosystem for it. See https://www.nytimes.com/2017/04/18/technology/mark-zuckerberg-sees-augmented-reality-ecosystem-in-facebook.html

As the beginning of the article puts it:

“Facebook’s chief executive, Mark Zuckerberg, has long rued the day that Apple and Google beat him to building smartphones, which now underpin many people’s digital lives. Ever since, he has searched for the next frontier of modern computing and how to be a part of it from the start.

“Now, Mr. Zuckerberg is betting he has found it: the real world. On Tuesday, Mr. Zuckerberg introduced what he positioned as the first mainstream augmented reality platform, a way for people to view and digitally manipulate the physical world around them through the lens of their smartphone cameras.”

And shortly before that, an industry group – UI LABS and The Augmented Reality for Enterprise Alliance (AREA) – united to plot the direction and standards for augmented reality, especially now that the applications are taking off inside factories, warehouses and offices, as much as in the consumer market. See http://www.uilabs.org/press/manufacturers-unite-to-shape-the-future-of-augmented-reality/

Of course, HoloLens from Microsoft continues to provide all kinds of fascinating uses of augmented reality as these examples from a medical school or field service show.

Looking a bit further down the road, the trend that will make this all the more impactful for CIOs and other IT leaders is how advances in artificial intelligence (even affective computing), the Internet of Things and analytics will provide a much deeper digital layer that will truly augment reality. This then becomes part of a whole new way of interacting with and benefiting from technology.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

Affective Computing

One of the more interesting technologies that has been developing is called affective computing. It’s about analyzing observations of human faces, voices, eye movements and the like to understand human emotions — what pleases or displeases people or merely catches their attention.  It combines deep learning, analytics, sensors and artificial intelligence.

While interest in affective computing hasn’t been widespread, it may be nearing its moment in the limelight. One such indication is that the front page of the New York Times, a couple of days ago, featured a story about its use for television and advertising. The story was titled “For Marketers, TV Sets Are an Invaluable Pair of Eyes.”

But the companies that were featured in the Times article are not the only ones or the first ones to develop and apply affective computing. IBM published a booklet on the subject in 2001.  Before that, in 1995, the term “affective computing” was coined by Professor Rosalind Picard of MIT, who also created the affective computing group in the MIT Media Lab.

In a video, “The Future of Story Telling”, she describes what is essentially the back story to the New York Times article.  In no particular order, among other companies working with this technology today, there are Affectiva, Real Eyes, Emotient, Beyond Verbal, Sension, tACC, nVisio, CrowdEmotion, PointGraB, Eyeris, gestigon, Intel RealSense, SoftKinetic, Elliptic Labs, Microsoft’s VIBE Lab and Kairos.

Affectiva, which Professor Picard co-founded, offers an SDK that reads emotions of people at home or in the office just by using web cams.  Here’s a video that shows their commercially available product at work: https://www.youtube.com/watch?v=mFrSFMnskI4
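
Emotion SDKs like this typically report per-frame scores, which jitter from frame to frame, so applications usually smooth them over time before acting on them. A minimal exponential-smoothing sketch (the emotion names, scores and frames below are invented for illustration, not Affectiva’s actual API):

```python
def smooth_scores(frames, alpha=0.3):
    """Exponentially smooth per-frame emotion scores.
    alpha closer to 1 reacts faster; closer to 0 smooths harder."""
    smoothed, state = [], None
    for scores in frames:
        if state is None:
            state = dict(scores)
        else:
            state = {k: alpha * scores[k] + (1 - alpha) * state[k]
                     for k in state}
        smoothed.append(state)
    return smoothed

# Invented frame-level scores from a hypothetical face tracker;
# the middle frame is a noisy outlier that smoothing damps.
frames = [{"joy": 0.9, "surprise": 0.1},
          {"joy": 0.1, "surprise": 0.8},
          {"joy": 0.9, "surprise": 0.1}]
result = smooth_scores(frames)
print(round(result[-1]["joy"], 3))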

Similarly, Real Eyes offers a commercial product that analyzes people’s reactions to what they see on their screens. Here’s their video about real-time facial coding: https://www.youtube.com/watch?v=3WF4eG1s44U&list=PL1F3V-C5KJZAxl8OGF0NjbG8WHTTE6_hX

The two previous products have obvious application to web marketing and content. So much so, that some predict a future in which affective technology creates an “emotion economy”.

But affective computing has longer term applications, most especially in robotics. As human-like robots, especially for an aging population in Asia, begin to be sold as personal assistants and companions, they will need to have the kind of emotional intelligence about humans that other human beings mostly have already. That’s likely to be where we will see some of the most impactful uses of affective computing.

Over the last couple of years, Japan’s Softbank has developed Pepper, which they describe as a “social robot” since it aims to recognize human emotion and shows its own emotions. Here’s the French software company behind Pepper – https://www.youtube.com/watch?v=nQFgGS8AAN0

There are others doing the same thing. At Nanyang Technological University, Singapore, another social robot, called Nadine, is being developed. See https://www.youtube.com/watch?v=pXg33S3U_Oc

Both these social robots and affective computing overall still need much development, but already you can sense the importance of this technology.

© 2017 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/157863647250/affective-computing]

Run Of The River?

Dams that produce hydropower are among the longest-established renewable energy sources in the US. The American industrial revolution started in places, like Massachusetts, with abundant free-flowing rivers that were tapped for their energy to power early factories.

Hydropower is still the largest source of renewable energy, accounting for a bit under half of the total.

A few years ago, I was involved with a project that was intended to revive one of those early industrial cities, Holyoke, Massachusetts. The city still had one of the few operating dams left and it supplied local electric power at a significant discount compared to elsewhere in the state. So the idea developed of creating local jobs by building a data center in Holyoke as a remote cloud location for major universities and businesses in the Boston area. (Driving distance between the two is about 90 miles.)

Putting aside whether a data center can be a significant job creator like old-time car plants, it struck me that the state as a whole would benefit by using the water resources there, thus bringing down a relatively high cost for electricity in a digital age. Of course, river resources are present in many other states, particularly east of the Mississippi River and in the northwest.

Thus, at one meeting with representatives of the research facilities of Harvard and MIT, I asked a simple question. When was the last time that engineering or science researchers took a serious look at using better materials or designs to improve the efficiency of the turbines that the water flows through or finding replacements for turbines (like the VIVACE hydrokinetic energy converter shown here)?

Despite or maybe because of the Three Gorges Dam project in China and similar projects, hydropower from dams has diminished in popularity in the face of various environmental concerns. Yet the rivers still flow and contain an enormous amount of energy and giant dams don’t have to be the only way to capture that energy.

With that in mind, I also asked if they had looked at the possibility of designing smaller turbines so that smaller rivers could be tapped without traditional dams. Some variations of this idea are called “run of the river”. (Because of the variability of river flows, this version of hydropower doesn’t produce a consistent level of energy like a coal-burning plant. As with other renewables, it too will need more efficient and cost-effective means of storing electricity – batteries, super-capacitors, etc.)
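
The kinetic power available to a run-of-the-river turbine follows the same formula as wind power, P = ½·ρ·A·v³·η, and water’s density (about 1000 kg/m³) makes even modest flows surprisingly energetic. A quick back-of-the-envelope estimate (the rotor size, flow speed and efficiency below are illustrative, not from any particular product):

```python
def hydrokinetic_power_kw(area_m2, speed_ms, efficiency=0.35,
                          rho=1000.0):
    """Kinetic power captured from flowing water, in kilowatts:
    P = 0.5 * rho * A * v^3 * efficiency."""
    return 0.5 * rho * area_m2 * speed_ms ** 3 * efficiency / 1000.0

# A 2 m^2 rotor in a river flowing at 2 m/s, at 35% overall efficiency:
print(round(hydrokinetic_power_kw(2.0, 2.0), 2))
```

The cubic dependence on flow speed is also why output varies so much with river conditions, which is exactly the storage problem noted above.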

The quizzical stares I received could most diplomatically be translated as “Why would we do that? Hydraulic engineering is centuries old and well established.” However, the sciences of materials and fluid dynamics are dramatically better now than they were even seventy or a hundred years ago, and they call for a much greater effort in new hydraulic engineering than has taken place. Periodically, the experts publicly say this, as in “Hydraulic engineering in the 21st century: Where to?”

As it turned out, a year or two later in 2011/2012, there was a peak of activity in hydropower experiments in the UK, Germany, Canada, Japan, and India. Here are just some of the more interesting examples:

  • Halliday Hydropower’s Hydroscrew
  • The Hydro Cat, free floating
  • Blue Freedom’s “world’s smallest hydropower plant”, intended primarily for small mobile devices; as their slogan says, “1 hour of Blue Freedom in the river. 10 hours of power for your smartphone”
  • In an unusual twist on this topic, Lucid Energy harnessed the power of water flowing through urban pipes.

These were interesting prototypes, experiments and small businesses, but without the kind of academic and financial support seen in the IT industry, they don’t seem to have the necessary scale to make an impact – notwithstanding the release two months ago of a Hydropower Vision report by the US Department of Energy. I’d love to be corrected on this observation.

Perhaps this is another example of a disruptive technology, in the way that its creator, Clayton Christensen, originally defined the term. Disruptive technologies start to be used at the low end of the market where people have few or no other choices – places like India and the backcountry of advanced economies which are poorly served by the electrical grid, if at all. Only later, possibly, will these products be able to go upmarket.

Too much of the discussion about disruptive technologies has been limited to information technology. There can be disruptive technologies in other fields to solve problems that are just as important, perhaps more important, than the ones that app programmers solve – like renewable energy.

Only time will tell if the technology and markets develop sufficiently so that run of the river and similar hydropower becomes one of the successful disruptive technologies.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/151053869979/run-of-the-river]

Eye Tech

Last month, I wrote about Head Tech – technology that can be worn on the head and used to control the world about us. Most of those products act as an interface between our brain waves and devices that are managed by computers that are reading our brain waves.

The other related area of Head Tech recognizes the major role of our eyes literally as windows to the world we inhabit.

Google may have officially sidelined its Glass product, but its uses and products like it continue to be developed by a number of companies wanting to demonstrate the potential of the idea in a better way than Google did. There are dozens of examples, but, to start, consider these three.

Carl Zeiss’s Smart Optics subsidiary accomplished the difficult technical task of embedding the display in what looks to everyone like a pair of regular curved eyeglasses. Oh, and they could even be glasses that provide vision correction. Zeiss is continuing to perfect the display while trying to figure out the business challenge of bringing this to market.

You can see a video with the leader of the project at https://youtu.be/MSUqt8M0wdo and a report from this year’s CES at https://www.youtube.com/watch?v=YX5GWjKi7fc

Also offering something that looks like regular glasses and is not a fashion no-no is LaForge Optical’s Shima, which has an embedded chip so it can display information from any app on your smartphone. It’s in pre-order now for shipment next year, but you can see what they’re offering in this video. A more popular video provides their take on the history of eye glasses.

While Epson is not striving to devise something fashionable, it is making its augmented reality glasses much lighter. This video shows the new Moverio BT-300 which is scheduled to be released in a few months.

Epson is also tying these glasses to a variety of interesting, mostly non-consumer, applications. Last week at the Interdrone Conference, they announced a partnership with one of the leading drone companies, DJI, to better integrate the visuals coming from the unmanned aerial camera with the glasses.

DAQRI is bringing to market next month an updated version of its Smart Helmet for more dangerous industrial environments, like field engineering.  Because it is so much more than glasses, they can add all sorts of features, like thermal imaging. It is a high end, specialized device, and has a price to match.

At a fraction of that price, Metavision has developed and will release “soon” its second generation augmented reality headset, the Meta 2. Its CEO’s TED talk will give you a good sense of Metavision’s ambitions with this product.

Without a headset, Augmenta has added recognition of gestures to the capabilities of glasses from companies like Epson. For example, you can press on an imaginary dial pad, as this little video demonstrates.

This reminds me a bit of the use of eye tracking from Tobii that I’ve included in presentations for the last couple of years. While Tobii also sells a set of glasses, their emphasis is on tracking where your eyes focus to determine your choices.

One of the nice things about Tobii’s work is that it is not limited to glasses. For example, their EyeX works with laptops as can be seen in this video. This is a natural extension of a gamer’s world.

Which gets us to a good question: even if they’re less geeky looking than Google’s product, why do we need to wear glasses at all? Among other companies and researchers, Sony has an answer for that – smart, technology-embedded contact lenses. But Sony also wants the contact lens to enable you to take photos and videos without any other equipment, as they hope to do with a new patent.

So we have HeadTech and EyeTech (not to mention the much longer established EarTech) and who knows what’s next!

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/150401402048/eye-tech]

Head Tech

The discussion about wearable technology recently has mostly been about various devices, like watches and bands, that we wear on our wrists to communicate, measure our health, etc. But from a technological perspective, if not yet a commercial one, these are old hat.

How about some new hats?  Like this one …

These more interesting – and maybe a bit more eerie – wearables are what I’d call “Head Tech”. That’s technology that we place on our heads.

Last year, following along the lines of various universities such as the University of Minnesota, the Portuguese firm Tekever demonstrated Brainflight, which enabled a person wearing an electroencephalogram (EEG) skull cap with more than a hundred electrodes to control the flight of a drone with their thoughts. Here’s the BBC report – https://www.youtube.com/watch?v=8LuImMOZOo0
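
Systems like these don’t read “thoughts” directly; they classify patterns in the EEG signal, very often starting from the power in standard frequency bands such as alpha (8–12 Hz). A minimal band-power sketch on a synthetic signal (the sampling rate and band limits are conventional; the “EEG” here is just a pure sine wave, not real brain data):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` (a list of samples at `fs` Hz) in the band
    [f_lo, f_hi] Hz, via a direct DFT over just that band's bins."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

# One second of synthetic "EEG": a 10 Hz (alpha-band) oscillation.
fs = 128
sig = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
alpha = band_power(sig, fs, 8, 12)
beta = band_power(sig, fs, 13, 30)
print(alpha > beta)
```

A real brain-computer interface would compute features like these over many channels and sliding windows, then feed them to a trained classifier that maps patterns to commands like “climb” or “turn”.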

This has become a fascination of so many engineers that a few months ago the University of Florida held the first brain-controlled drone race.  Its larger goal was to popularize the use of brain-computer interfaces.

Of course, anything that gets more popular faces its critics and satirists. So one of GE’s more memorable commercials is called BrainDrone – https://www.youtube.com/watch?v=G0-UjJpRguM

Not to be outdone, a couple of weeks ago, the Human-Oriented Robotics and Control Lab at Arizona State University unveiled a system to use that approach to control not only a single drone, but a swarm of drones. You can see an explanation in this video – https://vimeo.com/173548439

While drones have their recreational and surveillance uses, they’re only one example. Another piece of Head Tech gear comes from Smartstones, working with Emotiv’s less medical-looking EEG.

It enables people who are unable to speak to use their minds to communicate.  As they describe it:

“By pairing our revolutionary sensory communication app :prose with an EEG headset powered by Emotiv, we are enabling a thought-to-speech solution that is affordable, accessible and mobile for the first time ever. Users can record and command up to 24 unique phrases that can be spoken aloud in any language.”

There’s a very touching video here – https://vimeo.com/163235266

Emotiv has other ambitious plans for their product as they relate in this video –

https://vimeo.com/159560626

The geekiness of some of these may remind you of Google Glass. Unlike Google Glass, though, they offer dramatic value for people who have special but critical needs. For that reason, I expect some version of these will be developed further and will succeed.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/148399117751/head-tech]

Updates Of Earlier Reports

Some of my blog posts seem to be ahead of news reported elsewhere, which is ok with me, but also means that it might be helpful to list some interesting articles that continue past stories. Here are some recent examples:

  • My two-part series in March on the Coding Craze questioned the long-term value of the plan by many public officials to teach computer coding. While the general news media continue to talk and write about coding as an elixir for your career, WIRED Magazine recently ran a cover story titled “The End of Code”. See their web piece at http://www.wired.com/2016/05/the-end-of-code/
  • I’ve written several posts on one of my special interests – the related subjects of mixed reality, virtual reality, and blended physical and digital spaces. I noted sports as a natural for this, including highlighting the Trilite project last year. So it was great to read the announcement in the last few days that NBC and Samsung are collaborating to offer some of the Rio Olympics on Samsung VR gear.
  • We’re all inundated with talk about how “things are changing faster than ever before” in our 21st-century world. Taking an unconventional view, in 2011, I asked “Telegraph vs. Internet: Which Had Greater Impact?” My argument was that the first half of the 19th century had much more dramatic changes, especially in speeding up communications. In what I think is the first attempt to question the fastest-ever-changes meme, the New York Times Magazine also recently elaborated on this theme in an Upshot article titled “What Was the Greatest Era for Innovation? A Brief Guided Tour”.
  • In “Art and the Imitation Game”, March 2015, I wrote about how artificial intelligence is stepping into creative activities, like writing and painting. While there have been many articles on this subject since, one of the most intriguing was from the newspaper in the city with more attorneys per capita than anywhere else, as the Washington Post invited us to “Meet ‘Ross,’ the newly hired legal robot”.
  • I wrote about the White House Rural Telehealth meeting in April this year. The New York Times later had a report on the rollout of telehealth to the tens of millions of customers of Anthem, under the American Well label.
  • Going back several years, in both that post and one on “The Decentralization Of Health Care” about a year and a half ago, I touched on the difficulties posed by the fee-for-service health care system in the US and instead wondered if we would be better off paying health systems a yearly fee to keep us healthy – thus aligning our personal interests with those of the system. So it has been interesting to see in April that there was movement on this by the Centers for Medicare and Medicaid Services (CMS), the Federal government’s health insurance agency. Here are just some examples:
  1. The End of Fee For Service?
  2. CMS launches largest-ever multi-payer initiative to improve primary care in America
  3. Obamacare [SIC] to launch new payment scheme

That’s it for now.  I’ll try to update other posts when there’s news.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/146994917695/updates-of-earlier-reports]

Was It The Year Of The Wearables?

At the very beginning of this year, I posted about the popular theme then that 2015 would be the year of wearable technology. Has it been?

Last week, Fortune magazine reported:

“While the Apple Watch has carved out a sizable chunk of the wearable market share this year, the number-one manufacturer of these devices, Fitbit, remains the same. According to IDC’s latest numbers, in the third quarter, overall wearable device shipments were as high as 21 million units worldwide — a growth of 197.6% year-over-year. And this year’s launch of the Apple Watch has contributed to the increase, with IDC reporting 3.9 million units of the iPhone-connected device shipping in the third quarter.”

So the sales of Fitbit and the Apple Watch are good. I even received a Fitbit as a present and wear it — although not all the time. (I’m also not sure that carrying my iPhone on my belt counts as a wearable 🙂)

It’s fair to say that we’re still not at the point where most people are wearing these devices. The numbers are bound to increase, though, as the products improve and new ones, like the Oculus Rift, become available.

Nevertheless, it was a year of great creativity by inventors and designers of new, sometimes even fun, wearables. Many have only been made public in the last month or so. Let’s take a look.

Glasses — Augmented Reality Devices

While Google withdrew its Glass product, some interesting applications arose anyway. Last month, the Canadian Journal of Cardiology posted online a proof-of-concept study, in which the physicians found:

“The projection of 3-dimensional computed tomographic reconstructions onto the screen of virtual reality glass allowed the operators to clearly visualize the distal coronary vessel.”

Also, a few weeks ago, Volkswagen announced that, after a pilot test phase, they would equip the workers in their Wolfsburg plant with “3-D smart glasses”. One of the plant executives noted “The 3D smart glasses take cooperation between humans and systems to a new level.”

Of course, one of the issues that Google ran into is that these glasses look geeky. To address that problem, a spinoff of VTT in Finland has developed and will release an alternative little screen that fits onto regular eyeglasses and provides a virtual display equivalent to 60 inches.

The Wall St. Journal reported last month that NEC “has created a user interface which can display an augmented-reality keyboard on a person’s forearm, using eyeglasses and a smart watch”, thus extending both technologies. (You can see a video here.)

Smart Clothing

Perhaps the most interesting, but least reported, products are essentially smart clothing — truly wearable technology 😉

The engineers at Thalmic Labs continue to develop their Myo armband, which understands your gestures to control the actions of a computer. It had its general release this year and the company is encouraging an app market for it.

They were not alone. Among others, Apotact Labs completed a successful Kickstarter campaign at the end of last month for its Gest product. They promise it will track gestures much more accurately by monitoring your fingers and hands, as shown here.

Taking gesture tracking into a somewhat different direction, researchers at the University of Auckland wrote a paper about their

“soft, flexible and stretchable keyboard made from a dielectric elastomer sensor sheet … [that] can detect touch in two dimensions, programmable to increase the number of keys and into different layouts, all without adding any new wires, connections or modifying the hardware.”

In May at their annual I/O conference, Google released a video and information about its Project Jacquard, “a new system for weaving technology into fabric, transforming everyday objects, like clothes, into interactive surfaces.” They apparently have a partnership with Levi Strauss to use this fabric, so maybe someday you won’t ever have to take your smartphone out of the back pocket of your jeans.

Then in June, the Engineering School of the University of Tokyo announced that it had

“developed a new ink that can be printed on textiles in a single step to form highly conductive and stretchable connections. This new functional ink will enable electronic apparel such as sportswear and underwear incorporating sensing devices for measuring a range of biological indicators such as heart rate and muscle contraction.”

You can see their video about it here.

Sensoria, best known for helping runners with its smart sock, teamed up with Orthotics Holdings to announce a new product for 2016 — the Internet-connected Smart Moore Balance Brace, which is intended to help seniors avoid falling. Falling is a significant issue for about a third of seniors every year, and it often happens outside the sight of physicians, who can only guess what might have happened. With the Internet connection, this device can report various key aspects of a senior’s walking.

The Next Generation May Already Be Starting

While the wearables market has not yet peaked, Reuters already had an article that predicted, as its headline said: “As Sensors Shrink, Watch As ‘Wearables’ Disappear”.

It opened up this way:

“Forget ‘wearables’… The next big thing in mobile devices: ‘disappearables’.

“Even as the new Apple Watch piques consumer interest in wrist-worn devices, the pace of innovation and the tumbling cost, and size, of components will make wearables smaller — so small, some in the industry say, that no one will see them.

“Within five years, wearables like the Watch could be overtaken by hearables — devices with tiny chips and sensors that can fit inside your ear. They, in turn, could be superseded by disappearables — technology tucked inside your clothing, or even inside your body.”

I’ll follow up on that last point in a future post, but I’m taking off for the holidays, so this is my last post for the year. I wish all my readers a very happy holiday season and a great new year!

© 2015 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/135319250466/was-it-the-year-of-the-wearables]

Reality Becomes More Blended

I’ve blogged before about the blending of the digital and physical to create new kinds of hybrid spaces – here, here, here, here and here.   This blending opens up all kinds of possibilities for new experiences in cities, in entertainment, education, and elsewhere.

(We, at the Gotham Innovation Greenhouse, hope to move this forward as well in a few months – but that’s another story.)

Of course, blended reality is not yet part of the everyday lives of most people.

That hasn’t stopped creative technologists. There have been several developments this year which illustrate new kinds of blended reality.

Here are some that caught my attention.

HoloLens

Earlier this year, Microsoft announced its HoloLens and “Windows Holographic Platform”. The HoloLens is Microsoft’s version of blended reality: a headset that lets you see three-dimensional holographic images in the space around you.

image

While the obvious gaming uses have already been reported on, there are other uses, for example, enabling a person to envision what a building interior would look like from the viewpoint of someone walking inside. The latter was shown at the Architecture and Design Film Festival.

Of course, since this is a Microsoft product, there have been both enthusiastic and not-so-enthusiastic reviews, sometimes in the same magazine, as you can see if you follow the links.

Magic Leap

Of course, Microsoft doesn’t always provide the leading-edge technology, so there has been much interest – but not much information – about a Florida-based company called Magic Leap, which is going beyond the HoloLens. The company proclaims its aim to mix the physical and virtual worlds.

image

It has received investments from major competitors of Microsoft, but until two weeks ago hadn’t shown much. Then it released a video showing a person interacting with virtual objects, robots and the solar system. The company has assured everyone that the video was recorded just as it happened, without special effects.

Magic Leap’s CEO, Rony Abovitz, was reported to say:

“We are sensing the world — the floor, the people. We’re doing real-time understanding of the world, so that all these objects can know where they sit.”

While Microsoft and Magic Leap seem to be mainly focused on blending reality indoors, others are demonstrating ways that the virtual and physical can work together outdoors or both indoors and outdoors.

Samsung

In June, Samsung showed the latest generation of its see-through 55-inch OLED display, which seems able to handle high transparency even with strong light passing through it. To add to the effect, the company has combined it with Intel’s technology to see and understand what a person is doing with what’s on the screen.

image

TriLite

While Microsoft and Oculus, among others, are busy creating devices you can wear over your eyes, in a sense the real challenge is to create 3D illusions without the need for glasses. The Technical University of Vienna and TriLite announced their prototype earlier this year.

image

As they describe it:

“A sophisticated laser system sends laser beams into different directions. Therefore, different pictures are visible from different angles. The angular resolution is so fine that the left eye is presented a different picture than the right one, creating a 3D effect… 3D movies in the cinema only show two different pictures – one for each eye. The newly developed display, however, can present hundreds of pictures. Walking by the display, one can get a view of the displayed object from different sides, just like passing a real object.”
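To get a feel for how fine that angular resolution has to be: with human eyes roughly 6.5 cm apart, a viewer standing one meter from the display subtends only a few degrees between the left and right eye. Here is a quick back-of-the-envelope sketch (illustrative numbers, not the display’s published specs):

```python
import math

def eye_separation_angle_deg(viewing_distance_m, ipd_m=0.065):
    """Angle between a viewer's two eyes, as seen from the display.

    ipd_m is the interpupillary distance (about 6.5 cm on average).
    """
    return math.degrees(2 * math.atan((ipd_m / 2) / viewing_distance_m))

# At 1 m the display must steer beams more finely than ~3.7 degrees
# for each eye to receive a different picture; at 5 m, ~0.74 degrees.
print(round(eye_separation_angle_deg(1.0), 2))
print(round(eye_separation_angle_deg(5.0), 2))
```

The farther away the viewer, the finer the beam steering must be, which is why hundreds of distinct pictures are needed to cover a room full of walking observers.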

2wenty Lightworks

More ephemeral, but more dramatic, is the work of the Los Angeles light and media artist, 2wenty, who has created dancing lightworks, such as this:

image

Pixelstick

And for those of us who want to paint with light, there is Pixelstick, which contains 200 full-color LEDs controlled by a programmed SD card. Although it started as a Kickstarter campaign two years ago, it has graduated from that status.

image

There’s more background in a video at https://youtu.be/TjXvqfWfRi4
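The idea behind a programmable light-painting bar is simple enough to sketch: each column of a source image becomes one momentary “frame” of LED colors, and sweeping the bar through a long-exposure photograph paints the image in mid-air. A minimal sketch (the 200-LED figure comes from the product description; the image format here is a hypothetical list of RGB rows):

```python
def image_to_led_frames(pixels, num_leds=200):
    """Convert an image (rows x cols of RGB tuples) into per-column LED frames.

    Each frame is the list of colors the LED bar shows at one instant;
    sweeping the bar through a long exposure paints the image in the air.
    """
    rows = len(pixels)
    cols = len(pixels[0])
    frames = []
    for c in range(cols):
        column = [pixels[r][c] for r in range(rows)]
        # Resample the column to the number of LEDs on the bar.
        frame = [column[int(i * rows / num_leds)] for i in range(num_leds)]
        frames.append(frame)
    return frames

# A tiny 2x3 "image": two rows, three columns.
img = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
       [(10, 10, 10), (20, 20, 20), (30, 30, 30)]]
frames = image_to_led_frames(img, num_leds=4)
print(len(frames))  # one frame per image column
```

The timing of the sweep is left to the photographer’s hand; the device only needs to step through the frames at a steady rate.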

This blending of the physical and the virtual, of art and technology, is at the very least a lot of fun. More later on its broader significance.

© 2015 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/132476393322/reality-becomes-more-blended]

Robots Like Humans  —  Or Not?

There has been great interest in robots that seem to act like humans, not just in the movies, but also in technology news. So much so that the big debate in the robot world would seem to be how much we should program our robots to be like us.

Previously, I’ve blogged about machines that create art, poetry and even news reports. While those are all intellectual exercises that people might think “smart” machines could do, there are also robots from Japan, of course, that can dance — maybe break dance — as you can see in this video from earlier this year.

(It’s worth noting that much of this leading-edge robotics is coming from Japan, perhaps in the face of a declining and aging human population.)

Murata has made dancing robotic cheerleaders, albeit to show how to control and coordinate robots and not necessarily to set the dancing world on fire. They too have a video to demonstrate the point.

image

Some Canadians sent a robot, called hitchBOT, hitchhiking, like a college student seeing the world for the first time. More than a year ago, I blogged about its trip across Canada. Then two months ago, there were several reports about how sad it was that hitchBOT was beheaded by the criminal elements who supposedly control the streets of Philadelphia at night. The New York Times’s poignant headline was “Hitchhiking Robot, Safe in Several Countries, Meets Its End in Philadelphia”.

Later, substantial evidence came to light that media personalities were responsible. See “Was hitchBOT’s destruction part of a publicity stunt?”

In any event, to make up for the loss of hitchBOT, other Philadelphians built Philly Love Bots. Radio station WMMR promoted its own version, called Pope-Bot, in anticipation of the trip by Pope Francis. It has survived the real Pope’s trip to Philly and has even traveled around that area without incident.

image

Consider also sports, which has featured humans in contests with each other for thousands of years – albeit aided, more recently, by very advanced equipment and drugs.

Apparently, some folks now envision sports contests fought by robots doing things humans do, only better. Cody Brown, the designer known for creating the visual storytelling tool Scroll Kit, sees a different kind of story. In TIME Magazine, he suggested seven reasons “Why Robotic Sports Will One Day Rival The NFL”.

We also want robots to provide a human touch. Thinking of the needs of the elderly, RIKEN has developed “a new experimental nursing care robot, ROBEAR, which is capable of performing tasks such as lifting a patient from a bed into a wheelchair or providing assistance to a patient who is able to stand up but requires help to do so.”

image

The research staff at the Google Brain project have been developing a chatbot that can have normal conversations with people, even on subjects that don’t lend themselves to the factual answers to basic questions that are the staple of such robotic services – subjects like the meaning of life. The chatbot learned a more human style by ingesting and analyzing an enormous number of conversations between real people.

Of course, the desire to make robots and their ilk too much like humans can backfire. Witness the negative reaction to Mattel’s Talking Barbie.

Indeed, there are benefits if we don’t try to make robots in our human image – although doing so might make us feel less like gods 🙂

At Carnegie Mellon, researchers decided that maybe it didn’t make sense to put “eyes” on a robot’s head, the way human bodies do. As they announced a few days ago, they instead put the eyes into the robot’s hands, and that made the fingers much more effective.

We ought to consider that, with ever-growing intelligence, eventually robots will figure it all out themselves. Researchers at Cambridge University and the University of Zurich have laid the groundwork by developing a robotic system that evolves and improves its performance. The robotic system then changes its own software so that the next generation is better.
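The loop at the heart of such evolutionary systems is surprisingly compact: score each candidate, keep the fitter half, and refill the population with mutated copies. A toy sketch of the idea (the one-number “genome” and the fitness function here are stand-ins, not the actual Cambridge/Zurich system):

```python
import random

def evolve(fitness, population, generations=50, mutation=0.1):
    """Simple keep-the-best evolutionary loop.

    Each generation keeps the fitter half of the population and refills
    it with mutated copies, so the best candidate can only improve.
    """
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        children = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Toy task: evolve a number toward the target value 3.0.
random.seed(42)
best = evolve(lambda x: -abs(x - 3.0),
              [random.uniform(-10, 10) for _ in range(20)])
print(round(best, 1))  # converges near 3.0
```

Real evolutionary robotics replaces the single number with a whole robot design and the fitness function with measured physical performance, but the generational logic is the same.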

As the lead researcher, Dr. Fumiya Iida, said:

“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery … we want to see robots that are capable of innovation and creativity.”

And where that leads will be unpredictable, except that it isn’t likely the robots will improve themselves by copying everything we humans do.

© 2015 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/130549037815/robots-like-humans-or-not]

Better Driving?

image

Technology has become embedded in cars in all kinds of fun new ways to help drivers.

Last Friday, the New York Times had an article about Audi’s test of what might be described as a driver-assisted race car, going 120 miles per hour.

Just last month, Samsung demonstrated a way to see through a truck on country roads in Argentina.  It was intended to help a driver know when it’s safe to pass and overtake the truck.  But even those of us who get stuck in massive urban traffic jams would love the ability to see ahead.  (See the picture above.)

Another version of the same idea was developed and unveiled last month by the Universitat Politècnica de València, Spain.  They call their version EYES and you can see a report about it at https://youtu.be/eUQfalxPK0o

image

There have been variations on this theme over the last year or so, but so far the deployment of the technology hasn’t happened on real roads for regular drivers.

But Ford Motor Company announced a couple of weeks ago that it will start to equip a car this year with split-view cameras that let drivers see around corners.  It says the feature is especially useful when backing into traffic.  This is supposed to be available across its worldwide fleet of cars by 2020.

image

In the old days, when a driver had to maneuver into a tight corner, he/she asked a friend to stand outside the car and provide instructions.  Now, Land Rover is helping the driver who is alone – without friends? – to get a better view and control the car at the same time by using a smart phone app.

image

Is this all a good thing?  The New York Times had this quote in its Audi story:

“At this point, substantial effort in the automotive community is focused on developing fully autonomous driving technology,” said Karl Iagnemma, an automotive researcher at M.I.T. “Far less effort is focused on developing methods to allow a driver to intuitively and safely interact with the highly automated driving vehicle.”

Nevertheless, while these features are surely helpful, on balance, they seem to me to be transitional technologies.  (Allen Wirfs-Brock provided this helpful slide on the subject.)

image

A good example was the enhancement of controls for elevator operators when the average passenger could press the very same automated buttons.  Or, similarly, the attempt by horse-drawn carriage makers to keep up with auto makers until they finally lost the battle a hundred years ago.  Maybe Polaroid cameras were the transitional technology between film that needed to be developed at a factory and pictures you can take on your phone.

image

The warning signs are there already.  In May 2015, WIRED magazine featured this story, “Google’s Plan to Eliminate Human Driving in 5 Years”.  

Also in May, Uber and its partner, Carnegie Mellon University, did a test drive of its first autonomous vehicle.  Of course, Uber’s plans and its role in disrupting the traditional taxi industry had already led to dire predictions like this one on the website of the CBS TV station in San Francisco: “How Uber’s Autonomous Cars Will Destroy 10 Million Jobs And Reshape The Economy by 2025”.

Based on its promise last October, we should soon start to see Tesla delivering a car that “will be able to self-drive 90 percent of the time”.

Indeed, taking this idea to its extreme conclusion, the Guardian reported a few months ago that Tesla’s CEO, Elon Musk, wants to ban human driving altogether.  It quotes him as saying:

“You can’t have a person driving a two-tonne death machine”.

So while it will be fun, perhaps we’re just seeing the last gasp of human driving.

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/124749211027/better-driving]

Libraries & Open Publishing

My last post was about the fight over intellectual property.  A few weeks before that I wrote about what a book is in a digital age and suggested that librarians could become the equivalent of DJs for books.

Pulling those two themes together, this post is about what some libraries are already doing that can shift the balance in book publishing.

But, first a bit of history.  When public libraries were first established well over a hundred years ago, one of their primary responsibilities was to purchase books on behalf of their community.  Then the community members could share all these books, without having to buy separate copies.

Until the mid-20th Century, this worked in favor of publishers, since libraries were, in general, their most reliable market for books.  Libraries also helped build markets of readers for publishers to sell to; many people eventually bought the books they had borrowed because they liked them so much.  The library was a kind of try-and-buy location.

As the industry grew, selling direct to an ever more educated public in the latter half of the 20th Century, many book publishers started thinking that libraries reduced their sales, rather than enhancing them.  But that was a battle the publishers had lost long ago and couldn’t do much about.  

Moreover, it is a moot point in this century, when e-books have overtaken traditional print book publishing.  Even if that growth trend has slowed a bit recently, the battle between publishers and libraries has been renewed around e-books, not printed books.

The traditional publishers – the Big 5 – have taken an especially restrictive approach to e-books, perhaps in the hope of turning away from the historical role that public libraries have played for printed books.  Until less than two years ago, some publishers even refused to sell e-books to libraries.  They still restrict the number of times an e-book can be lent or charge extraordinary prices for them.

This pattern continues despite some good arguments that publishers could benefit from a more supportive relationship with libraries, as laid out by the marketing expert, David Vinjamuri.  

But any significant change, like e-books, can be a two-edged sword.  They may be an opportunity for big publishers to change the rules.  But they are also an opportunity for libraries.  

Unlike printed books, there is effectively no limit on how many e-books a library can store.  And librarians have noticed that many of their patrons are writing e-books.  Much of the spectacular growth in e-books has been among self-published authors.  (Amazon even makes this easy with its CreateSpace service.)

With this background, a movement has developed among libraries to become publishing platforms for authors or, at least, to partner with self-publishing services.

One of the candidates in the election a few days ago for president of the American Library Association was Jamie LaRue who, although he narrowly lost, has built his reputation in large part as a leader of the library publishing movement.

There are already several interesting examples across the country.  The Los Gatos Public Library has joined with the Smashwords self-publishing company.  The Provincetown, Massachusetts library – proudly “Ranked #1 in the US by Library Journal” – has created its own self-publishing agency, Provincetown Press.

The much larger Los Angeles Public Library is using the Self-e platform from Library Journal and BiblioBoard.  The February 2015 issue of Library Journal quotes John Szabo, LAPL’s director and one of the most innovative national library leaders:

“We are and will continue to be a place for content creation… It’s a huge role for libraries. … I want to see our authors not just all over California but circulating from Pascagoula, MS, to Keokuk, IA.”

Too often, news of new library services does not get widely publicized and is only seen by those already patronizing libraries.  So it was helpful that LAPL’s platform for local authors was reported a couple of weeks ago in a publication they might well read – LA Weekly.  

With the Internet enabling easier collaboration and co-creation than ever before, as I’ve noted in this blog, we are also seeing examples of self-publishing that go beyond an individual author.  

Topeka Community Novel Project describes its ideal: “A community novel is one that is collaboratively conceptualized, written, illustrated, narrated, edited and published by members of your community.”

image

Publishing by academic libraries and other non-traditional publishers is an increasing factor in research, as well.  While it publishes papers that are peer-reviewed as in traditional journals, PLOS (Public Library of Science) is perhaps the best known adherent of “open access” publishing.  Open access means the articles are available online, free to read, with no restrictions on their use.

Academic journals and books have been very expensive and not all of that cost can be eliminated by this new approach.  For example, the peer review process still has to be managed.  However, the cost is much lower.   PLOS charges authors a relatively minimal fee.

Rebecca Kennison of Columbia University Libraries and Lisa Norberg of the Barnard College Library have plans to extend the PLOS model, with a more cooperative funding arrangement, to “A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences”.

Overall, all of the initiatives that I’ve highlighted here are a part of a digital age trend in which we’ll see more librarians going beyond being mere collectors of big publishing companies’ books to being curators and creators of content. 

image

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/118945402119/libraries-open-publishing]

New Ways You Will Interact With Cyberspace?

Last week I gave some examples of wearable technology.  This week the focus goes beyond wearables to other ways of interacting with cyberspace.  And, as I said last week, some or all of these products may never become commercially viable, but they give you an idea of where things are headed.

The folks at Ubi announced two months ago that they have left the crowdfunding and development stage and released their “ubiquitous computer” (ubiquitous at least within eight feet).  There’s no display screen, no mouse, no keyboard – you interact with Ubi only via your voice and its voice.

image

Sweden’s ShortCutLabs have designed the Flic button to be used as an all-purpose remote control for all the things in your house that could be controlled remotely.  That includes your smart phone, your lights, and even somehow ordering a pizza.  See their Indiegogo video at http://youtu.be/MDsjBh2xOgQ

image

Of course, if you want a really universal, but physical, remote control, then you’ll have to depend on your hand.  With that in mind, Onecue proposes that you control “your media and smart home devices using simple touch-free gestures” of your hand.  Their pre-order video is at http://youtu.be/_1QnnRn47r0

From Berlin, Senic has offered up Flow as a more general replacement for the computer mouse, also based on gestures.  What’s intriguing about their work is that they have built dozens of interfaces to various products and opened it up for use by other developers. Their Indiegogo video is at http://vimeo.com/112589339

image

Three months ago, University of Washington researchers demonstrated how hand gestures could control your smart phone.

And, even as driverless cars are being perfected, there is still interest in enhancing the blended virtual and physical experience of humans driving cars.   For example, Visteon, a long time supplier of “cockpit electronics” to the auto industry, recently announced its development of the HMeye Cockpit, which it describes as:

“an automotive cockpit concept demonstrating how drivers can select certain controls through eye movement and head direction. Hidden eye-tracking cameras capture this data to deliver an advanced human-machine interaction (HMI).”

Intel has been working more generally on smart cameras with depth sensing.  Its RealSense technology will start to appear in various applications early this year, some of which Intel showed off at the CES show last week, as reported by the Verge.

Haptics – touching and feeling your connection with technology – is one of the newer frontiers of user interface research.

From the Shinoda-Makino lab at the University of Tokyo comes HaptoMime, a “Mid-air Haptic Virtual Touch Panel” that gives tactile feedback.  Using ultrasound, it gives the user the sense of interacting with a floating holographic-like image.  You can read more in New Scientist and see the lab’s video at https://www.youtube.com/watch?v=uARGRlpCWg8

Finally, a few weeks ago, computer scientists at the University of Bristol announced their latest advance in enhancing the real world:

“Technology has changed rapidly over the last few years with touch feedback, known as haptics, being used in entertainment, rehabilitation and even surgical training. New research, using ultrasound, has developed a virtual 3D haptic shape that can be seen and felt.”

You can see their demonstration at http://youtu.be/4O94zKHSgMU

image

These same scientists two months ago also announced a clever use of mirrors:

“In a museum, people in front of a cabinet would see the reflection of their fingers inside the cabinet overlapping the exact same point behind the glass. If this glass is at the front of a museum cabinet, every visitor would see the exhibits their reflection is touching and pop-up windows could show additional information about the pieces being touched.”

The mouse and keyboard are so last century!

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/108077927996/new-ways-you-will-interact-with-cyberspace]

The Year Of Wearable Technology?

Readers of this blog know that I’ve been tracking the various ways that the now-traditional setup of screen/keyboard/mouse/computer is being replaced in a world where the network and computing is ubiquitous. 

This post reviews some of the more interesting recent ideas – proposed and realized – about how we’ll be interacting with cyberspace.  Obviously these technologies are still being perfected and some may not ever become commercially viable, but they give you an idea of where things are headed.

Let’s start with wearable technology, which I first wrote about more than a year ago.  The major tech research firm, Forrester, flatly declares that “In 2015, wearables will hit mass market”.   Moreover, as reported by Fierce Mobile, “Gartner Predicts By 2017, 30 Percent of Smart Wearables Will Be Inconspicuous to the Eye”. 

The Mota Smart Ring is just one of a number of examples of wearable tech, including jewelry.  The ring on your finger becomes a communications medium for your smartphone, notifying you of new social media items, messages and the like.  Their pre-order video is at http://youtu.be/q5UbWcLmFn4

The latest (if as yet unrealized) vision of using your body as an interface comes from Cicret Bracelet which wants you to “make your skin your new tablet” as you can see in this video or in this picture:

image

Sometimes your hands and eyes are busy with other duties, so you need a way to see things without moving your line of sight.  That’s partly the idea behind Google Glass and a series of heads-up displays for various vehicles that have been developed, and not generally successfully sold, over the past few years.  The latest comes from the British company, Motorcycle Information System Technologies, with its BikeHUD, a heads-up display for motorcyclists.  As they put it:

“When we ride, there’s no room for distractions … We created BIKEHUD to enable us to keep our heads UP at all times. As bikers ourselves, we decided we should be watching the miles, not the dials.”

The real fun, of course, is playing with an even more virtual world.  Dexta Robotics unveiled their Dexmo exoskeleton for your hand so, among other things, you can better control your avatar in cyberspace.

“Dexmo is a wearable mechanical exoskeleton that captures your hand motion as well as providing you with force feedback. It breaks the barrier between the digital and real world and gives you a sense of touch.”

One of their more dramatic pre-order videos, showing bomb disposal, is at http://youtu.be/B1ZQSoBAP7o

image

Finally, if you think that Google Glass is geek wear, then consider Sony’s alternative – clip-ons, yes, like the clip-on sunshades of yore.  Sony calls this SmartEyeglass Attach!, as it attaches to regular (or shaded) glasses.

image

In its announcement, Sony points out that this product uses:

“High-Resolution Color OLED Microdisplay, a Micro-Optical Unit that brings out the full potential of the display’s high image quality, and a miniaturized control board with arithmetic processing capabilities on par with smartphones that was made possible by high-density packaging technology.”

See a Financial Times reporter trying out the SmartEyeglass at http://youtu.be/XY4uqG2f5qc

Although it’s not likely a response to Sony, Google has announced Google Cardboard so that you can “experience virtual reality in a simple, fun, and inexpensive way”.   The product is accurately named.  It is, after all, a cardboard box.

image

The Google Cardboard website has so many wordplays that it comes across like a prank.  But who knows?  Maybe the latest technology that people will be wearing will be a cardboard box 😉

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/107410721082/the-year-of-wearable-technology]

Technology Gets More Personal?

This is the last of my end of summer highlights of interesting tech news.  Aside from being interesting, perhaps they also illustrate how technology is getting more personal now.

  • The creative, Munich-based augmented reality company, Metaio, showed off a way it can turn any surface into an augmented reality touchscreen.  The company notes that this is its:

vision of the near future for wearable computing user interfaces. By fusing information from an infrared and standard camera, nearly any surface can be transformed into a touch screen.

image

  • Also from Germany, the Fraunhofer Institute for Integrated Circuits announced a few days ago an app for Google Glass that analyzes what its camera sees and assesses the emotional state of the person in front of the Glass wearer.  It even takes a guess at the person’s age.  It’s an extension of their previous work, offered as SHORE technology.  There’s a video demonstrating this at http://youtu.be/Suc5B79qjfE

image

image

  • As part of its RISE basketball tour in China, Nike unveiled the LED basketball court to train athletes.  This new facility in Shanghai, called the House of Mamba, has motion sensors capturing the actions of the players and LED displays providing direction on the floor.  There’s a picture below, but to see it at work, you should watch the video at http://youtu.be/u2YhDQtncK8

image

  • In another example of the blending of the virtual and physical in urban environments, there’s Soofa’s urban hub.  As they describe it:

a solar-powered bench that provides you with free outdoor charging and location-based information like air quality and noise levels by uploading environmental sensor data to soofa.co. The smart urban furniture was developed by Changing Environments, a MIT Media Lab spin-off.

  • Toshiba Corporation announced that it will “add a new dimension to its healthcare business by starting production of pesticide-free, long-life vegetables in a closed-type plant factory that operates under almost aseptic conditions … and will start shipping lettuce, baby leaf greens, spinach, mizuna and other vegetables in the second quarter of FY2014.”  http://www.toshiba.co.jp/about/press/2014_05/pr1501.htm

That’s it for reports from around the globe.  Next week back to analysis and questions about where we’re headed.

© 2014 Norman Jacknis

[http://njacknis.tumblr.com/post/96538933986/technology-gets-more-personal]

New Energy?

This is the second of my August posts that review some interesting and unusual tech news items about various subjects I’ve blogged about before. 

Recently, there have been frequent announcements about developments in new and alternative, yet sustainable, energy. 

Among other developments in batteries more efficient than the traditional lithium-ion battery, there is the Ryden battery, whose producer says it is environmentally sustainable (carbon, not rare earths), supports an electric car with a 300-mile range and charges 20 times faster than lithium-ion batteries.  Their May announcement adds:

“Power Japan Plus today launched a new battery technology – the Ryden dual carbon battery. … The Ryden battery makes use of a completely unique chemistry, with both the anode and the cathode made of carbon. … [It is the] first ever high performance battery that meets consumer lifecycle demand, rated for more than 3,000 charge/discharge cycles.”

There’s a video explaining this more at http://youtu.be/mWPgnbRYNRM

image

And Modern Farmer magazine had a story this month about the use of store-bought spinach as fuel for cars.  But before kids tell mom that spinach is too valuable to be used as food, read on:

In a recent study, an international team of chemists and physicists have taken the first “snapshots” of photosynthesis in action—the process plants use to convert light into chemical energy. … In experiments recently documented in Nature, the scientists shield spinach leaves they buy at the market in a cool, protected room where a sun-like laser activates photosynthesis. …

Using lasers, X-rays, and some spinach, the team has created the first-ever images of the water-splitting process that leads to plant energy. …  Once scientists get a handle on exactly how photosynthesis happens, they’ll recreate it using other technology to create what’s called an “artificial leaf” which could convert solar rays into cheap, renewable fuel.

Finally, for situations that don’t require mobile power, there’s a new kind of wind turbine unveiled a couple of months ago by some Dutch engineers.  Unlike the blades we see in wind farms, this turbine uses a screw-pump design, originally conceived of by the ancient Archimedes – which is also the name of the firm that makes this product.

For under US $6,000, the company says its Liam Urban Wind Turbine is as much as three times more efficient than traditional wind energy, perhaps the most efficient wind turbine yet.  And it does all this without the whining noise traditionally associated with turbines.  It kind of looks like a big pinwheel.

There’s a good video – recorded from a drone – of one of these in operation at https://www.youtube.com/watch?v=H5t77JwkjUY

image

image

© 2014 Norman Jacknis

[http://njacknis.tumblr.com/post/95912557986/new-energy]

Expanding Communications?

In something of an annual August tradition, I’ll review some interesting tech news items about various subjects I’ve blogged about before.  This will be the first of a couple of posts and will focus on some of the developments that are expanding bandwidth both in capacity and in coverage.

Considering basic physics, there are theoretical limits to how much data can be sent over the air.  That has led many people, myself included, to think that wireless data would not be sufficient for the video and other high-bandwidth applications that people have come to expect.  But the wireless phone companies have successfully increased their capacity for users over the last few years.
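That theoretical limit is the Shannon–Hartley channel capacity: the maximum error-free bit rate over a noisy channel of a given bandwidth.  A quick back-of-the-envelope sketch shows why fixed spectrum constrains throughput (the 20 MHz channel width and 30 dB signal-to-noise ratio below are my own illustrative numbers, not figures from any of the announcements discussed here):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at 30 dB SNR (i.e., a linear SNR of 1000):
cap = shannon_capacity_bps(20e6, 1000.0)
print(f"{cap / 1e6:.0f} Mbps")  # roughly 199 Mbps
```

Carriers get around this ceiling not by beating physics but by using more spectrum, smaller cells, and smarter encoding that pushes closer to the limit.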

And various technologists are developing even greater speeds for electronic communications.

A couple of months ago, the Chinese company Huawei announced that it had demonstrated, in a lab, Wi-Fi with a 10 Gbps data transfer rate.  See

http://www.huawei.com/ilink/en/about-huawei/newsroom/press-release/HW_341651

And a few weeks ago, scientists at the Technical University of Denmark (DTU Fotonik) reached speeds of 43 terabits per second with a single laser, which beat the previous world record of 26 terabits per second set at the Karlsruhe Institute of Technology in Germany.  See more at http://www.dtu.dk/english/News/2014/07/Verdensrekord-i-dataoverfoersel-paa-danske-haender-igen

image

They describe the significance of this achievement this way:

“The worldwide competition in data speed is contributing to developing the technology intended to accommodate the immense growth of data traffic on the internet, which is estimated to be growing by 40–50 per cent annually.

“What is more, emissions linked to the total energy consumption of the internet as a whole currently correspond to more than two per cent of the global man-made carbon emissions—which puts the internet on a par with the transport industry (aircraft, shipping etc.).

“However, these other industries are not growing by 40 per cent a year. It is therefore essential to identify solutions for the internet that make significant reductions in energy consumption while simultaneously expanding the bandwidth.

“This is precisely what the DTU team has demonstrated with its latest world record. DTU researchers have previously helped achieve the highest combined data transmission speed in the world—an incredible 1 petabit per second—although this involved using hundreds of lasers.”
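To put that 40–50 per cent annual growth figure in perspective, compounding makes it dramatic over even a single decade (the ten-year horizon below is my own illustrative choice, not from the DTU announcement):

```python
def growth_factor(annual_rate: float, years: int) -> float:
    """Total traffic multiple after compounding at a fixed annual growth rate."""
    return (1 + annual_rate) ** years

# At 40% per year, internet traffic grows roughly 29x in a decade;
# at 50% per year, roughly 58x.
print(round(growth_factor(0.40, 10)))  # 29
print(round(growth_factor(0.50, 10)))  # 58
```

That is why the researchers stress energy efficiency alongside raw speed: a network that merely scales its power use with traffic would quickly become unsustainable.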

While the speed limit of communications is dramatically expanding, there are, of course, many people in the world who still need basic broadband – even in rural areas of developed nations or anywhere that has been struck by a natural disaster that destroys the established communications network.  One of the ideas that I’ve suggested to them is the use of weather balloons and similar, flexible “instant” towers that go up much faster and cost considerably less than building traditional radio towers.

There was a twist to this idea in an announcement from the National Science Foundation a few weeks ago (http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=132161&org=NSF ):

“Yan Wan from the University of North Texas exhibited unmanned aerial vehicles (UAVs) she developed that are capable of providing wireless communications to storm-ravaged areas where telephone access is out.

“Typical wireless communications have a range limit of only a hundred meters, or about the length of a football field. However, using technology Wan and her colleagues developed, Wan was able to extend the Wi-Fi reach of drones to five kilometers, or a little more than three miles.”

The implications of this ever expanding communications capability are only beginning to be explored.  As an example, the NSF also noted:

“One day, Wan’s research will enable drone-to-drone and flight-to-flight communications, improving air traffic safety, coordination and efficiency.”

image

© 2014 Norman Jacknis

[http://njacknis.tumblr.com/post/95281362571/expanding-communications]

New Worldwide Robot Adventures

It’s summer and time to catch up on some interesting tech news.  This post is about robots going beyond their use in warehouses, factories or even as personal assistants – indeed, it’s about robots outdoors.

On the farm, in Australia, there’s the robotic LadyBird which

“was designed and built specifically for the vegetable industry with the aim of creating a ground robot with supporting intelligent software and the capability to conduct autonomous farm surveillance, mapping, classification, and detection for a variety of different vegetables.”

image

You can find out more at http://sydney.edu.au/news/84.html?newscategoryid=2&newsstoryid=13686, which also lets you know that its developer, University of Sydney robotics Professor Salah Sukkarieh, was named last month as the "Researcher of the Year” by the Australian Vegetable Industry association.

From robots working hard in the fields, let’s go to robots having some fun on the road – hitchBOT, the invention of two Canadian computer scientists.  hitchBOT plans to hitch rides across Canada this summer.

image

As HitchBot says on its website:

“I am hitchBOT — a robot from Port Credit, Ontario.

“This summer I will be traveling across Canada, from coast-to-coast. I am hoping to make new friends, have interesting conversations, and see new places along the way. As you may have guessed robots cannot get driver’s licenses yet, so I’ll be hitchhiking my entire way. I have been planning my trip with the help of my big family of researchers in Toronto. I will be making my way from the east coast to the west coast starting in July.

“As I love meeting people and hearing stories, I invite you to follow my journey and share your hitchhiking stories with me as well. If you see me by the side of the road, pick me up and help me make my way across the country!”

Going from the ground to the air – to semi-robotic flight, otherwise known as drones – there’s a new one that reminds me of the flying speeder bikes in Star Wars, without the pilot.  One article describes this new drone from Switzerland as:

“an autonomous drone in a fully immersive rollcage that keeps it protected from whatever it might fly into — in this case, trees, but the robust safety of the thing means it might soon be perfectly applicable for combing disaster areas or any other tight spaces.”

image

image

Also from the end of last year, another drone was featured in a New Scientist article titled “Spider-drones weave high-rise structures out of cables”.  This one was also developed in Switzerland at the Swiss Federal Institute of Technology (ETH) in Zurich.

As the article notes:

The drones could make building much easier, says roboticist Koushil Sreenath at Carnegie Mellon University in Pittsburgh, Pennsylvania. “You just program the structure you want, press play and when you come back your structure is done,” he says. “Our current construction is limited, but with aerial robots those limitations go away.”

And these are just a few of the examples of robotics changing how we will get things done outdoors around the globe.

© 2014 Norman Jacknis

[http://njacknis.tumblr.com/post/91251932294/new-worldwide-robot-adventures]

What Culture Is Needed For A Virtual Workforce?

A few weeks ago, the New York Times had a story about HP and its telecommuters – “Back-to-Work Day at H.P.”  While not quite calling for an end to telecommuting as Yahoo had done earlier this year, HP said it had added space and “invited” its employees back to the office.  Once again it seemed that a big tech company was doing a decidedly untech thing – downplaying the use of technology and pointing out how it can’t really substitute for old-fashioned patterns of interaction.

How do tech companies expect people to believe them, if their words don’t match their actions?

While the current technology for virtual interactions and a virtual workforce can certainly be improved, it’s not the major obstacle anymore.  A more important part of the disconnect between words and actions is that these tech companies are engineering leaders, but not leaders in organizational culture – and it is culture that is the real hurdle here. 

Tech and non-tech companies that want to ensure success for their virtual workforce need to build an appropriate culture and practices. 

For example, everyone involved with telecommuting needs to understand that email, text, and even phone calls constitute only a small part of the communications that human beings expect and are insufficient to support a high level of trust.  However, video chatting does enable people to get much of what would be communicated in person and has been shown to enhance trust.  So video ought to be the rule, not the exception, for virtual interaction.

Another important part of the culture of innovative companies is the encouragement of random interactions and collaboration among people.  This is what underlies the Three C’s that Tony Hsieh of Zappos emphasizes: collision, community and co-learning.

He clearly believes that this is only possible in a physical environment.  But these three C’s can also be well supported in a virtual environment, if the company sets up that environment for such collisions and makes it a part of its everyday culture.  Indeed, the range of people who can interact easily in the virtual workforce is much greater than in a physical office.

The company also needs to ensure that telecommuters don’t feel their chance of career advancement is dramatically diminished unless they show up at the office and hobnob with the right executives.  The article “Creating an Organizational Culture that Supports Telework” relates a good example of this situation, along with good general guidance on the positive actions that companies need to take.

In sum, as James Surowiecki wrote earlier this year in the New Yorker:

“At companies with healthier corporate cultures, it [telecommuting] often works well, and [former head of Xerox PARC] Seely Brown has shown how highly motivated networks of far-flung experts — élite surfers, say — use digital technologies to transmit knowledge much as they would in person.”

Building a 21st century culture of successful virtual interaction won’t come easily to companies that developed their more traditional culture in the 20th century.  But in an increasingly virtual and mobile world, it will be necessary for the HPs, Yahoos, and others to flourish.

© 2013 Norman Jacknis

[http://njacknis.tumblr.com/post/69691956542/what-culture-is-needed-for-a-virtual-workforce]