Run Of The River?

Dams that produce hydropower are among the longest-established renewable energy sources in the US. The American industrial revolution started in places like Massachusetts, with abundant free-flowing rivers that were tapped for their energy to power early factories.

Hydropower is still the largest source of renewable energy in the US, accounting for a bit under half of the total.

A few years ago, I was involved with a project that was intended to revive one of those early industrial cities, Holyoke, Massachusetts. The city still had one of the few operating dams left and it supplied local electric power at a significant discount compared to elsewhere in the state. So the idea developed of creating local jobs by building a data center in Holyoke as a remote cloud location for major universities and businesses in the Boston area. (Driving distance between the two is about 90 miles.)

Putting aside whether a data center can be a significant job creator like old-time car plants, it struck me that the state as a whole would benefit from using the water resources there, bringing down its relatively high cost of electricity in a digital age. Of course, river resources are present in many other states, particularly east of the Mississippi River and in the northwest.

Thus, at one meeting with representatives of the research facilities of Harvard and MIT, I asked a simple question. When was the last time that engineering or science researchers took a serious look at using better materials or designs to improve the efficiency of the turbines that the water flows through or finding replacements for turbines (like the VIVACE hydrokinetic energy converter shown here)?

image

Despite or maybe because of the Three Gorges Dam project in China and similar projects, hydropower from dams has diminished in popularity in the face of various environmental concerns. Yet the rivers still flow and contain an enormous amount of energy and giant dams don’t have to be the only way to capture that energy.

With that in mind, I also asked if they had looked at the possibility of designing smaller turbines so that smaller rivers could be tapped without traditional dams. Some variations of this idea are called “run of the river”. (Because of the variability of river flows, this version of hydropower doesn’t produce a consistent level of energy like a coal-burning plant. As with other renewables, it too will need more efficient and cost-effective means of storing electricity – batteries, super-capacitors, etc.)
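The physics behind that question is simple: the recoverable power of a river site depends mostly on its flow rate and its head (vertical drop), scaled by turbine efficiency. A minimal back-of-envelope sketch, with purely illustrative numbers (none taken from any of the projects mentioned here):

```python
# Back-of-envelope hydropower estimate: P = eta * rho * g * Q * H

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3s: float, head_m: float, efficiency: float = 0.85) -> float:
    """Electrical power in kW from flow (m^3/s), head (m), and turbine efficiency."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

# A small run-of-the-river site: 2 m^3/s of flow over a 3 m drop
print(round(hydro_power_kw(2.0, 3.0), 1))  # ~50 kW
```

Even a modest stream, in other words, carries power on the scale of dozens of households, which is why the variability of the flow (and the storage problem) matters so much.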

The quizzical stares I received could most diplomatically be translated as “Why would we do that? Hydraulic engineering is centuries old and well established.” However, the sciences of materials and fluid dynamics are dramatically better now than they were even seventy or a hundred years ago, which calls for a much stronger effort in new hydraulic engineering than has taken place. Periodically, the experts say this publicly, as in “Hydraulic engineering in the 21st century: Where to?”

As it turned out, a year or two later in 2011/2012, there was a peak of activity in hydropower experiments in the UK, Germany, Canada, Japan, and India. Here are just some of the more interesting examples:

·        Halliday Hydropower’s Hydroscrew

image

·        The Hydro Cat, a free-floating design

image

·        Blue Freedom’s “world’s smallest hydropower plant,” intended primarily for charging small mobile devices. As its slogan says: “1 hour of Blue Freedom in the river. 10 hours of power for your smartphone.”

image

·        In an unusual twist on this topic, Lucid Energy harnessed the power of water flowing through urban pipes.

image

These were interesting prototypes, experiments and small businesses, but without the kind of academic and financial support seen in the IT industry, they don’t seem to have the necessary scale to make an impact – notwithstanding the release two months ago of a Hydropower Vision paper by the US Department of Energy. I’d love to be corrected on this observation.

Perhaps this is another example of a disruptive technology, in the way that Clayton Christensen, who coined the term, originally defined it. Disruptive technologies start out at the low end of the market, where people have few or no other choices – places like India and the backcountry of advanced economies, which are poorly served by the electrical grid, if at all. Only later, possibly, will these products be able to go upmarket.

Too much of the discussion about disruptive technologies has been limited to information technology. There can be disruptive technologies in other fields, solving problems that are just as important as, perhaps more important than, the ones that app programmers solve – like renewable energy.

Only time will tell whether the technology and markets develop sufficiently for run-of-the-river and similar hydropower to become one of the successful disruptive technologies.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/151053869979/run-of-the-river]

The Internet Of A Hundred Years Ago

In many of my presentations, I have pointed out that the Internet is still very much in its early stages. There are tremendous gaps in the availability of high-speed, low-latency Internet access. Only at some point in the future will we truly be able to expect a visual conversation with almost anyone, almost anywhere on the globe.

Beyond expanding connectivity, there are other factors standing in the way of ubiquitous high quality visual communications.

First, the software – the interface that users have to deal with – is quite awkward. There are still too many instances where software, like Skype, just doesn’t work well or freezes or otherwise discourages people from everyday use.

Second, and more important, the mindset or culture of users seems not to have changed yet to readily accommodate visual conversations over the Internet. You surely know someone who just doesn’t want to communicate this way. There used to be many people who thought the telephone shouldn’t replace face-to-face meetings and that trying to do so was rude and/or too expensive.

Indeed, I use a rough parallel that we are today with the Internet about where we were with the telephone at the end of the 1920s. That was more than fifty years after the telephone had been invented. Of course, we’re not even fifty years into the life of the Internet.

Although the parallel between phone network and Internet is fairly obvious, it is enlightening or amusing to see history repeat itself. Here is a 1916 advertisement that hails how the telephone is “annihilating both time and space” – what we’ve also heard in more recent years about the Internet.

image

While there were many articles written at the time about the impact of telephones on society, the economy and life, even in the 1920s (or 30s or 40s or 50s …), telephone usage was not taken for granted. Among other things, long distance calling was not widely considered something most people would do.

image

Mobile telephony was discussed but not really in existence yet.

image

There was even a product that anticipated today’s Twitter and similar feeds – or maybe it was just a concept for a product, since vaporware was around even a hundred years ago.

image

The chart below shows the pattern of historical adoption of telephones in the US from 1876 until 1981.

image

From the perspective of 1981, never mind 2016, the first fifty years of telephony were the early age.

And since 1981? We’ve seen mobile phones overtake land lines in worldwide usage and become much more than devices for just talking to people.
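The S-shaped adoption pattern that such charts typically show can be modeled with a logistic curve: slow early decades, rapid mid-century growth, then saturation. A minimal sketch of that model (the parameters are purely illustrative, not fit to the actual US telephone data):

```python
import math

def logistic_adoption(year: float, midpoint: float = 1946.0,
                      rate: float = 0.09, saturation: float = 100.0) -> float:
    """Percent of households adopting a technology by a given year,
    under a logistic (S-curve) model: half of saturation at the midpoint year."""
    return saturation / (1.0 + math.exp(-rate * (year - midpoint)))

# Slow start, steep middle, flattening at the top
for year in (1900, 1930, 1950, 1980):
    print(year, round(logistic_adoption(year), 1))
```

The same shape keeps reappearing, which is one reason the telephone’s first fifty years make a useful benchmark for the Internet’s.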

So imagine what the next 100 years of Internet development will bring.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/150726984612/the-internet-of-a-hundred-years-ago]

[note this is an updated version of an earlier post in the beginning of 2014]

Eye Tech

Last month, I wrote about Head Tech – technology that can be worn on the head and used to control the world about us. Most of those products act as an interface between our brain waves and devices that are managed by computers that are reading our brain waves.

The other related area of Head Tech recognizes the major role of our eyes as, quite literally, windows to the world we inhabit.

Google may have officially sidelined its Glass product, but uses for it and products like it continue to be developed by a number of companies wanting to demonstrate the potential of the idea better than Google did. There are dozens of examples, but, to start, consider these three.

Carl Zeiss’s Smart Optics subsidiary accomplished the difficult technical task of embedding the display in what looks to everyone like a pair of regular curved eyeglasses. Oh, and they could even be glasses that provide vision correction. Zeiss is continuing to perfect the display while trying to figure out the business challenge of bringing this to market.

image

You can see a video with the leader of the project at https://youtu.be/MSUqt8M0wdo and a report from this year’s CES at https://www.youtube.com/watch?v=YX5GWjKi7fc

Also offering something that looks like regular glasses and is not a fashion no-no is LaForge Optical’s Shima, which has an embedded chip so it can display information from any app on your smartphone. It’s in pre-order now for shipment next year, but you can see what they’re offering in this video. A more popular video provides their take on the history of eye glasses.

While Epson is not striving to devise something fashionable, it is making its augmented reality glasses much lighter. This video shows the new Moverio BT-300 which is scheduled to be released in a few months.

Epson is also tying these glasses to a variety of interesting, mostly non-consumer, applications. Last week at the Interdrone Conference, they announced a partnership with one of the leading drone companies, DJI, to better integrate the visuals coming from the unmanned aerial camera with the glasses.

DAQRI is bringing to market next month an updated version of its Smart Helmet for more dangerous industrial environments, like field engineering. Because it is so much more than a pair of glasses, DAQRI can add all sorts of features, like thermal imaging. It is a high-end, specialized device, and has a price to match.

At a fraction of that price, Metavision has developed and will release “soon” its second generation augmented reality headset, the Meta 2. Its CEO’s TED talk will give you a good sense of Metavision’s ambitions with this product.

image

Without a headset, Augmenta has added recognition of gestures to the capabilities of glasses from companies like Epson. For example, you can press on an imaginary dial pad, as this little video demonstrates.

This reminds me a bit of the use of eye tracking from Tobii that I’ve included in presentations for the last couple of years. While Tobii also sells a set of glasses, their emphasis is on tracking where your eyes focus to determine your choices.

One of the nice things about Tobii’s work is that it is not limited to glasses. For example, their EyeX works with laptops as can be seen in this video. This is a natural extension of a gamer’s world.

Which gets us to a good question: even if they’re less geeky looking than Google’s product, why do we need to wear glasses at all? Among other companies and researchers, Sony has an answer for that – smart, technology-embedded contact lenses. But Sony also wants the contact lens to enable you to take photos and videos without any other equipment, as they hope to do with a new patent.

So we have Head Tech and Eye Tech (not to mention the much longer established Ear Tech) – and who knows what’s next!

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/150401402048/eye-tech]

Rules Of The Road For Robots

When we drive our cars, we mostly share a sense of common rules of the road that keep us all safe. Now that driverless cars are beginning to appear, there are similar questions about how those cars should behave, and even ethical ones. For example, in June, the AAAS’s Science magazine reported on a survey of the public’s attitudes in answer to the story’s title: “When is it OK for our cars to kill us?”

Driverless cars are just one instance of the gradual and continuing improvement in artificial intelligence which has led to many articles about the ethical concerns this all raises. A few days ago, the New York Times had a story on its website about “How Tech Giants Are Devising Real Ethics for Artificial Intelligence”, in which it noted that “A memorandum is being circulated among the five companies with a tentative plan to announce the new organization in the middle of September.”

Of course, this isn’t all new. About 75 years ago, the author Isaac Asimov formally introduced his famous Three Laws of Robotics:

1.     A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3.    A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

image

Even before robots came along, ethics focused on the interactions between people and how they should avoid harming and conflicting with each other – “do unto others …”. As artificial intelligence becomes a factor in our world, many people feel the need to extend this discussion to robots.

These are clearly important issues to us, human beings. Not surprisingly, however, these articles and discussions have a human-centric view of the world.

Much less – indeed very little – consideration has been given to how artificial intelligence agents and robots interact with each other. And we don’t need to wait for self-aware or superhuman robots to consider this.

Even with billions of not so intelligent devices that are part of the Internet of Things, problems have arisen.

image

This is, after all, an environment in which the major players haven’t yet agreed on basic standards and communications protocols between devices, never mind how these devices should interact with each other beyond merely communicating.  

But they will interact somehow and they will become much more intelligent – embedded AI. Moreover, there will be too many of these devices for simple human oversight, so instead, at best, oversight will come from other machines/things, which in turn will be players in this machine-to-machine world.

The Internet Society in its report on the Internet of Things last year at least began to touch on these concerns.

Stanford University’s “One Hundred Year Study” and its recently released report “Artificial Intelligence and Life in 2030” also draw attention to the challenges that artificial intelligence will pose, but the report too could focus more on the future intelligent Internet of Things.

As the inventors and producers of these things that we are rapidly connecting, we need to consider all the ways that human interactions can go wrong and think about the similar ways machine to machine interactions can go wrong. Then, in addition to basic protocols, we need to determine the “rules of the road” for these devices.

Coming back full circle to the impact on human beings, we will be affected if the increasingly intelligent, machine-to-machine world that we depend on is embroiled in its own conflicts. As the Kenyan proverb goes (more or less):

image

“When elephants fight, it is the grass that suffers.”

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/150075381291/rules-of-the-road-for-robots]