Rules Of The Road For Robots

When we drive our cars, we mostly share a sense of common rules of the road that keep us all safe. Now that driverless cars are beginning to appear, similar questions arise about how those cars should behave, including ethical ones. For example, in June, the AAAS’s Science magazine reported on a survey of public attitudes in answer to the question posed in the story’s title: “When is it OK for our cars to kill us?”

Driverless cars are just one instance of the gradual, continuing improvement in artificial intelligence, which has prompted many articles about the ethical concerns it raises. A few days ago, the New York Times ran a story on its website, “How Tech Giants Are Devising Real Ethics for Artificial Intelligence,” which noted that “A memorandum is being circulated among the five companies with a tentative plan to announce the new organization in the middle of September.”

Of course, this isn’t all new. About 75 years ago, the author Isaac Asimov formally introduced his famous Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
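These laws are, in effect, a strict priority ordering: a lower law always yields whenever it conflicts with a higher one. As a purely illustrative sketch of that precedence (none of the names below come from Asimov or from any real robotics framework), it might be encoded in Python like this:

```python
# Hypothetical sketch: Asimov's Three Laws treated as a lexicographic
# priority ordering over candidate actions. All class and field names
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    endangers_self: bool = False  # would violate the Third Law

def choose_action(candidates):
    # Python compares tuples left to right, so a First Law violation
    # always outweighs a Second Law violation, which in turn outweighs
    # a Third Law violation.
    return min(candidates, key=lambda a: (a.harms_human,
                                          a.disobeys_order,
                                          a.endangers_self))

# Ordered to proceed, but proceeding would harm a human: the precedence
# makes the robot disobey the order rather than cause the harm.
best = choose_action([
    Action("proceed as ordered", harms_human=True),
    Action("refuse the order", disobeys_order=True),
])
print(best.name)  # -> refuse the order
```

Even this toy version makes clear that the interesting cases are the conflicts between laws, which is exactly where explicit rules matter.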


Even before robots came along, ethics focused on the interactions between people and how they should avoid harming and conflicting with each other (“do unto others …”). As artificial intelligence becomes a factor in our world, many people feel the need to extend this discussion to robots.

These are clearly important issues for us human beings. Not surprisingly, though, these articles and discussions take a human-centric view of the world.

Much less consideration, indeed very little, has been given to how artificial intelligence agents and robots will interact with each other. And we don’t need to wait for self-aware or superhuman robots to confront this question.

Even with the billions of not-so-intelligent devices that already make up the Internet of Things, problems have arisen.


This is, after all, an environment in which the major players haven’t yet agreed on basic standards and communications protocols between devices, never mind how these devices should interact with each other beyond merely communicating.  

But these devices will interact somehow, and they will become much more intelligent as AI is embedded in them. Moreover, there will be far too many of them for direct human oversight; at best, oversight will come from other machines, which will themselves be players in this machine-to-machine world.

The Internet Society, in its report last year on the Internet of Things, at least began to touch on these concerns.

Stanford University’s One Hundred Year Study, in its recently released report “Artificial Intelligence and Life in 2030,” also draws attention to the challenges that artificial intelligence will pose, but it too could focus more on the future intelligent Internet of Things.

As the inventors and producers of the things we are rapidly connecting, we need to consider all the ways that human interactions can go wrong and think about the similar ways machine-to-machine interactions can go wrong. Then, in addition to basic protocols, we need to determine the “rules of the road” for these devices, as sketched below.
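To make the idea concrete, here is one purely hypothetical sketch, in Python, of what a rule of the road might look like for a connected device: before acting on a peer’s request, it checks who is asking, throttles flooding, and defers to its own local constraints. None of the rules, names, or thresholds here come from any real IoT standard; they are stand-ins for conventions the industry has yet to agree on.

```python
# Hypothetical "rules of the road" for device-to-device requests.
# Every rule and name here is illustrative, not an existing standard.
import time

MAX_REQUESTS_PER_MINUTE = 60  # assumed convention: don't flood a peer

class Device:
    def __init__(self, device_id, trusted_peers):
        self.device_id = device_id
        self.trusted_peers = set(trusted_peers)
        self.request_log = []  # timestamps of recent incoming requests

    def handle_request(self, sender_id, command):
        now = time.time()
        # Rule 1: only act on requests from peers we already trust.
        if sender_id not in self.trusted_peers:
            return "refused: unknown sender"
        # Rule 2: back off if the sender is flooding us -- a
        # machine-to-machine analogue of yielding the right of way.
        self.request_log = [t for t in self.request_log if now - t < 60]
        if len(self.request_log) >= MAX_REQUESTS_PER_MINUTE:
            return "refused: too many requests"
        self.request_log.append(now)
        # Rule 3: never execute a command that conflicts with a standing
        # local constraint (stubbed out here).
        if self.conflicts_with_local_policy(command):
            return "refused: conflicts with local policy"
        return f"executing {command}"

    def conflicts_with_local_policy(self, command):
        return False  # placeholder; a real device would check actual state

thermostat = Device("thermostat-1", trusted_peers={"hub-1"})
print(thermostat.handle_request("hub-1", "set_temp 20"))    # executing set_temp 20
print(thermostat.handle_request("stranger", "set_temp 35")) # refused: unknown sender
```

The specifics are invented, but the point is not: without shared conventions like these, billions of interacting devices will resolve their conflicts ad hoc, or not at all.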

Coming full circle to the impact on human beings: we will be affected if the increasingly intelligent, machine-to-machine world we depend on becomes embroiled in its own conflicts. As the Kenyan proverb goes (more or less):


“When elephants fight, it is the grass that suffers.”

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/150075381291/rules-of-the-road-for-robots]