There are dozens of novels about dystopian robots – our future “overlords,” as they are portrayed.
In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.
Anyway, while we are only beginning to understand the implications of artificial intelligence and robotics, robots are developing rapidly and moving beyond those traditional tasks.
Robots are also showing off their fun and increasingly creative side.
Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.
Dancing
At the Consumer Electronics Show in Las Vegas, a strip club even demonstrated robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc
Music
Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.
The Flow Machines project in Europe has also been using AI to create music in the style of famous composers. See, for instance, its DeepBach, “a deep learning tool for automatic generation of chorales in Bach’s style”. https://www.youtube.com/watch?time_continue=2&v=QiBM7-5hA6o
Singing
Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia really has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s
There is even AI that will generate poetry – um, song lyrics.
Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, a system for generating topical poetry on a requested subject, like this one called “Bipolar Disorder”:
Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.
Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.
Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.
An altered state of manic episodes,
A journey through the long and winding roads.
Not exactly upbeat, but you could well imagine this being a song too.
I was asked to speak at the Cambridge (MA) Public Library’s annual staff development day a week ago Friday. Cambridge is one of the great centers of artificial intelligence (AI), machine learning and robotics research. So, naturally, my talk was in part about AI and libraries, going back to another post a year ago, titled “Free The Library”.
First, I put this in context – the increasing percentage of people who are part of the knowledge economy and have a continual need to get information, way beyond their one-time college experience. They are often dependent on googling what they need with increasingly frustrating results, in part because of Google’s dependence on advertising and keeping its users’ eyes on the screen, rather than giving them the answers they need as quickly as possible.
Second, I reviewed the various ways that AI is growing and is not just robots handling “low level” jobs. Despite the portrait of AI’s impact as only affecting non-creative work, I gave examples of Ross “the AI lawyer”, computer-created painting, music and political speeches. I showed examples of social robots with understanding of human emotions and the kind of technologies that make this possible, such as RealEyes and Affectiva (another company near Cambridge).
Third, I pointed out how these same capabilities can be useful in enhancing library services.
This isn’t really new if you think about it. Libraries have always invented ways to organize information for people, and since their profession began, librarians have been in the business of helping people find exactly what they need. The need now simply goes beyond paper publications, and it is greater than ever.
Librarians have many skills to add to the task of “organizing the world’s information, and making it universally accessible”. But as non-profit organizations interested in the public good, libraries can also ensure that the next generation of knowledge tools – surpassing Google search – is developed for non-commercial purposes.
Not surprisingly, of course, the immediate reaction of many librarians is that we don’t have the money or resources for that!
I reminded them of the important observation by Maureen Sullivan, former President of the American Library Association:
“With a nationally networked platform, library and other leaders will also have more capacity to think about the work they can do at the national level that so many libraries have been so effective at doing at the state and local levels.”
Thus, together, libraries can develop new and appropriate knowledge tools. Moreover, they can – and should – cooperate in a way that is usually off-limits to private companies that want to protect their “intellectual property.”
And the numbers are impressive. If the 70,000+ librarians and 150,000+ other library staff in the USA worked together, they could accomplish an enormous amount. Individual librarians could specialize in particular subjects, but be available to patrons everywhere.
And if they worked with academic specialists in artificial intelligence at nearby universities, such as MIT and Harvard in the case of the Cambridge Public Library, they could help lead the way into the future of a knowledge century, not merely react to it.
The issue is not “either AI or libraries” but both reinforcing each other in the interest of providing the best service to patrons. Instead of being purely AI, artificial intelligence, this new service would embody what is becoming a new buzzword – IA, intelligence augmentation for human beings.
Nor am I the only one talking this way. Articles published in March and May of this year, coming from quite different worlds, make similar points about this proposed marriage.
I ended my presentation with a call for action and a reminder that this is a second – perhaps last – chance for libraries. The first warning and call to action came more than twenty years ago in 1995 at the 61st IFLA General Conference. It came from Chris Batt, then of the Croydon Libraries, Museum and Arts in the UK:
“What are the implications of all this [advancing digital world] for the future of public libraries? … The answer is that while we cannot be certain about the future for our services, we can and should be developing a vision which encompasses and enriches the potential of the Internet. If we do not do that, then others will; and they will do it less well.”
One of the more interesting technologies that has been developing is called affective computing. It’s about analyzing observations of human faces, voices, eye movements and the like to understand human emotions — what pleases or displeases people or merely catches their attention. It combines deep learning, analytics, sensors and artificial intelligence.
While interest in affective computing hasn’t been widespread, it may be nearing its moment in the limelight. One such indication is that the front page of the New York Times, a couple of days ago, featured a story about its use for television and advertising. The story was titled “For Marketers, TV Sets Are an Invaluable Pair of Eyes.”
But the companies that were featured in the Times article are not the only ones or the first ones to develop and apply affective computing. IBM published a booklet on the subject in 2001. Before that, in 1995, the term “affective computing” was coined by Professor Rosalind Picard of MIT, who also created the affective computing group in the MIT Media Lab.
In a video, “The Future of Story Telling”, she describes what is essentially the back story to the New York Times article. In no particular order, among other companies working with this technology today, there are Affectiva, Real Eyes, Emotient, Beyond Verbal, Sension, tACC, nVisio, CrowdEmotion, PointGraB, Eyeris, gestigon, Intel RealSense, SoftKinetic, Elliptic Labs, Microsoft’s VIBE Lab and Kairos.
Affectiva, which Professor Picard co-founded, offers an SDK that reads emotions of people at home or in the office just by using web cams. Here’s a video that shows their commercially available product at work: https://www.youtube.com/watch?v=mFrSFMnskI4
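As a sketch of what sits behind such an SDK, here is a minimal, hypothetical pipeline in Python: OpenCV's stock face detector finds faces in webcam frames, and a placeholder classify_emotion function stands in for the trained model a vendor like Affectiva would supply. This is not their actual API, just an illustration of the flow.

```python
# A minimal affective-computing sketch: detect faces in webcam frames,
# then hand each face to an emotion classifier. OpenCV's Haar cascade
# handles detection; classify_emotion() is a hypothetical placeholder
# for the trained deep model that commercial SDKs provide.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_pixels):
    """Placeholder: a real system runs a trained model here."""
    return "neutral"  # e.g. one of: joy, surprise, anger, sadness, neutral

camera = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            emotion = classify_emotion(gray[y:y+h, x:x+w])
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            cv2.putText(frame, emotion, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("affect", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
finally:
    camera.release()
    cv2.destroyAllWindows()
```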
The two previous products have obvious application to web marketing and content. So much so that some predict a future in which affective technology creates an “emotion economy”.
But affective computing has longer term applications, most especially in robotics. As human-like robots, especially for an aging population in Asia, begin to be sold as personal assistants and companions, they will need to have the kind of emotional intelligence about humans that other human beings mostly have already. That’s likely to be where we will see some of the most impactful uses of affective computing.
Over the last couple of years, Japan’s SoftBank has developed Pepper, which it describes as a “social robot” since it aims to recognize human emotion and shows its own emotions. Here’s a video from the French company behind Pepper — https://www.youtube.com/watch?v=nQFgGS8AAN0
There are others doing the same thing. At Nanyang Technological University, Singapore, another social robot, called Nadine, is being developed. See https://www.youtube.com/watch?v=pXg33S3U_Oc
Both these social robots and affective computing overall still need much development, but already you can sense the importance of this technology.
When we drive in our cars, we mostly have a sense of common rules for the road to keep us all safe. Now that we begin to see driverless cars, there are similar issues for the behavior of those cars and even ethical questions. For example, in June, the AAAS’s Science magazine reported on a survey of the public’s attitudes in answer to the story’s title: “When is it OK for our cars to kill us?”
Driverless cars are just one instance of the gradual and continuing improvement in artificial intelligence which has led to many articles about the ethical concerns this all raises. A few days ago, the New York Times had a story on its website about “How Tech Giants Are Devising Real Ethics for Artificial Intelligence”, in which it noted that “A memorandum is being circulated among the five companies with a tentative plan to announce the new organization in the middle of September.”
Of course, this isn’t all new. About 75 years ago, the author Isaac Asimov formally introduced his famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
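Since the laws form a strict priority ordering, they are easy to sketch in code. Here is a toy illustration in Python; every attribute of an action is a hypothetical judgment that, in a real robot, would be the genuinely hard part.

```python
# Toy encoding of Asimov's Three Laws as a strict priority ordering over
# candidate actions. All of the boolean judgments are hypothetical stubs;
# deciding whether an action "harms a human" is the unsolved problem.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool        # Law 1: direct harm
    allows_human_harm: bool    # Law 1: harm through inaction
    disobeys_order: bool       # Law 2: disobedience of a human order
    endangers_self: bool       # Law 3: risk to the robot itself

def law_violations(a: Action) -> tuple:
    """Lower tuples sort first, so higher laws dominate lower ones."""
    first = a.injures_human or a.allows_human_harm
    second = a.disobeys_order and not first   # Law 2 yields to Law 1
    return (first, second, a.endangers_self)

def choose(actions):
    """Pick the candidate that violates only the lowest-priority laws."""
    return min(actions, key=law_violations)

candidates = [
    Action("stand by", injures_human=False, allows_human_harm=True,
           disobeys_order=False, endangers_self=False),
    Action("shield the human", injures_human=False, allows_human_harm=False,
           disobeys_order=True, endangers_self=True),
]
print(choose(candidates).name)  # -> "shield the human"
```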
Even before robots came along, ethics focused on the interactions between people and how they should avoid harming and conflicting with each other – “do unto others …”. As artificial intelligence becomes a factor in our world, many people feel the need to extend this discussion to robots.
These are clearly important issues to us, human beings. Not surprisingly, however, these articles and discussions have a human-centric view of the world.
Much less – indeed very little – consideration has been given to how artificial intelligence agents and robots interact with each other. And we don’t need to wait for self-aware or superhuman robots to consider this.
Even with billions of not so intelligent devices that are part of the Internet of Things, problems have arisen.
This is, after all, an environment in which the major players haven’t yet agreed on basic standards and communications protocols between devices, never mind how these devices should interact with each other beyond merely communicating.
But they will interact somehow and they will become much more intelligent – embedded AI. Moreover, there will be too many of these devices for simple human oversight, so instead, at best, oversight will come from other machines/things, which in turn will be players in this machine-to-machine world.
The Internet Society in its report on the Internet of Things last year at least began to touch on these concerns.
Stanford University’s “One Hundred Year Study”, in its recently released report “ARTIFICIAL INTELLIGENCE AND LIFE IN 2030”, also draws attention to the challenges that artificial intelligence will pose, but it too could focus more on the future intelligent Internet of Things.
As the inventors and producers of these things that we are rapidly connecting, we need to consider all the ways that human interactions can go wrong and think about the similar ways machine-to-machine interactions can go wrong. Then, in addition to basic protocols, we need to determine the “rules of the road” for these devices.
Coming back full circle to the impact on human beings, we will be affected if the increasingly intelligent, machine-to-machine world that we depend on is embroiled in its own conflicts. As the Kenyan proverb goes (more or less):
“When elephants fight, it is the grass that suffers.”
In our post-industrial, Internet world, an ever increasing percentage of the population has an ever increasing need for knowledge to make a living. This is why people have used the Internet’s search engines so much, despite being frequently frustrated by the volume and irrelevance of search results. They may also be suspicious of the bias and commercialism built into the results. Most of all, people intuitively grasp that search results are not the same thing as the knowledge they really want.
Thus, if I had to point to a single service that would dramatically raise the economic importance of libraries in this century, it would be satisfying this need in a substantive and objective way.
Yet, if you go to most dictionaries, you’ll find a definition of a library like this one from the Oxford Dictionary:
“A building or room containing collections of books, periodicals, and sometimes films and recorded music for people to read, borrow, or refer to”.
While few people would say that libraries shouldn’t provide books, as long as people want them, most librarians would point to the many services they have provided beyond collecting printed material.
Nevertheless, the traditional definition continues to limit the way too many librarians think. Even among those who object to the narrow definition in the dictionary, these two traditional assumptions about libraries are usually unquestioned:
1. Library services are mostly delivered in a library building.
2. Library services are mostly delivered by human beings.
My argument here is simple: If libraries are to meet the public needs of a 21st century knowledge economy, librarians must lift these self-imposed constraints. It is time to free the library and library services!
This isn’t as radical as it sounds. If we look deeper, more conceptually, at what has gone on in libraries, library services are about the community’s reserve of knowledge and sharing of information — and helping members of the community find what they need quickly, accurately and without bias. I’m proposing nothing different, except expanding the ways that libraries do this job.
The first of these two assumptions is the simplest one to abandon. Although the library building remains the focus for many in the profession, in various ways, virtual services are available through the web, chat, email or even Skype. (I’ve written before about the ways that library reference services could become available anywhere and be much improved through a national collaboration.)
The second assumption – the necessity for a human librarian at almost all points of service — will be a tougher one to discard.
Consider, though, one of the most important of the emerging, disruptive technologies – artificial intelligence and machine learning – which can supplement and enhance the ability of librarians to deliver information services well and at a scale appropriate for the large demand.
My hope is that, working with software and artificial intelligence experts, librarians will start creating machine learning and artificial intelligence services that will make in-depth, unbiased knowledge guidance and information reference universally available.
Doing that successfully as a national project will enable the library as an institution, if not a building, to reclaim its role as information central for people of all ages.
During the last several years, there have been a few experiments in using artificial intelligence to supplement reference services provided by human librarians. In the UK, the University of Wolverhampton offers its “Learning & Information Services Chatbot”.
A few weeks ago, the Knight News Challenge selected the Charlotte Mecklenburg Public Library’s DALE project with IBM Watson and described it as “the first AI enabled search portal within a public library setting.”
In a note that is very much in accord with my argument, they wrote:
“Libraries are the unsung heroes of the Information Age. In a world where everyone Googles for the right answer, many are unaware of the wealth of information that libraries have within their physical and digital collections.… DALE would be able to analyze the structured and unstructured data hidden within the public library’s vast collections, helping both staff and customers locate the information needed within one search setting.”
Despite the needs of library patrons, so far these examples are still rare for a couple of reasons.
Some people argue that libraries shouldn’t and maybe can’t compete with the big corporations, like Apple and Google, in helping people find the knowledge they need. As I’ve already noted above, many users experience these commercial services as a poor substitute for what they want.
In any case, abdicating its own responsibility is a disservice to library patrons and the public who have looked to libraries for objective, non-commercial information services for a very long time.
There is also a fear that wider use of artificial intelligence to help provide library services might put human librarians out of work. While that is not a concern that librarians generally discuss publicly, Steven Bell, Associate University Librarian of Temple University, wrote last month in Library Journal about this very subject – the potential for artificial intelligence to diminish the need for librarians. He called it the “Promise and Peril of AI for Academic Librarians”, although the article seemed to focus more on the peril.
This is the fear of every worker faced with the onslaught of technology and the resulting prospect of delivering more output in fewer hours. With artificial intelligence and related robotics, workers in industries where demand is not accelerating – like cars – may very well have something to worry about.
But the reality for librarians is different. The demand for information services is accelerating so that even in the face of greater productivity per person, employment prospects shouldn’t diminish.
Indeed, if these library services become real and gain traction, demand for them, and for the librarians who make them possible, will only increase, because knowledge creates a demand for new knowledge. To use an ungainly and somewhat distasteful analogy, it is like an arms race.
My concern is neither about corporate competition nor unemployment. Rather, my fear is that the library profession will not easily abandon its self-imposed limitations and will not expand its presence and champion new technology for its services. If those limitations remain, the public – having been forced to go elsewhere to meet their needs – will in the end devalue and reduce their support for libraries.
At the end of the year, there are many top 10 lists of the best movies, best books, etc. of the year. Here’s my list of the best non-fiction books I’ve read this year. But it has only eight books and some were published earlier than this year since, like the rest of you, I’m always behind in my reading no matter how many books, articles, and blogs I read.
Although some are better than others, none of these books is perfect. What book is perfect? But they each provide the reader with a new way of looking at the world, which in turn is, at a minimum, thought provoking and, even better, helps us to be more innovative.
I’ve highlighted the major theme of each, but these are books that have many layers and depth so my summary only touches on what they offer.
Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan
We’ve had a few scary books out this past year or so about how robots are going to take our work from us and enslave us. Kaplan’s brilliant book, published this year, is much more nuanced and sophisticated. It is not just “ripped from today’s headlines”. Instead, Kaplan provides history and deep context. Especially interesting is his discussion of the legal and ethical issues that arise when we use more of these artificially-intelligent devices.
Creating the Learning Society by Joseph Stiglitz & Bruce Greenwald
Joseph Stiglitz, the Nobel Prize winning economist, has been better known for “The Great Divide: Unequal Societies and What We Can Do About Them” which was published this year and is a sequel to his earlier book on the subject, “The Price of Inequality” (2012). While those deal with the important issue of economic inequality, at this point, that’s not news to most of us.
Less well known, if more rigorous as a work of economics, is his 2013 book “Creating the Learning Society”. With all the talk about the importance of lifelong learning and innovation to succeed in the economy of this century, there have been few in-depth analyses of how that translates into economic growth and greater incomes. Nor has there been much about the appropriate government policies to help a modern economy grow. Stiglitz provides both in this book.
The End Of College: Creating the Future of Learning and the University of Everywhere by Kevin Carey
Speaking of lifelong learning, I found this book thought-provoking, especially as a college trustee. Published this year, the rap on it is that it’s all about massive open online courses (MOOCs), but it is actually about much more than that. It provides a good history of the roles that colleges have been asked to play and describes a variety of ways that many people are trying to improve the education of students.
BiblioTech by John Palfrey
John Palfrey was the patron of Harvard Law School’s Library Lab; he is one of the nation’s leading intellectual property experts and now chairman of the Digital Public Library of America, among other important positions. BiblioTech, which was published earlier this year, describes a hopeful future for libraries – including a national network of libraries. (Readers of this blog won’t be surprised that Palfrey and I share many views, although he put these ideas all together in a book and, of course, elaborated on them much more than I do in these relatively short posts.)
Too Big To Know by David Weinberger
About five years ago, I got to work a bit with David Weinberger when he was one of the leaders of the library innovation lab at Harvard Law School, in addition to his work at Harvard’s Berkman Center. When I was introduced to the library lab’s ambitious projects, I joked with David that his ultimate ambition was to do nothing less than organize all of the world’s knowledge for the 21st century. This book, which was published a year later, is, I suppose, a kind of response to that thought.
My reading of Weinberger’s big theme is that we can no longer organize the world’s knowledge completely. The network itself has the knowledge. As the subtitle says: now “the smartest person in the room is the room” itself. Since not all parts of the network are directly connected, there’s also knowledge yet to be realized.
Breakpoint by Jeff Stibel
Despite its overheated subtitle, this book, published in 2013, is somewhat related to Weinberger’s book in that it focuses on the network. Using analogies from ant colonies and the neuron network of the human mind, Stibel tries to explain the recent past and the future of the Internet. As the title indicates, a key concept of the book is the breakpoint – the point at which the extraordinary growth of a network stops and its survival depends upon enrichment, rather than attempts at continuing growth. As a brain scientist, he also argues that the Internet, rather than any single artificially intelligent computer, is really the digital equivalent of the human brain.
Previously, I’ve devoted whole posts to two other significant books.
There has been great interest in robots that seem to act like humans, not just in the movies, but also in technology news. So much so that the big debate in the robot world would seem to be how much we should program our robots to be like us.
Previously, I’ve blogged about machines that create art, poetry and even news reports. While those are all intellectual exercises that people might think “smart” machines could do, there are also robots from Japan, of course, that can dance — maybe break dance — as you can see in this video from earlier this year.
(It’s worth noting that much of the leading edge robotics of this kind is coming from Japan, perhaps in the face of a declining and aging human population.)
Murata has made dancing robotic cheerleaders, albeit to show how to control and coordinate robots and not necessarily to set the dancing world on fire. They too have a video to demonstrate the point.
Some Canadians sent a robot, called HitchBot, hitchhiking, like a college student seeing the world for the first time. More than a year ago, I blogged about its trip across Canada. Then two months ago, there were several reports about how sad it was that HitchBot was beheaded by the criminal elements who supposedly control the streets of Philadelphia at night.
The New York Times’s poignant headline was “Hitchhiking Robot, Safe in Several Countries, Meets Its End in Philadelphia”.
In any event, to make up for the loss of HitchBot, other Philadelphians built Philly Love Bots. Radio station WMMR promoted their own version called Pope-Bot, in anticipation of the trip by Pope Francis. It has survived the real Pope’s trip to Philly and has even traveled around that area without incident.
Consider also sports, which has featured humans in contests with each other for thousands of years – albeit aided, more recently, by very advanced equipment and drugs.
Apparently, some folks now envision sports contests fought by robots doing things humans do, but only better. Cody Brown, the designer known for creating the visual storytelling tool Scroll kit, sees a different kind of story. In TIME Magazine, he suggested seven reasons “Why Robotic Sports Will One Day Rival The NFL”.
We also want robots to provide a human touch. Thinking of the needs of the elderly, RIKEN has developed “a new experimental nursing care robot, ROBEAR, which is capable of performing tasks such as lifting a patient from a bed into a wheelchair or providing assistance to a patient who is able to stand up but requires help to do so.”
The research staff at the Google Brain project have been developing a chatbot that can have normal conversations with people, even on subjects that don’t lend themselves to factual answers to basic questions that are the staple of such robotic services – subjects like the meaning of life. The chatbot learned a more human style by ingesting and analyzing an enormous number of conversations between real people.
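Google Brain's bot is a neural sequence-to-sequence model trained on huge dialogue corpora, far more than fits in a blog post, but the core idea of mining real human exchanges can be illustrated with a much cruder stand-in: a retrieval bot that answers a new prompt with the recorded reply to the most similar old one. A toy sketch, with an invented corpus:

```python
# A crude stand-in for a learned conversational model: store real
# (prompt, reply) pairs and answer a new prompt with the reply whose
# prompt overlaps it most (bag-of-words cosine similarity). Google
# Brain's bot instead trains a neural network on such pairs; this toy
# just shows the idea of mining human conversations for style.
import math
import re
from collections import Counter

corpus = [  # invented sample dialogue
    ("how are you today", "fine, thanks for asking."),
    ("what is the purpose of life", "to keep learning, I suppose."),
    ("are you a robot", "aren't we all, in a way?"),
]

def vectorize(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(prompt):
    target = vectorize(prompt)
    best_prompt, best_reply = max(
        corpus, key=lambda pair: cosine(target, vectorize(pair[0])))
    return best_reply

print(reply("what is the meaning of life?"))  # nearest recorded reply
```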
Of course, the desire to make robots and their ilk too much like humans can backfire. Witness the negative reaction to Mattel’s Talking Barbie.
Indeed, there are benefits if we don’t try to make robots in our human image – although doing so might make us feel less like gods 🙂
At Carnegie-Mellon, researchers decided that maybe it didn’t make sense to put “eyes” on a robot’s head, the way human bodies are arranged. As they announced a few days ago, they instead put the eyes into the robot’s hands, and that made the fingers much more effective.
We ought to consider that, with ever growing intelligence, eventually robots will figure it all out themselves. Researchers at Cambridge University and the University of Zurich have laid the groundwork by developing a robotic system that evolves and improves its performance. The robotic system then changes its own software so that the next generation is better.
As the lead researcher, Dr. Fumiya Iida, said:
“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery … we want to see robots that are capable of innovation and creativity.”
And where that leads will be unpredictable, except that it isn’t likely the robots will improve themselves by copying everything we humans do.
There have been all kinds of fun new ways that technology has become embedded into cars to help drivers.
Last Friday, the New York Times had an article about Audi testing what might be described as a driver-assisted race car going 120 miles per hour.
Just last month, Samsung demonstrated a way to see through a truck on country roads in Argentina. It was intended to help a driver know when it’s safe to pass and overtake the truck. But even those of us who get stuck in massive urban traffic jams would love the ability to see ahead.
Another version of the same idea was developed and unveiled last month by the Universitat Politècnica de València, Spain. They call their version EYES and you can see a report about it at https://youtu.be/eUQfalxPK0o
There have been variations on this theme over the last year or so, but so far the deployment of the technology hasn’t happened on real roads for regular drivers.
But Ford Motor Company announced a couple of weeks ago that it will start to equip a car this year with split view cameras that let drivers see around corners. They say it’s especially useful when backing into traffic. This is supposed to be a feature of their worldwide fleet of cars by 2020.
In the old days, when a driver had to maneuver into a tight corner, he/she asked a friend to stand outside the car and provide instructions. Now, Land Rover is helping the driver who is alone – without friends? – to get a better view and control the car at the same time by using a smartphone app.
Is this all a good thing? The New York Times had this quote in its Audi story:
“At this point, substantial effort in the automotive community is focused on developing fully autonomous driving technology,” said Karl Iagnemma, an automotive researcher at M.I.T. “Far less effort is focused on developing methods to allow a driver to intuitively and safely interact with the highly automated driving vehicle.”
Nevertheless, while these features are surely helpful, on balance, they seem to me to be transitional technologies.
A good example was the enhancement of controls for elevator operators even as the average passenger became able to press the very same automated buttons. Or, similarly, the attempt by horse-drawn carriage makers to keep up with auto makers until they finally lost the battle a hundred years ago. Maybe Polaroid cameras were the transitional technology between film that needed to be developed at a factory and pictures you can take on your phone.
Also in May, Uber and its partner, Carnegie Mellon University, did a test drive of its first autonomous vehicle. Of course, Uber’s plans and its role in disrupting the traditional taxi industry had already led to dire predictions like this one on the website of the CBS TV station in San Francisco: “How Uber’s Autonomous Cars Will Destroy 10 Million Jobs And Reshape The Economy by 2025”.
Indeed, taking this idea to its extreme conclusion, the Guardian reported a few months ago that Tesla’s CEO, Elon Musk, wants to ban human driving altogether. They quote him as saying:
“You can’t have a person driving a two-tonne death machine”.
So while these features will be fun, perhaps we’re just seeing the last gasp of human driving.
Recently there have been some interesting articles about how the graphic user interface we’ve had on our screens for many years is gradually being replaced by a new user interface – the conversation.
Earlier this month, Matt Gilligan wrote on his Medium blog:
Forget “there’s an app for that” — what’s next is “there’s a chat for that.”
Some of this is a result of the fact that people are more often using the web on their smartphones and tablets than on laptops and desktop computers. With their bigger screens, the older devices have more room for a nice graphic interface than smartphones do – even the newest smartphones, which always seem to be bigger than the previous generation.
And many people communicate much of the day through conversations that are composed of text messages. There’s a good listing of some of the more innovative text apps in “Futures of text”.
The idea of a conversational interface is also a reflection of the use of various personal assistants that you talk to, like Siri. These, of course, have depended on developments in artificial intelligence, in particular the recognition and processing of natural (human) spoken language. Much research is being conducted to make these better and less the target of satire – like this one from the Big Bang Theory TV series.
There’s another branch of artificial intelligence research that should be resurrected from its relative oblivion to help out – expert systems. An expert system attempts to automate the kind of conversation – especially a dynamic, intelligent sequence of questions and answers – that would occur between a human expert and another person. (You can learn more at Wikipedia and GovLab.)
In the late 1980s and early 1990s, expert systems were the most hyped part of the artificial intelligence community.
As I’ve blogged before, I was one of those involved with expert systems during that period. Then that interest in expert systems rapidly diminished with the rise of the web and in the face of various technological obstacles, like the hard work of acquiring expert knowledge. More recently, with “big data” being collected all around us, the big focus in the artificial intelligence community has been on machine learning – having AI systems figure out what that data means.
But expert systems work didn’t disappear altogether. Applications have been developed for medicine, finance, education and mechanical repairs, among other subjects.
It’s now worth raising the profile of this technology much higher if the conversation becomes the dominant user interface. The reason is simple: these conversations haven’t been very smart. Most of the apps are good at getting basic information as if you typed it into a web browser. Beyond that? Not so much.
There are even very funny videos of the way these work, or rather don’t work well. Take a look at “If Siri was your mom”, prepared for Mother’s Day this year with the woman who was the original voice of Siri as Mom.
In its simplest form, an expert system may be represented as a smart decision tree based on the knowledge and research of experts.
It’s pretty easy to see how this approach could be used to make sure that the conversation – by text or voice – is useful for a person.
There is, of course, much more sophistication available in expert systems than such a simple tree. For example, some can handle probabilities and other forms of ambiguity. Others can be quite elaborate and can include external data, in addition to the answers from a person – for example, his/her temperature or speed of typing or talking.
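To make that concrete, here is a toy sketch of such a conversational expert system: a hand-built decision tree about a hypothetical laptop problem, walked by a simple question-and-answer loop. Real systems add probabilities and external data, as noted above.

```python
# A minimal expert-system sketch: knowledge is a decision tree whose
# internal nodes are yes/no questions and whose leaves are advice.
# The tree below (a laptop that won't start) is purely illustrative;
# a real system would encode an actual expert's diagnostic knowledge.

tree = {
    "question": "Does the laptop power on at all?",
    "no": {"advice": "Check the charger and battery first."},
    "yes": {
        "question": "Does it get past the boot logo?",
        "no": {"advice": "Likely a disk or OS problem; try recovery mode."},
        "yes": {"advice": "Hardware looks fine; investigate software and drivers."},
    },
}

def consult(node):
    """Walk the tree, asking one question per step, like a reference chat."""
    while "advice" not in node:
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        if answer in ("yes", "no"):
            node = node[answer]
        else:
            print("Please answer yes or no.")
    print(node["advice"])

consult(tree)
```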
The original developers of Siri have taken what they’ve learned from that work and are building their next product. Called “Viv: The Global Brain”, it’s still pretty much in stealth mode, so it’s hard to figure out how much expert system intelligence is built into it. But a story about them in WIRED last year showed an infographic which implies that an expert system has a role in the package. See the lower left of the second slide.
Personally I like the shift to a conversational interface with technology since it becomes available in so many different places and ways. But I’ll really look forward to it when those conversations become smarter. I’ll let you know as I see new developments.
The recent movie, The Imitation Game, brought the Turing Test to the attention of a general audience. First proposed by the British mathematician Alan Turing, the test basically proposes that computers will have achieved artificial intelligence when a person interacting with a computer cannot distinguish it from another human being.
Last year, it was reported that a machine successfully passed the Turing Test – sort of. (See this article in the Washington Post, for example.)
While that particular test didn’t set a very high standard, there is no doubt that machines are getting better at doing things that only humans used to do.
This past Sunday, there was an article in the New York Times Weekly Review titled “If an Algorithm Wrote This, How Would You Even Know?” Its (presumably human) author warned us that “a shocking amount of what we’re reading is created not by humans, but by computer algorithms.”
For example, the Associated Press uses Wordsmith from Automated Insights. With its Quill product, Narrative Science originally started out with sports reporting, but is now moving into other fields.
The newspaper even offered its own version of the Turing Test – a test of your ability to determine when a paragraph was written by a person or a machine. Try it. (Disclosure: I didn’t get it right 100% of the time, either.)
But, of course, it doesn’t stop with writing.
More interesting is the use of computers to be creative.
This is because among the differences between humans and other animals, some people claim, is our ability to produce creative works of art. Going back perhaps to the cave paintings, this seems to be a distinctly human trait.
(Of course, you can always find someone else who will dispute this, as in this article, “12 artsy animals that paint”.)
While we may find animals painting to be amusing, perhaps we’d find machines becoming creative more threatening.
Professor Simon Colton is one of the leaders in this field – which, by the way, goes back at least two decades. He has written:
“Our position is that, if we perceive that the software has been skillful, appreciative and imaginative, then, regardless of the behaviour of the consumer or programmer, the software should be considered creative.”
He and his team have worked with software called The Painting Fool. This post has some examples of its artwork, so you can judge for yourself whether you could tell a computer generated it.
I have my own little twist on this story from more than ten years ago. I met some artist/businessmen who designed high-end rock concert t-shirts and then had them painted. These were intended to be sold at the concert in relatively small quantities, as an additional form of revenue.
The artist would prepare the design and then have other people paint the shirts. But this was a slow, tedious process, so we discussed the use of robots to take it over. (At the time, the role of robotic painting machines in auto factories was becoming well known.)
One of the businessmen posed an obstacle by noting that people bought the hand-painted product because each was slightly different, given the variation between artists who painted them and even the subtle changes of one artist on any day. I somewhat shocked him by pointing out that, yes, even that kind of randomness could be computer generated and his customers would not likely be able to tell the difference.
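That claim is easy to back up with a few lines of code. Here is a toy sketch: take one master stroke path (invented here) and emit endlessly varied, “hand-painted” copies by adding small Gaussian jitter.

```python
# Demonstrating machine-made "hand variation": take one master
# brush-stroke path and emit slightly different copies by adding small
# Gaussian jitter, the way two human painters (or one painter on two
# different days) never reproduce a line exactly. The path is invented.
import random

master_stroke = [(0.0, 0.0), (1.0, 0.8), (2.0, 1.1), (3.0, 0.9), (4.0, 1.5)]

def hand_painted_variant(stroke, wobble=0.05, seed=None):
    rng = random.Random(seed)
    return [(x + rng.gauss(0, wobble), y + rng.gauss(0, wobble))
            for x, y in stroke]

# Each "shirt" gets its own, subtly different rendering of the design.
for shirt in range(3):
    print(hand_painted_variant(master_stroke, seed=shirt))
```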
But, perhaps, computers could tell the difference. A computer algorithm correctly identified Jackson Pollock paintings, as reported in a recent article in the International Journal of Art and Technology. (A less technical summary of this work can be found in a Science Spot article of a few weeks ago.)
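The article doesn’t spell out its algorithm, but one widely discussed statistic in work on Pollock’s drip paintings is the box-counting fractal dimension: count how many grid cells contain paint at several box sizes and fit the slope of the log-log relationship. A toy sketch on a made-up binary image:

```python
# Toy box-counting fractal dimension, a statistic often discussed in
# work on analyzing Pollock drip paintings (not necessarily the exact
# algorithm in the article). The "image" is a made-up binary grid
# where 1 marks paint.
import math
import random

random.seed(1)
N = 64
image = [[1 if random.random() < 0.3 else 0 for _ in range(N)]
         for _ in range(N)]

def box_count(img, box):
    """Number of box*box grid cells that contain at least one painted pixel."""
    size, n = len(img), 0
    for i in range(0, size, box):
        for j in range(0, size, box):
            if any(img[a][b] for a in range(i, min(i + box, size))
                             for b in range(j, min(j + box, size))):
                n += 1
    return n

# Fit log(count) against log(1/box) by least squares; the slope estimates D.
boxes = [2, 4, 8, 16]
xs = [math.log(1.0 / b) for b in boxes]
ys = [math.log(box_count(image, b)) for b in boxes]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print("estimated fractal dimension:", round(slope, 2))
```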
In the end, they didn’t use robots because they were too expensive compared to artists in the Philippines or wherever it was they hired them. Now, the robots are much cheaper, so maybe I should revive the idea.
Anyway, we’re likely to see even more impressive works of creativity by computer software and/or by artists working with computer software. The fun is just beginning.