Like many other people who have been watching the COVID-19 press conferences held by Trump and Cuomo, I came away with a very different feeling from each. Beyond the obvious policy and partisan differences, I felt there is something more going on.
Coincidentally, I’ve been doing some research on text analytics/natural language processing on a different topic. So, I decided to use these same research tools on the transcripts of their press conferences from April 9 through April 16, 2020. (Thank you to the folks at Rev.com for making available these transcripts.)
One of the best-known approaches goes by its initials, LIWC (Linguistic Inquiry and Word Count), and was created some time ago by Pennebaker and colleagues to assess, in particular, the psycho-social dimensions of texts. It’s worth noting that this assessment is based purely on the text – their words – and doesn’t include non-verbal communication, like body language.
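To make the method concrete, here is a minimal sketch of LIWC-style analysis: count how often words from hand-built psycho-social categories appear in a transcript, then report each category as a percentage of total words. The categories and word lists below are illustrative stand-ins I made up for this example; the real LIWC dictionary is far larger and proprietary.

```python
import re

# Toy stand-ins for LIWC categories; the real dictionary has dozens of
# categories and thousands of entries.
CATEGORIES = {
    "positive_emotion": {"nice", "great", "good", "love", "happy"},
    "anxiety": {"worried", "afraid", "fear", "anxious", "nervous"},
    "certainty": {"always", "never", "definitely", "certainly"},
    "tentative": {"maybe", "perhaps", "possibly", "seems"},
}

def liwc_style_profile(text):
    """Return each category's share of the transcript, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {
        cat: 100.0 * sum(1 for w in words if w in vocab) / total
        for cat, vocab in CATEGORIES.items()
    }

profile = liwc_style_profile(
    "Maybe it will be nice. Maybe great. We are definitely not worried."
)
print(profile)
```

Comparing two speakers then reduces to computing a profile per transcript and contrasting the percentages, which is essentially what the contrasts below report.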
While there were some unsurprising results to people familiar with both Trump and Cuomo, there are also some interesting nuances in the words they used.
Here are the most significant contrasts:
The most dramatic distinction between the two had to do with emotional tone. Trump’s words had almost twice the emotional content of Cuomo’s, including words like “nice”, although perhaps the use of that word should not be taken at face value.
Trump also spoke of rewards/benefits and money about 50% more often than Cuomo.
Trump emphasized allies and friends about twenty percent more often than Cuomo.
Cuomo used words that evoked health, anxiety/pain, home and family two to three times more often than Trump.
Cuomo asked more than twice as many questions, although some of these could be sort of rhetorical – like “what do you think?”
However, Trump was 50% more tentative in his declarations than Cuomo, whereas Cuomo expressed greater certainty.
While both men spoke in the present tense much more than the future, Cuomo’s use of the present was greater than Trump’s. On the other hand, Trump used the future and past tenses more than Cuomo did.
Trump used “we” a little more often than Cuomo and much more than he used “you”. Cuomo used “you” two to three times more often than Trump. Trump’s use of “they” even surpassed his use of “you”.
Distinctions of this kind are never crystal clear, even with sophisticated text analytics and machine learning algorithms. The ambiguity of human speech is not just a problem for machines, but also for people communicating with each other.
But these comparisons from text analytics do provide some semantic evidence for the comments by non-partisan observers that Cuomo seems more in command. This may be because the features of his talks would seem to better fit the movie portrayal and the average American’s idea of leadership in a crisis – calm, compassionate, focused on the task at hand.
There are dozens of novels about dystopian robots – our future “overlords”, as they are portrayed.
In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.
Anyway, while we are only beginning to try to understand the implications of artificial intelligence and robotics, robots are developing rapidly and going beyond those traditional tasks.
Robots are also showing off their fun and increasingly creative side.
Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.
Not to be outdone, at the Consumer Electronics Show in Las Vegas, a strip club had a demonstration of robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc
Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.
Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s
There is even AI that will generate poetry – um, song lyrics.
Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, the system behind their paper “Generating Topical Poetry”, which composes a poem on a requested subject, like this one called “Bipolar Disorder”:
Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.
Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.
Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.
An altered state of manic episodes,
A journey through the long and winding roads.
Not exactly upbeat, but you could well imagine this being a song too.
This is a brief follow-up to my last post about how librarians and artificial intelligence experts can get us all beyond mere curation and our frustrations with web search.
In their day-to-day Google searches many people end up frustrated. But they assume that the problem is their own lack of expertise in framing the search request.
In these days of advancing natural-language algorithms, that isn’t a very good explanation for users or a good excuse for Google.
We all have our own favorite examples, but here’s mine because it directly speaks to lost opportunities to use the Internet as a tool of economic development.
Imagine an Internet marketing expert who has an appointment with a local chemical engineering firm to make a pitch for her services and help them grow their business. Wanting to be prepared, she goes to Google with a simple search request: “marketing for chemical engineering firms”. Pretty simple, right?
Here’s what she’ll get:
She’s unlikely to live long enough to read all 43,100,000+ hits, never mind reading them before her meeting. And, aside from an ad on the right from a possible competitor, there’s not much in the list of non-advertising links that will help her understand the marketing issues facing a potential client.
This is not how the sum of all human knowledge – i.e., the Internet – is supposed to work. But it’s all too common.
This is the reason why, in a knowledge economy, I place such a great emphasis on deep organization, accessibility and relevance of information.
A few weeks ago, I wrote about the second chance given to libraries, as Google’s role in the life of web users slowly diminishes. Of course, for at least a few years, one of the responses of librarians to the growth of the digital world has been to re-envision libraries as curators of knowledge, instead of mere collectors of documents. It’s not a bad start in a transition.
Indeed, this idea has also been picked up by all kinds of online sites, not just libraries. Everyone it seems wants to aggregate just the right mix of articles from other sources that might interest you.
But, from my perspective, curation is an inadequate solution to the bigger problem this digital knowledge century has created – we don’t have time to read everything. Filtering out the many things I might not want to read at all doesn’t help me much. I still end up having too much to read.
And we end up in the situation summed up succinctly by the acronym TL;DR, too long, didn’t read. (Or my version in response to getting millions of Google hits – TMI, TLK “too much information, too little knowledge”.)
“How do we find topically relevant, semantically related, timely information in massive amounts of data in diverse languages, formats, and genres? Given the incredible amounts of information available today, merely reducing the size of the haystack is not enough; information professionals … require timely, focused answers to complex questions.”
Like NIST, what I really want – maybe what you want or need too? – is someone to summarize everything out there and create a new body of work that tells me just what I need to know in as few words as possible.
Researchers call this abstractive summarization and this is not an easy problem to solve. But there has been some interesting work going on in various universities and research labs.
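For contrast with that research, here is a minimal sketch of the older *extractive* approach – selecting existing sentences rather than writing new ones – which is the baseline that abstractive summarization tries to surpass. The scoring scheme (average content-word frequency per sentence) is a simplification I chose purely for illustration.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Keep the n highest-scoring sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        # Average corpus frequency of the sentence's words.
        toks = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)

doc = ("Libraries organize knowledge. Search engines return millions of hits. "
       "Libraries help people find relevant knowledge quickly.")
print(extractive_summary(doc))
```

An abstractive system, by contrast, would generate a new sentence of its own, which is what makes the problem so much harder.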
At Columbia University, Professor Kathleen McKeown and her research colleagues developed “NewsBlaster” several years ago to organize and summarize the day’s news.
Among other companies, Automated Insights has developed some practical solutions to the overall problem. Their Wordsmith software has been used, for example, by the Associated Press “to transform raw earnings data into thousands of publishable stories, covering hundreds more quarterly earnings stories than previous manual efforts”.
The next step, of course, is to combine many different data sources and generate articles about them for each person interested in that combination of sources.
Just a few months ago, Salesforce’s research team announced a major advance in summarization. Their motivation, by the way, is the same as mine:
“In 2017, the average person is expected to spend 12 hours and 7 minutes every day consuming some form of media and that number will only go up from here… Today the struggle is not getting access to information, it is keeping up with the influx of news, social media, email, and texts. But what if you could get accurate and succinct summaries of the key points…?”
Maluuba, acquired by Microsoft, has been continuing earlier research too. As they describe their research on “information-seeking behaviour”:
“The research at Maluuba is tackling major milestones to create AI agents that can efficiently and autonomously understand the world, look for information, and communicate their findings to humans.”
Librarians have skills that can contribute to the development of this branch of artificial intelligence. While those skills are necessary, they aren’t sufficient and a joint effort between AI researchers and the library world is required.
However, if librarians joined in this adventure, they could also offer the means of delivering this focused knowledge to the public in a more useful way than just dumping it into the Internet.
For some time now, the library world and its supporters have worried about the rise of the Google search engine. Here’s just a sample of articles from the last ten years that express this concern and, of course, push back against the Google tide:
And there was also John Palfrey’s 2015 book, “BiblioTech: Why Libraries Matter More Than Ever in the Age of Google”, which shares some themes of this post.
This concern has had such a profound effect that many libraries have effectively curtailed their reference librarian services as people instead “Google it”.
No doubt Google is formidable. While there have been ups and downs (like 2015) in Google’s share of the search engine market, it is obviously very high. Some estimates put it at 80% or higher.
But the world is changing and perhaps librarians aren’t aware of a nascent opportunity.
In an article about a month ago, the data scientist Vincent Granville took a closer look at the data about the ways people search and get information. He found “The Slow Decline of Google Search”. Here are some of the highlights:
“Google’s influence (as a search engine) is declining. Not that their traffic share or revenue is shrinking, to the contrary, both are probably increasing.”
“The decline (and weakening of monopoly) is taking place in a subtle way. In short, Google is no longer the first source of information, for people to find an article, a document, or anything on the Internet.”
“What has happened over the last few years is that many websites are now getting most of their traffic from sources other than Google.”
“Google has lost its monopoly when it comes to finding interesting information on the Internet.”
“Interestingly, this creates an opportunity for entrepreneurs willing to develop a search engine.”
As the New York Times reported recently about the announcement of the new Pixel phone, Google has noticed all this and is strategically re-positioning itself as an artificial intelligence company.
What has this got to do with the Apple story?
Apple is now the most valuable company in the world. That wasn’t always so. Indeed, it was almost headed for oblivion, as the chart shows. Even now, its original business of selling personal computers hasn’t grown that much. Rather, Apple was able to add to its mix of products and services in a compelling way. It is one of the great turnaround stories in business history.
That history offers a lesson for librarians. The battle against what Google originally offered has been a tough one and libraries have suffered in the eyes of many people, especially the public officials and other leaders who provide their funding.
But looking forward, libraries should consider the opportunities arising from the fact that Google’s impact on Internet users is lessening, that the shine of Google’s “do no evil” slogan has worn off in the face of greater public skepticism and that artificial intelligence – really augmented human intelligence – is now a viable, disruptive technology.
As many once-great, now-defunct companies show – Apple being a rare exception – there aren’t many second chances. Libraries should take advantage of their second chance to play the role that they should.
When we drive in our cars, we mostly have a sense of common rules for the road to keep us all safe. Now that we begin to see driverless cars, there are similar issues for the behavior of those cars and even ethical questions. For example, in June, the AAAS’s Science magazine reported on a survey of the public’s attitudes in answer to the story’s title: “When is it OK for our cars to kill us?”
Driverless cars are just one instance of the gradual and continuing improvement in artificial intelligence which has led to many articles about the ethical concerns this all raises. A few days ago, the New York Times had a story on its website about “How Tech Giants Are Devising Real Ethics for Artificial Intelligence”, in which it noted that “A memorandum is being circulated among the five companies with a tentative plan to announce the new organization in the middle of September.”
Of course, this isn’t all new. About 75 years ago, the author Isaac Asimov formally introduced his famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Even before robots came along, ethics was focused on the interactions between people and how they should not harm and conflict with each other – “do unto others …”. As artificial intelligence becomes a factor in our world, many people feel the need to extend this discussion to robots.
These are clearly important issues to us, human beings. Not surprisingly, however, these articles and discussions have a human-centric view of the world.
Much less – indeed very little – consideration has been given to how artificial intelligence agents and robots interact with each other. And we don’t need to wait for self-aware or superhuman robots to consider this.
Even with billions of not so intelligent devices that are part of the Internet of Things, problems have arisen.
This is, after all, an environment in which the major players haven’t yet agreed on basic standards and communications protocols between devices, never mind how these devices should interact with each other beyond merely communicating.
But they will interact somehow and they will become much more intelligent – embedded AI. Moreover, there will be too many of these devices for simple human oversight, so instead, at best, oversight will come from other machines/things, which in turn will be players in this machine-to-machine world.
The Internet Society in its report on the Internet of Things last year at least began to touch on these concerns.
As the inventors and producers of these things that we are rapidly connecting, we need to consider all the ways that human interactions can go wrong and think about the similar ways machine to machine interactions can go wrong. Then, in addition to basic protocols, we need to determine the “rules of the road” for these devices.
Coming back full circle to the impact on human beings, we will be affected if the increasingly intelligent, machine-to-machine world that we depend on is embroiled in its own conflicts. As the Kenyan proverb goes (more or less):
“When elephants fight, it is the grass that suffers.”
Some of my blog posts seem to be ahead of news reported elsewhere, which is OK with me, but it also means that it might be helpful to list some interesting articles that continue past stories. Here are some:
My two-part series in March on the Coding Craze questioned the long-term value of the plan by many public officials to teach computer coding. While the general news media continue to talk and write about coding as an elixir for your career, WIRED Magazine recently ran a cover story titled “The End of Code”. See their web piece at http://www.wired.com/2016/05/the-end-of-code/
I’ve written several posts on one of my special interests – the related subjects of mixed reality, virtual reality, and blended physical and digital spaces. I noted sports as a natural fit for this, including highlighting the Trilite project last year. So it was great to read the announcement in the last few days that NBC and Samsung are collaborating to offer some of the Rio Olympics on Samsung VR gear.
We are all inundated with talk about how “things are changing faster than ever before” in our 21st-century world. Taking an unconventional view, in 2011 I asked “Telegraph vs. Internet: Which Had Greater Impact?” My argument was that the first half of the 19th century had much more dramatic changes, especially in speeding up communications – what I think was the first attempt to question the fastest-ever-changes meme. The New York Times also recently elaborated on this theme in an Upshot article titled “What Was the Greatest Era for Innovation? A Brief Guided Tour”.
In “Art and the Imitation Game”, March 2015, I wrote about how artificial intelligence is stepping into creative activities, like writing and painting. While there have been many articles on this subject since, one of the most intriguing was from the newspaper in the city with more attorneys per capita than anywhere else, as the Washington Post invited us to “Meet ‘Ross,’ the newly hired legal robot”.
I wrote about the White House Rural Telehealth meeting in April this year. The New York Times later had a report on the rollout of telehealth to the tens of millions of customers of Anthem, under the American Well label.
My interest in this area goes back several years, and in both that post and one on “The Decentralization Of Health Care” about a year and a half ago, I’ve touched on the difficulties posed by the fee-for-service health care system in the US and instead wondered if we would be better off paying health systems a yearly fee to keep us healthy – thus aligning our personal interests with those of the system. So it has been interesting to see that in April there was movement on this by the Centers for Medicare and Medicaid Services (CMS), the Federal government’s health insurance agency. Here are just some examples:
In our post-industrial, Internet world, an ever-increasing percentage of the population has an ever-increasing need for knowledge to make a living. This is why people have used the Internet’s search engines so much, despite being frequently frustrated by the volume and irrelevance of search results. They may also be suspicious of the bias and commercialism built into the results. Most of all, people intuitively grasp that search results are not the same thing as the knowledge they need.
Thus, if I had to point to a single service that would dramatically raise the economic importance of libraries in this century, it would be satisfying this need in a substantive and objective way.
Yet, if you go to most dictionaries, you’ll find a definition of a library like this one from the Oxford Dictionary: “a building or room containing collections of books, periodicals, and sometimes films and recorded music for people to read, borrow, or refer to.”
While few people would say that libraries shouldn’t provide books, as long as people want them, most librarians would point to the many services they have provided beyond collecting books.
Nevertheless, the traditional definition continues to limit the way too many librarians think. Even among those who object to the narrow definition in the dictionary, these two traditional assumptions about libraries are usually unquestioned:
Library services are mostly delivered in a library building.
Library services are mostly delivered by human beings.
My argument here is simple: if libraries are to meet the public needs of a 21st-century knowledge economy, librarians must lift these self-imposed constraints. It is time to free the library and library services!
This isn’t as radical as it sounds. If we look deeper, more conceptually, at what has gone on in libraries, library services are about the community’s reserve of knowledge and sharing of information – and helping members of the community find what they need quickly, accurately and without bias. I’m proposing nothing different, except expanding the ways that libraries do this job.
The first of these two assumptions is the simplest one to abandon. Although the library building remains the focus for many in the profession, virtual services are already available in various ways through the web, chat, email or even Skype. (I’ve written before about the ways that library reference services could become available anywhere and be much improved through a national service.)
The second assumption – the necessity for a human librarian at almost all points of service — will be a tougher one to discard.
One of the most important of the emerging, disruptive technologies – artificial intelligence and machine learning – can, though, supplement and enhance the ability of librarians to deliver information services well and at a scale appropriate for the large demand.
My hope is that, working with software and artificial intelligence experts, librarians will start creating machine learning and artificial intelligence services that will make in-depth, unbiased knowledge guidance and information reference universally available.
Doing this successfully as a national project will enable the library as an institution, if not a building, to reclaim its role as information central for people of all ages.
Over the last several years, there have been a few experiments in using artificial intelligence to supplement reference services provided by human librarians. In the UK, the University of Wolverhampton offers its “Learning & Information Services Chatbot”.
A few weeks ago, the Knight News Challenge selected the Charlotte Mecklenburg Public Library’s DALE project with IBM Watson and described it as “the first AI enabled search portal within a public library setting.”
In a note that is very much in accord with my argument, they wrote:
“Librarians are the unsung heroes of the Information Age. In a world where everyone Googles for the right answer, many are unaware of the wealth of information that libraries have within their physical and digital collections.… DALE would be able to analyze the structured and unstructured data hidden within the public library’s vast collections, helping both staff and customers locate the information needed within one search setting.”
Despite the needs of library patrons, so far these examples are still rare for a couple of reasons.
Some people argue that libraries shouldn’t and maybe can’t compete with the big corporations, like Apple and Google, in helping people find the knowledge they need. As I’ve already noted above, many users experience these commercial services as a poor substitute for what they want.
In any case, abdicating this responsibility is a disservice to library patrons and the public, who have looked to libraries for objective, non-commercial information services for a very long time.
There is also a fear that wider use of artificial intelligence to help provide library services might put human librarians out of work. While that is not a concern that librarians generally discuss publicly, Steven Bell, Associate University Librarian at Temple University, wrote last month in Library Journal about this very subject – the potential for artificial intelligence to diminish the need for librarians. He called it the “Promise and Peril of AI for Academic Librarians”, although the article seemed to focus more on the peril.
This is the fear of every worker faced with the onslaught of technology and the resulting prospect of delivering more output in fewer hours. With artificial intelligence and related robotics, workers in industries where demand is not accelerating – like cars – may very well have something to worry about.
But the reality for librarians is different. The demand for information services is accelerating, so that even in the face of greater productivity per person, employment prospects shouldn’t diminish.
Indeed, if these library services become real and gain traction, demand for them and for the librarians who make them possible will also increase, because knowledge creates a demand for new knowledge. To use an ungainly and somewhat distasteful analogy, it is like an arms race.
My worry is neither about corporate competition nor unemployment. Rather, my fear is that the library profession will not easily abandon its self-imposed limitations and will not expand its presence and champion new technology for its services. If those limitations remain, the public – having been forced to go elsewhere to meet their needs – will in the end devalue and reduce their support for libraries.
There have been all kinds of fun new ways that technology has become embedded into cars to help drivers.
Last Friday, the New York Times had an article about Audi’s testing what might be described as a driver-assisted race car, going 120 miles per hour.
Just last month, Samsung demonstrated a way to see through a truck on country roads in Argentina. It was intended to help a driver know when it’s safe to pass the truck. But even those of us who get stuck in massive urban traffic jams would love the ability to see ahead. (See the picture above.)
Another version of the same idea was developed and unveiled last month by the Universitat Politècnica de València, Spain. They call their version EYES and you can see a report about it at https://youtu.be/eUQfalxPK0o
There have been variations on this theme over the last year or so, but so far the deployment of the technology hasn’t happened on real roads for regular drivers.
But Ford Motor Company announced a couple of weeks ago that it will start to equip a car this year with split view cameras that let drivers see around corners. They say it’s especially useful when backing into traffic. This is supposed to be a feature of their worldwide fleet of cars by 2020.
In the old days, when a driver had to maneuver into a tight corner, he/she asked a friend to stand outside the car and provide instructions. Now, Land Rover is helping the driver who is alone – without friends? – to get a better view and control the car at the same time by using a smart phone app.
Is this all a good thing? The New York Times had this quote in its Audi story:
“At this point, substantial effort in the automotive community is focused on developing fully autonomous driving technology,” said Karl Iagnemma, an automotive researcher at M.I.T. “Far less effort is focused on developing methods to allow a driver to intuitively and safely interact with the highly automated driving vehicle.”
Nevertheless, while these features are surely helpful, on balance, they seem to me to be transitional technologies. (Allen Wirfs-Brock provided this helpful slide on the subject.)
A good example was the enhancement of controls for elevator operators when the average passenger could press the very same automated buttons. Or similarly, the attempt by horse-drawn carriage makers to keep up with auto makers until they firmly lost the battle a hundred years ago. Maybe Polaroid cameras were the transitional technology between film that needed to be developed at a factory and pictures you can take on your phone.
Some of this is a result of the fact that people are more often using the web on their smartphones and tablets than on laptops and desktop computers. With bigger screens, the older devices have more room for a nice graphical interface than smartphones do – even the newest smartphones, which always seem to be bigger than the previous generation.
And many people communicate much of the day through conversations that are composed of text messages. There’s a good listing of some of the more innovative text apps in “Futures of text”.
The idea of a conversational interface is also a reflection of the use of various personal assistants that you talk to, like Siri. These, of course, have depended on developments in artificial intelligence, in particular the recognition and processing of natural (human) spoken language. Much research is being conducted to make these better and less the target of satire – like this one from the Big Bang Theory TV series.
There’s another branch of artificial intelligence research that should be resurrected from its relative oblivion to help out – expert systems. An expert system attempts to automate the kind of conversation – especially a dynamic, intelligent sequence of questions and answers – that would occur between a human expert and another person. (You can learn more at Wikipedia and GovLab.)
In the late 1980s and early 1990s, expert systems were the most hyped part of the artificial intelligence community.
As I’ve blogged before, I was one of those involved with expert systems during that period. Then that interest in expert systems rapidly diminished with the rise of the web and in the face of various technological obstacles, like the hard work of acquiring expert knowledge. More recently, with “big data” being collected all around us, the big focus in the artificial intelligence community has been on machine learning – having AI systems figure out what that data means.
But expert systems work didn’t disappear altogether. Applications have been developed for medicine, finance, education and mechanical repairs, among other subjects.
It’s now worth raising the profile of this technology much higher if the conversation becomes the dominant user interface. The reason is simple: these conversations haven’t been very smart. Most of the apps are good at getting basic information as if you typed it into a web browser. Beyond that? Not so much.
There are even very funny videos of the way these work or rather don’t work well. Take a look at “If Siri was your mom”, prepared for Mother’s Day this year with the woman who was the original voice of Siri as Mom.
In its simplest form, an expert system may be represented as a smart decision tree based on the knowledge and research of experts.
It’s pretty easy to see how this approach could be used to make sure that the conversation – by text or voice – is useful for a person.
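As an illustration, here is a minimal sketch in Python of such a decision-tree conversation: each node asks a question and routes on the answer until it reaches a recommendation. The troubleshooting questions and answers are a made-up toy knowledge base, not drawn from any real system.

```python
# A toy expert-system knowledge base: dicts are question nodes,
# strings are leaf recommendations.
TREE = {
    "q": "Is the device powered on?",
    "yes": {
        "q": "Is it connected to the network?",
        "yes": "Escalate to a human expert.",
        "no": "Check the network cable or Wi-Fi settings.",
    },
    "no": "Plug in the device and press the power button.",
}

def consult(tree, answer_fn):
    """Walk the tree, calling answer_fn(question) -> 'yes' or 'no'."""
    node = tree
    while isinstance(node, dict):
        node = node[answer_fn(node["q"])]
    return node  # a leaf: the system's recommendation

# Scripted answers stand in for an interactive text or voice conversation.
answers = iter(["yes", "no"])
print(consult(TREE, lambda q: next(answers)))
```

In a real conversational interface, `answer_fn` would prompt the user by text or voice, and the tree would be built from expert knowledge rather than hard-coded.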
There is, of course, much more sophistication available in expert systems than is represented in this picture. For example, some can handle probabilities and other forms of ambiguity. Others can be quite elaborate and can include external data, in addition to the answers from a person – for example, his/her temperature or speed of typing or talking.
The original developers of Siri have taken what they’ve learned from that work and are building their next product. Called “Viv: The Global Brain”, it’s still pretty much in stealth mode so it’s hard to figure out how much expert system intelligence is built into it. But a story about them on WIRED last year showed an infographic which implies that an expert system has a role in the package. See the lower left on the second slide.
Personally I like the shift to a conversational interface with technology since it becomes available in so many different places and ways. But I’ll really look forward to it when those conversations become smarter. I’ll let you know as I see new developments.