Robots Just Want To Have Fun!

There are dozens of novels about dystopian robots – our future “overlords”, as they are portrayed.

In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.

Anyway, while we are only beginning to try to understand the implications of artificial intelligence and robotics, robots are developing rapidly and going beyond those traditional tasks.

Robots are also showing off their fun and increasingly creative side.

Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.

Dancing

Last August, there was a massive robot dance in Guangzhou, China. It achieved a Guinness World Record for the “most robots dancing simultaneously”. See https://www.youtube.com/watch?v=ouZb_Yb6HPg or http://money.cnn.com/video/technology/future/2017/08/22/dancing-robots-world-record-china.cnnmoney/index.html

Not to be outdone, at the Consumer Electronics Show in Las Vegas, a strip club had a demonstration of robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc

Music

Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.

The Flow Machines project in Europe has also been using AI to create music in the style of more famous composers. See, for instance, its DeepBach, “a deep learning tool for automatic generation of chorales in Bach’s style”. https://www.youtube.com/watch?time_continue=2&v=QiBM7-5hA6o

Singing

Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s

Also, last August, there was a song composed by AI, although sung by a human – https://www.youtube.com/watch?v=XUs6CznN8pw&feature=youtu.be

There is even AI that will generate poetry – um, song lyrics.

Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, a system for generating topical poetry on a requested subject, like this poem called “Bipolar Disorder”:

Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.

Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.

Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.

An altered state of manic episodes,
A journey through the long and winding roads.

Not exactly upbeat, but you could well imagine this being a song too.

Finally, there is even the HRP-4C (Miim), which has been under development in Japan for years. Here’s her act – https://www.youtube.com/watch?v=QCuh1pPMvM4#t=3m25s

All singing, all dancing, indeed!

© 2018 Norman Jacknis, All Rights Reserved

More Than A Smart City?

The huge Smart Cities New York 2018 conference started today. It is billed as:

“North America’s leading global conference to address and highlight critical solution-based issues that cities are facing as we move into the 21st century. … SCNY brings together top thought leaders and senior members of the private and public sector to discuss investments in physical and digital infrastructure, health, education, sustainability, security, mobility, workforce development, to ensure there is an increased quality of life for all citizens as we move into the Fourth Industrial Revolution.”

A few hours ago, I helped run an Intelligent Community Forum Workshop on “Future-Proofing Beyond Tech: Community-Based Solutions”. I also spoke there about “Technology That Matters”, which this post will quickly review.

As with so much of ICF’s work, the key question for this part of the workshop was: Once you’ve laid down the basic technology of broadband and your residents are connected, what are the next steps to make a difference in residents’ lives?

I have previously focused on the need for cities to encourage their residents to take advantage of the global opportunities in business, education, health, etc. that become possible when you are connected to the whole world.

Instead, in this session, I discussed six steps that are more local.

1. Apps For Urban Life

This is the simplest first step and many cities have encouraged local or not-so-local entrepreneurs to create apps for their residents.

But many cities that are not as large as New York are still waiting for those apps. I gave the example of Buenos Aires as a city that didn’t wait and built more than a dozen of its own apps.

I also reminded attendees that there are many potentially useful apps for their residents which cannot generate enough profit to be of interest to the private sector, so governments will have to create these apps on their own.

2. Community Generation Of Urban Data

While some cities have posted their open data, there is much data about urban life that residents themselves can collect. The most popular example is the community generation of environmental data, with products such as the Egg, the Smart Citizen Kit for Urban Sensing, the Sensor Umbrella and even more sophisticated tools like Placemeter.

But the data doesn’t just have to be about the physical environment. The US National Archives has been quite successful in getting citizen volunteers to generate data – and meta-data – about the documents in its custody.

The attitude which urban leaders need is best summarized by Professor Michael Batty of the University College London:

“Thinking of cities not as smart but as a key information processor is a good analogy and worth exploiting a lot, thus reflecting the great transition we are living through from a world built around energy to one built around information.”

3. The Community Helps Make Sense Of The Data

Once the data has been collected, someone needs to help make sense of it. This effort too can draw upon the diverse skills in the city. Platforms like Zooniverse, with more than a million volunteers, are good examples of what is called citizen science. For the last few years, there has been OpenData Day around the world, in which cities make available their data for analysis and use by techies. But I would go further and describe this effort as “popular analytics” – the virtual collaboration of both government specialists and residents to better understand the problems and patterns of their city.

4. Co-Creating Policy

Once the problems and opportunities are better understood, it is time to create urban policies in response. With the foundation of good connectivity, it becomes possible for citizens to conveniently participate in the co-creation of policy. I highlighted examples from the citizen consultations in Lambeth, England to those in Taiwan, as well as the even more ambitious CrowdLaw project that is housed not far from the Smart Cities conference location.

5. Co-Production Of Services

Next is the execution of policy. As I’ve written before, public services do not always have to be delivered by paid civil servants (or even by better-paid companies with government contracts). The residents of a city can be co-producers of services, as exemplified in Scotland and New Zealand.

6. Co-Creation Of The City Itself

Obviously, the people who build buildings or even tend to gardens in cities have always had a role in defining the physical nature of a city. What’s different in a city that has good connectivity is the explosion of possible ways that people can modify and enhance that traditional physical environment. Beyond even augmented reality, new spaces that blend the physical and digital can be created anywhere – on sidewalks, walls, even in water spray. And the residents can interact and modify these spaces. In that way, the residents are constantly co-creating and recreating the urban environment.

The hope of ICF is that the attendees at Smart Cities New York start moving beyond the base notion of a smart city to the more impactful idea of an intelligent city that uses all the new technologies to enhance the quality of life and engagement of its residents.

© 2018 Norman Jacknis, All Rights Reserved

When Strategic Thinking Needs A Refresh

This year I created a new, week-long, all-day course at Columbia University on Strategy and Analytics. The course focuses on how to think about strategy both for the organization as a whole as well as the analytics team. It also shows the ways that analytics can help determine the best strategy and assess how well that strategy is succeeding.

In designing the course, it was apparent that much of the established literature in strategy is based on ideas developed decades ago. Michael Porter, for example, is still the source of much thinking and teaching about strategy and competition.

Perhaps a dollop of Christensen’s disruptive innovation might be added into the mix, although that idea is no longer new. Worse, the concept has become so popularly diluted that too often every change is mistakenly treated as disruptive.

Even the somewhat alternative perspective described in the book “Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant” is now more than ten years old.

Of the well-established business “gurus”, perhaps only Gary Hamel has adjusted his perspective in this century – see, for example, this presentation.

But the world has changed. Certainly, the growth of huge Internet-based companies has highlighted strategies that do not necessarily come out of the older ideas.

So, who are the new strategists worthy of inclusion in a graduate course in 2018?

The students were exposed to the work of fellow faculty at Columbia University, especially Leonard Sherman’s “If You’re in a Dogfight, Become a Cat! – Strategies for Long-Term Growth” and Rita Gunther McGrath’s “The End Of Competitive Advantage: How To Keep Your Strategy Moving As Fast As Your Business”.

But in this post, the emphasis is on strategic lessons drawn from this century’s business experience with the Internet, including multi-sided platforms and digital content traps. For that there is “Matchmakers – The New Economics of Multisided Platforms” by David S. Evans and Richard Schmalensee, along with Bharat Anand’s “The Content Trap: A Strategist’s Guide to Digital Change”.

For Porter and other earlier thinkers, the focus was mostly on the other players that they were competing against (or decided not to compete against). For Anand, the role of the customer and the network of customers becomes more central in determining strategy. For Evans and Schmalensee, getting a network of customers to succeed is not simple and requires a different kind of strategic framework than industrial competition.

Why emphasize these two books? It might seem that they only focus on digital businesses, not the traditional manufacturers, retailers and service companies that previous strategists focused on.

But many now argue that all businesses are digital, just to varying degrees. For the last few years we’ve seen the repeated headline that “every business is now a digital business” (or some minor variation) from Forbes, Accenture, and the Wharton School of the University of Pennsylvania, among others you may not have heard of. And about a year ago, we read that “Ford abruptly replaces CEO to target digital transformation”.

Consider then the case of GE, one of the USA’s great industrial giants, which offers a good illustration of the situation facing many companies. A couple of years ago, it expressed its desire to “Become a Digital Industrial Company”. Last week, Steve Lohr of the New York Times reported that “G.E. Makes a Sharp ‘Pivot’ on Digital” because of its difficulty making the transition to digital and especially making the transition a marketing success.

At least in part, the company’s lack of success could be blamed on its failure to fully embrace the intellectual shift from older strategic frameworks to the more digital 21st century strategy that thinkers like Anand, Evans and Schmalensee describe.

© 2018 Norman Jacknis, All Rights Reserved

Too Many Unhelpful Search Results

This is a brief follow-up to my last post about how librarians and artificial intelligence experts can get us all beyond mere curation and our frustrations using web search.

In their day-to-day Google searches many people end up frustrated. But they assume that the problem is their own lack of expertise in framing the search request.

In these days of advancing natural language algorithms that isn’t a very good explanation for users or a good excuse for Google.

We all have our own favorite examples, but here’s mine because it directly speaks to lost opportunities to use the Internet as a tool of economic development.

Imagine an Internet marketing expert who has an appointment with a local chemical engineering firm to make a pitch for her services and help them grow their business. Wanting to be prepared, she goes to Google with a simple search request: “marketing for chemical engineering firms”. Pretty simple, right?

Here’s what she’ll get:

She’s unlikely to live long enough to read all 43,100,000+ hits, never mind reading them before her meeting. And, aside from an ad on the right from a possible competitor, there’s not much in the list of non-advertising links that will help her understand the marketing issues facing a potential client.

This is not how the sum of all human knowledge – i.e., the Internet – is supposed to work. But it’s all too common.

This is the reason why, in a knowledge economy, I place such a great emphasis on deep organization, accessibility and relevance of information.

© 2017 Norman Jacknis, All Rights Reserved

What Comes After Curation?

[Note: I’m President of the board of the Metropolitan New York Library Council, but this post is only my own view.]

A few weeks ago, I wrote about the second chance given to libraries, as Google’s role in the life of web users slowly diminishes. Of course, for at least a few years, one of the responses of librarians to the growth of the digital world has been to re-envision libraries as curators of knowledge, instead of mere collectors of documents. It’s not a bad start in a transition.

Indeed, this idea has also been picked up by all kinds of online sites, not just libraries. Everyone it seems wants to aggregate just the right mix of articles from other sources that might interest you.

But, from my perspective, curation is an inadequate solution to the bigger problem this digital knowledge century has created – we don’t have time to read everything. Filtering out the many things I might not want to read at all doesn’t help me much. I still end up having too much to read.

And we end up in the situation summed up succinctly by the acronym TL;DR: too long; didn’t read. (Or my version, in response to getting millions of Google hits – TMI, TLK: “too much information, too little knowledge”.)

The AQAINT project (2010) of the US government’s National Institute of Standards and Technology (NIST) stated this problem very well:

“How do we find topically relevant, semantically related, timely information in massive amounts of data in diverse languages, formats, and genres? Given the incredible amounts of information available today, merely reducing the size of the haystack is not enough; information professionals … require timely, focused answers to complex questions.”

Like NIST, what I really want – maybe what you want or need too? – is someone to summarize everything out there and create a new body of work that tells me just what I need to know in as few words as possible.

Researchers call this abstractive summarization and this is not an easy problem to solve. But there has been some interesting work going on in various universities and research labs.
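True abstractive summarization generates entirely new sentences, which is what makes it hard. For contrast, the much simpler extractive baseline – selecting the most representative existing sentences – can be sketched in a few lines. This toy example is only an illustration of the baseline idea, not how any of the systems discussed here actually work: it scores each sentence by the average frequency of its words across the document and keeps the top scorers.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Return the n highest-scoring sentences, in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Document-wide word frequencies serve as a crude importance signal.
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Re-emit the chosen sentences in the order they appeared.
    return ' '.join(s for s in sentences if s in top)
```

Real research systems replace the frequency score with learned models, and abstractive systems go further by rewriting rather than selecting, but the goal is the same: fewer words, same key points.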

At Columbia University, Professor Kathleen McKeown and her research colleagues developed “NewsBlaster” several years ago to organize and summarize the day’s news.

Among other companies, Automated Insights has developed some practical solutions to the overall problem. Their Wordsmith software has been used, for example, by the Associated Press “to transform raw earnings data into thousands of publishable stories, covering hundreds more quarterly earnings stories than previous manual efforts”.

For all their clients, they claim to produce “over 1.5 billion narratives annually”. And these are so well done that the New York Times had an article about it that was titled “If An Algorithm Wrote This, How Would You Even Know?”.

The next step, of course, is to combine many different data sources and generate articles about them for each person interested in that combination of sources.

Just a few months ago, Salesforce’s research team announced a major advance in summarization. Their motivation, by the way, is the same as mine:

“In 2017, the average person is expected to spend 12 hours and 7 minutes every day consuming some form of media and that number will only go up from here… Today the struggle is not getting access to information, it is keeping up with the influx of news, social media, email, and texts. But what if you could get accurate and succinct summaries of the key points…?”

Maluuba, acquired by Microsoft, has been continuing earlier research too. As they describe their research on “information-seeking behaviour”:

“The research at Maluuba is tackling major milestones to create AI agents that can efficiently and autonomously understand the world, look for information, and communicate their findings to humans.”

Librarians have skills that can contribute to the development of this branch of artificial intelligence. While those skills are necessary, they aren’t sufficient and a joint effort between AI researchers and the library world is required.

However, if librarians joined in this adventure, they could also offer the means of delivering this focused knowledge to the public in a more useful way than just dumping it into the Internet.

As I blogged a few months ago:

Librarians have many skills to add to the task of “organizing the world’s information, and making it universally accessible”. But as non-profit organizations interested in the public good, libraries can also ensure that the next generation of knowledge tools – surpassing Google search – is developed for non-commercial purposes.

So, what comes after everyone has tried curation? Abstractive summarization aided by artificial intelligence software, that’s what!

© 2017 Norman Jacknis, All Rights Reserved

What Can You Learn From Virtual Mirrors?

A virtual mirror lets someone stand in front of a camera and see their image displayed on a large LED screen. Better yet, with the right software, it can change the image. With that ability, virtual mirrors have been used to show what new glasses will look like or to try on dresses – a virtual, flexible fitting room.


Virtual mirrors and their equivalent as smartphone apps have been around for the last couple of years, with examples from all over the world.

Marketers have already thought of extending this to social media, as one newspaper reported with a story titled “Every woman’s new best friend? Hyper-realistic new virtual mirror lets you to try on clothes at the flick of the wrist and instantly share the images online”.

This all provides a nice experience for customers and may even help sell a particular item to them. But that’s only the beginning.

Virtual mirrors are a tremendous source of data about consumer behavior. Consider that the system can record every item the consumer looked at and then what she or he bought. Add to that the information about the person that can be detected – hair color, height, etc. With the application of the right analytics, a company can develop insights about how and why some products are successful – for example a particular kind of dress may be what short or tall women are really looking for.
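As a rough sketch of the kind of analysis this data makes possible – the event-log format and the item names here are hypothetical assumptions for illustration, not any vendor’s actual API – one could compute view-to-purchase conversion rates per item:

```python
from collections import defaultdict

# Hypothetical virtual-mirror event log:
# each record is (customer_id, item_id, event), event being "viewed" or "bought".
events = [
    ("c1", "red_dress", "viewed"), ("c1", "blue_dress", "viewed"),
    ("c1", "blue_dress", "bought"),
    ("c2", "red_dress", "viewed"), ("c2", "red_dress", "bought"),
    ("c3", "red_dress", "viewed"),
]

def conversion_rates(events):
    """View-to-purchase rate per item: times bought / times viewed."""
    views, buys = defaultdict(int), defaultdict(int)
    for _, item, event in events:
        if event == "viewed":
            views[item] += 1
        elif event == "bought":
            buys[item] += 1
    return {item: buys[item] / views[item] for item in views}
```

The same tally could be segmented by the detected customer attributes (height, hair color, etc.) to surface which products work for which shoppers.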

With eye tracking devices, such as those from Tobii, connected to the virtual mirror, even more data can be collected on exactly what the consumer is looking at – for example, the last part of a dress that she looked at before deciding to buy or not to buy.

Going beyond that, an analysis can be done of facial (and body) expressions. I’ve written before about affective computing, the technology being developed to make and respond to this kind of measurement.

[For some additional background on affective computing, see Wikipedia and MIT Media Lab’s website.]

By fully gathering all the data surrounding a consumer’s use of the virtual mirror, its value becomes much more than merely improving the immediate customer experience. In a world of what many consider big data, this adds much more data for the analytics experts on the marketing and product teams to investigate.

Alas, I haven’t seen widespread adoption and merger of these technologies. But the first retailer to move forward this way will have a great competitive advantage. This is especially true for brick-and-mortar retailers who can observe and measure a wider range of consumer behavior than can their purely e-commerce competitors.


© 2017 Norman Jacknis, All Rights Reserved

The War Is Over! Big Cities Win

In Tuesday’s New York Times, there was an article explaining “Why Big Cities Thrive, and Smaller Ones Are Being Left Behind”, as the headline put it. The article was filled with sad stories about small cities and small metro areas facing “dismal performance”. It said that, in the face of a global technological revolution, these cities “may be too small to [adapt and to] survive.”

The accompanying graphic is a vivid demonstration of the point that the economic war is over and big cities have won – with their huge urban concentrations of people. And so, the author of the article ends it with advice that

“the future for the residents of small-city America looks dim. Perhaps the best policy would be to help them move to a big city nearby.”

There is no doubt that the graphic image is correct and that many small cities, towns and rural areas have suffered economically over the last couple of decades.

How could this have happened when the Internet and technology were supposed, instead, to “kill distance” and diminish the importance of big cities?

I have argued that we are not really in the Internet age, despite – or because – of all the chatting, social media and email. A virtual version of the kind of casual conversations and interactions that happen in cities is still missing. The way Internet technology is used today limits our interactions. But that situation won’t last forever as more people, including those outside of the big metro areas, finally do get and use ubiquitous, easy and transparent videoconferencing.

This reminds me of my experience with the impact of the web on newspapers. When the web was first becoming popular, I was with a company working on software that was intended in part to help newspapers make the transition to a digital world. Although we weren’t successful in getting most newspapers to respond to the challenge (and opportunity), I was witness to the online discussions of newspaper employees as they struggled with the web phenomenon.

Through most of the 1990s, they were mildly concerned about the threat. When the dot-com bubble burst during 2000, these folks reassured each other that this web thing was indeed a passing fad. Shortly after that widespread agreement that the predictions of the impact of technology were mistaken, newspapers started to decline and shed staff.

Bill Gates has provided another way of looking at this:

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

A hint that there is more to the story could be found that same day in another New York Times article by their long-time technology reporter Steve Lohr. The story, “Start-Up Bets on Tech Talent Pipeline From Africa”, reported on a pool of tech talent in Lagos, Nairobi and Kampala. While Lagos and Nairobi are fairly large cities, none of these three is what people normally think of when talking about the metro areas that are the flagships of the global economy. They are not New York or London or San Francisco.

Yesterday (10/11/17) another NY Times story, “As ‘Unicorns’ Emerge, Utah Makes a Case for Tech Entrepreneurs”, appeared about the

“thriving technology hub in the roughly 80-mile swath from Provo to Ogden, with Salt Lake City in between. The region has given rise to at least five companies valued at more than $1 billion.”

Among those featured was Domo, an analytics company based in American Fork, Utah.

At ICF, we’ve seen a number of small cities and other non-metro areas that have flourished by taking advantage of the Internet and using broadband to connect their residents to anyone in the world.

While the wealth and advantages large metros have inherited from the industrial age are still being reflected in their role today, as we continue into this century, the intelligent use of technology to build thriving communities and quality of life will help cities of any size. So perhaps the obituary of small towns is not just premature, but misleading.

© 2017 Norman Jacknis, All Rights Reserved

The Rebirth Of The Learning Organization

Among the more ambitious and expansive CEOs, there’s a special kind of holy grail – transforming their organizations into learning organizations. Jack Welch, former and famous CEO of GE, put it this way in the 1990s:

“an organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive business advantage.”

The Business Dictionary defines a learning organization as an

“Organization that acquires knowledge and innovates fast enough to survive and thrive in a rapidly changing environment. Learning organizations (1) create a culture that encourages and supports continuous employee learning, critical thinking, and risk taking with new ideas, (2) allow mistakes, and value employee contributions, (3) learn from experience and experiment, and (4) disseminate the new knowledge throughout the organization for incorporation into day-to-day activities.”

Or as Peter Senge, one of the founders of the learning organization movement, famously said in 1990:

“A learning organization is an organization that is continually expanding its capacity to create its future.”

As you can see, this dream started more than 25 years ago.

By the first decade of this century, however, Mark Smith wrote:

“[W]hile there has been a lot of talk about learning organizations it is very difficult to identify real-life examples.”

The companion in thought of the learning organization movement was the knowledge management movement. Its goal was to capture, organize and distribute knowledge among the members of an organization.

But, in his 2014 paper, “A Synthesis of Knowledge Management Failure Factors”, Alan Frost was already conducting autopsies for the failure of knowledge management initiatives in many organizations.

In some ways, this reminds me of the first great wave of Artificial Intelligence in the 1980s when a lot of effort went into trying to codify the knowledge of experts into expert systems by extensively questioning them about their decision processes. It turns out that it is hard to do that.

Often experts – the knowledgeable ones – can’t really articulate their decision rules and, to make matters worse, those rules are at times probabilistic. Like other humans, experts often develop rubrics to simplify a problem, which unfortunately can limit their ability to continue to observe what’s happening. Human perception, in general, is an imperfect instrument.

Thus, even if an organization is successful in widely distributing the knowledge developed by its staff, it may be just propagating ideas that are, at best, only partly true.

All of these factors, and many others, have slowed the march to the dream of learning organizations.

But now we are possibly at the beginning of a rebirth of the learning organization.

What’s different today? Analytics and big data make the process of organizational learning much easier and better. Indeed, it is perhaps time to add analytics as a sixth discipline to Senge’s Five Disciplines of a learning organization.

After all, it’s not that people don’t know things that are important to an organization’s success – it’s just that they don’t know everything that is important and they can’t or won’t keep up with the torrent of new data vying for their attention.

The traditional gathering of human knowledge, combined with continuously improving analytics models, can achieve the dream so nicely stated by the executives and visionaries of twenty-five years ago.

For example, instead of trying to interview experts at length to capture their knowledge, someone in analytics today would prefer to review the thousands of cases where the characteristics of the case, the expert’s decision and the outcome were all known. Then some kind of machine learning would search for the underlying patterns. In that way, the expert’s tacit understanding of the world would arise out of the analytics.
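As a toy illustration of that approach – with entirely hypothetical case data, and a simple frequency tally standing in for a real machine-learning model – the pattern-finding step might be sketched like this:

```python
from collections import Counter, defaultdict

# Hypothetical past cases: (case characteristics, expert's decision).
cases = [
    (("high_risk", "urgent"), "escalate"),
    (("high_risk", "urgent"), "escalate"),
    (("high_risk", "routine"), "review"),
    (("low_risk", "routine"), "approve"),
    (("low_risk", "routine"), "approve"),
    (("low_risk", "routine"), "review"),
]

def learn_rules(cases):
    """For each pattern of characteristics, keep the most common decision.
    Ties among experts show up here as probabilistic, not fixed, rules."""
    tallies = defaultdict(Counter)
    for features, decision in cases:
        tallies[features][decision] += 1
    return {f: c.most_common(1)[0][0] for f, c in tallies.items()}

rules = learn_rules(cases)
```

Notice that the rule for low-risk routine cases is only a majority tendency, not a certainty – exactly the probabilistic character of expert judgment that interviews tend to miss.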

Nor does this cut experts out of the knowledge acquisition process. It just changes their role from that of memoirist: the experts can help kick off the building of the model and even assist in interpreting the results of the analytics.

Once the learning has begun, there is still much to learn (no pun intended) from the pioneers of this field. While they had great difficulty obtaining the knowledge – feeding the learning organization – they knew the rules of distributing that knowledge and making it useful. That is a lesson that today’s folks specializing in analytics could learn. Among these:

  • The importance of organizational culture
  • Leadership interest and support – especially for open discussion and experiment
  • Measurement (which would certainly provide grist for the analytics mill)
  • Systems thinking

For an illustration, see “Evidence in the learning organization” from the National Institutes of Health, in which these issues are focused on the medical profession.

If a marriage of the learning organization and knowledge management movements with the newer focus on analytics takes place, then all of those fields will improve and benefit.

© 2017 Norman Jacknis, All Rights Reserved

Horseless Carriages And Taxes

I noticed that the White House unveiled today its proposal for many changes in US taxes. I don’t normally comment on current political controversies and am not going to do so now, whatever my private views are on the policies.

But, as someone with an interest in 21st century technology, I did take notice of one thing about the proposal that I’ll comment on – admittedly something not as important as other aspects of the plan, but something that seems so outdated.

It is another example of how, for all the talk about technology and change, far too many people – especially public officials – are still subject to what the media expert Marshall McLuhan called the “horseless carriage syndrome”. When automobiles were first getting popular a hundred years ago, they were seen as carriages with a motor instead of a horse.

Only much later did everyone realize that the automobile made possible a different world, including massive suburbanization, increased mobility for all generations, McDonald’s and drive-ins (for a couple of decades anyway), etc. Cars were really more than motorized, instead of horse-driven, carriages.

Similarly, tech is more than the mere automation of traditional ways of doing things. Which brings me back to taxes.

In pursuit of a goal of simplifying the tax system, the White House proposed today to reduce the number of tax brackets from seven to three. 
(This image is from the NY Times.)

And that brings me to a question I have previously asked: why do we still have these tables of brackets that determine how much income tax we’re supposed to pay?

The continued use of tax brackets is just another example of horseless carriage thinking by public officials because it perpetuates an outmoded and unnecessary way of doing things.

In addition to being backward, brackets distort the way people make economic decisions, as they maneuver to avoid getting kicked into a higher tax bracket.

But we no longer have to live in a world limited to paper-based tables.  Assuming that we don’t go to a completely flat single percentage tax – and even the White House today doesn’t propose that – there is nothing in a progressive tax that should require the use of brackets. Instead, a simple system could be based on a formula which would eliminate the negative impacts of bracket-avoiding behavior that critics of progressive taxation point to.

And all it would take to implement this is an app on our phones or the web. An app could handle the most basic flat tax formula, like “TaxOwed = m * TaxableIncome”, where m is some percentage. It could also obviously handle more complicated versions for progressive taxes, like logarithmic or exponential formulas.

No matter the formula, we’re not talking about much computing power nor a very complicated app to build. There are tens of thousands of coders who could finish this app in an afternoon.
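To make the point concrete, here is a minimal sketch (in Python) of what a formula-based progressive tax could look like. The logistic curve and every parameter below are my own illustrative assumptions, not anything in the actual proposal:

```python
import math

# A sketch of a bracket-free progressive tax. The rates, midpoint, and
# logistic shape are illustrative assumptions, not a real policy proposal.

def tax_owed(taxable_income, bottom_rate=0.10, top_rate=0.35,
             midpoint=80_000, spread=40_000):
    """Effective rate climbs smoothly from near bottom_rate toward
    top_rate as income grows -- no cliffs between brackets."""
    s = 1 / (1 + math.exp(-(taxable_income - midpoint) / spread))
    effective = bottom_rate + (top_rate - bottom_rate) * s
    return taxable_income * effective

# Crossing what would have been a bracket boundary barely changes the bill.
print(f"tax at $79,999: ${tax_owed(79_999):,.2f}")
print(f"tax at $80,001: ${tax_owed(80_001):,.2f}")
```

Because the rate rises continuously, there is no income level at which earning one more dollar suddenly jumps you into a new bracket – which is exactly the bracket-avoidance distortion a formula eliminates.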

Again, the reduction of tax brackets from 7 to 3 is not among the big issues of the proposed tax changes. But maybe we’d also get better tax policies on the big issues from both parties if public officials could also reform and modernize their thinking – and realize we’re all in the digital age now.

[OK, I’m off my soapbox now 😉 ]

© 2017 Norman Jacknis, All Rights Reserved

Eating Our Own Cooking

What do a doctor suggesting a diet to a patient, an IT person asking a fellow worker to start using a new application, a mom asking a child to eat something, and an analytics expert asking a business person to trust statistical insights all have in common? They are all trying to get other people to do things.

But as the old saying goes, too often this suggestion comes across as “do as I say, not as I do.” We’ve heard other variations on this theme, such as “physician, heal thyself” or my favorite, “eat your own cooking”.

image

When I was a CIO, I used to tease some of the staff by pointing out they were all too willing to tell everyone else that they should use some new technology to do their jobs, but when it came to new technology in the IT world, the IT staff was the most resistant to change.

Among the most important changes that need to happen in many organizations today are those based on new insights about the business from analytics. But in big organizations, it is difficult to know how well those necessary changes are being adopted.

Analytics can help with this problem too. Analytics tools can help figure out what message about a change is getting across and how well the change is being adopted – and in which offices, regions, and kinds of people.

Yet, it is rare for analytics folks to use their tools to help guide their own success in getting analytics to be adopted. Here, though, are two examples, the first about individuals and the second about organizations as a whole.

Individual Willingness To Change

A couple of years ago, the Netherlands branch of Deloitte created a Change Adoption Profiler (CAP) model of their clients’ employees based on the willingness to adopt changes. As they describe it:

“Imagine being able to predict who will adopt change, and how they will adopt it before the change has even occurred. At Deloitte, we have developed a data driven decision making method called the Change Adoption Profiler – it provides insights into your company’s attitude toward change and allows you to address it head on.

“The CAP uses a diagnostic survey based on personal characteristic and change attitudes. Unlike traditional questionnaires CAP combines this with behavioral data to understand the profiles that exist within the organization. The CAP provides reliable, fact-based analytics – provides client insights that support smart decision making, reveals risks and signals how to approach change at an early stage.”

image

There is a nice little video that summarizes its work at https://www.youtube.com/watch?v=l12MQFCLoOs

Sadly, so far as I can tell from the public web, no other office of Deloitte is using this model in its work.

Organizational Analysis

Network analysis, especially social network analysis, is not a new focus of those in analytics. But, again, they don’t normally use network analysis to understand how well changes are being spread through an organization or business ecosystem.

One of the exceptions is the Danish change management consulting firm, Innovisor. They put particular emphasis on understanding the real organization – how and why people interact with each other – instead of relying solely or mostly on the official organization chart.

image

This little video explains their perspective – https://www.youtube.com/watch?v=ncXcvuSwXFM

In his blog post, Henry Ward, CEO of eShares, writes at some length about his company’s use of this network analysis to determine who the real influencers in the organization were. They ended up identifying 9 employees, not necessarily executives, who influenced 70% of all employees directly and 100% through a second connection. A detailed presentation can be found at https://esharesinc.box.com/shared/static/8rrdq4diy3kkbsyxq730ry8sdhfnep60.pdf
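The kind of reach calculation Ward describes can be illustrated with a small sketch. The interaction graph and names below are made up; a simple breadth-first search counts who is within one or two hops of a seed group:

```python
from collections import deque

# A toy version of the reach analysis: given who interacts with whom,
# how much of the organization does a small group of people reach
# directly, and within two hops? The graph below is made up.

def reach(graph, seeds, max_hops):
    """Return the set of people within max_hops of any seed (seeds included)."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        person, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for contact in graph.get(person, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, dist + 1))
    return seen

# Hypothetical interaction graph (undirected edges listed both ways).
graph = {
    "ana": ["ben", "carla"], "ben": ["ana", "dev"],
    "carla": ["ana", "eli"], "dev": ["ben"],
    "eli": ["carla", "fay"], "fay": ["eli"],
}

direct = reach(graph, ["ana"], max_hops=1)
two_hop = reach(graph, ["ana"], max_hops=2)
print(f"direct reach: {len(direct) - 1}, within two hops: {len(two_hop) - 1}")
```

Run against the real interaction data of an organization, the same search reveals whether a handful of well-placed people can carry a change message to nearly everyone.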

image

Given the value of these kinds of examples, the problem of not eating your own cooking is especially interesting in the analytics field. Perhaps it is new enough that many of its advocates still have the zeal of being part of an early, growing religion and can’t see why others might resist their insights. But they would be more convincing if they could show how, with analytics, they did their own jobs better – the job of getting their organizations to achieve the potential of their insights.

A Guide Book For Newcomers To The Connected Countryside

In the 19th century, there was a movement to reject the then developing modern industrial society and live in nature as isolated individuals.
Henry David Thoreau’s “Walden; or, Life in the Woods” provides a bit of the flavor of that world view.

That’s clearly not our world. In the past several years, a massive global migration to cities has been part of the conventional wisdom and has been frequently cited in various media.

But less noticed, although as important, is the exodus by young professionals from cities to the countryside. In various workshops, presentations and blog posts, I’ve pointed out that while many rural areas are dying, others are flourishing.

These newcomers to the countryside are not going there to reject our technological world. Instead, the 21st century world and its advanced communications capabilities are what make it possible for them to live in a place where they prefer the quality of life. They are living in the countryside, but very much a part of the world.

A Stateline article, “Returning to the Exurbs: Rural Counties Are Fastest Growing”, highlights the relevant comments of Professor Joel Garreau, comparing this to the earlier exurbs of three decades ago:

“workers no longer need to be confined to one place. Professionals with certain skills could work and live where they wanted—and the world would come to them in the guise of a brown UPS truck … today’s urban exiles aren’t looking for a lengthy commute from the far suburbs to a downtown office. … They’re looking to completely eradicate the notion of commuting to work and toiling from 9 to 5.”

Moreover, this is a phenomenon in many technologically advanced nations, as noted four years ago in William van den Broek’s article, “Toward a digital urban exodus”.

One of the sites on the internet that documents the successes, challenges and hardships of these new residents is the Urban Exodus. It was created by one of them – Alissa Hessler, who moved from the high-tech center of Seattle to rural Maine several years ago. On the website, there are already stories of about 50 couples and individuals who made the move.

Recently, Hessler wrote what amounts to a thoughtful, practical and well-illustrated guide book based on what she has learned from her own and others’ experiences – “Ditch The City and Go Country: How to Master the Art of Rural Life From a Former City Dweller.”

This is required reading if you’re considering such a move full time, or even living in the countryside for 2, 3, 4 days a week.

If you’re a confirmed city dweller, it is still an interesting read. The last two chapters in particular – on “earning a living” and “enjoying the good life” in the countryside – define the impact of technology and today’s trends more sharply than do the cities, which are still partly living off the inherited capital of the 20th century industrial age.

© 2017 Norman Jacknis, All Rights Reserved

An Active Tech Startup Scene — 6,500 Miles From Silicon Valley

Even though the Internet can connect people around the world, it’s surprising how little news in the tech community gets exchanged across national borders. I’m not referring to things like computer languages and algorithms, which the engineers exchange with each other – although mostly in the direction of north to south. Rather, it’s particularly knowledge of the business of tech and potential partners that doesn’t cross borders well.

Silicon Valley and North America are frequently covered in other parts of the world. And Americans periodically get tech and entrepreneurial news from Europe, Japan and now China.

But there’s a big world out there that most people who live in North America, Europe or East Asia are unaware of. Among the places where you might not expect a thriving tech scene is one 6,500 miles from Silicon Valley, even further away than Beijing or Shanghai. That city is Buenos Aires, which I visited last month.

In Buenos Aires and throughout Latin America, there is almost a parallel universe of tech activity – except it is conducted mostly in Spanish, which may be part of the reason it is less well known outside of Latin America (and perhaps tech centers like Barcelona, Spain).

image

There is so much entrepreneurial and tech activity going on in Buenos Aires that I was only able to sample a part of it. A good example is Startup Buenos Aires (SUBA).  By providing “community, education and resources”, SUBA aims to

“connect members locally and globally, while providing resources to grow a strong and sustainable startup ecosystem in Buenos Aires and around the world.”

Their calendar shows two or three events of interest to the tech and startup community every day.

image

Along with ten thousand other tech folks, I attended ExpoInternetLA, which declares that it is

“the biggest Business & Technology Event in Latin America, focused on B2B and M2M and many other technologies that are applied in every day and especially in business, virtually and/or physical. It is the first event of its kind in LATAM, where it promotes and stimulates the sector, businesses and investments that will influence the IoT & IoE in the region. During the 3 days of it, you can see innovation, new developments and releases as well as attend the conference program with top experts in the field, live different experiences offered by exhibitors and sponsors, make the business round and do networking at its best.”

image

Three days of presentations, which could easily have taken place in North America, covered a wide range of topics like Digital Transformation, IoT, biometrics and Bitcoin. And the exhibition area looked very similar to tech trade shows in the US, with the range of products and services you’d expect to see, except that the vendors were unfamiliar names, almost all from Latin America.

While ExpoInternet was conducted in Spanish, some of the presentations were in English and there were a large number of English speakers on the floor. Thus, they know what’s happening in English-speaking tech, although English-speaking tech may not know what’s happening here.

I also saw the two locations of AreaTres which calls itself “the meeting space of the Buenos Aires entrepreneurial ecosystem. Together with our partners, in the year 2015 we hosted 120 events focused on technology, innovation, design and entrepreneurship in which more than 100,000 people participated.”

image

Of the many events and meetups, one especially interesting to me was the local Digital Innovation Group with about a hundred in attendance (obviously including an out-of-towner, me). The presentation was on “Conversational UX: The Interface Dialog”, a topic you’d expect to have in Silicon Valley or New York. And the presentation was very much up to the current state of the art. It even ended with a demo of Albert The Bot as an interface to devices like Amazon’s Alexa-powered Echo.

This particular meeting was hosted at Solstice Consulting, a tech company that was started in Chicago but set up an office in Buenos Aires to take advantage of the tech talent pool there and the time zone (only two hours’ difference at this time of year, compared to the 10-12 hour differences with Asia).

By the way, they’re not alone and it’s not just tech, but also a strong design community. R/GA, the award-winning digital ad/film/product firm headquartered in New York that describes itself as “combining creativity with the power of disruptive technology”, also has an office in the same district of Buenos Aires.  And near one of the two AreaTres locations, there is also a Design Center.

And the city government itself is quite sophisticated, with excellent and innovative citizen-facing technology and support for all this private sector activity. Just one illustration: when I was in town on June 27th, the City government’s department for innovation ran InnovatiBA 2017, its fifth annual, all-day event for residents to “experience the future, today.”

Argentina’s economy has had its ups and downs recently. Unfortunately, over the last hundred years or so, it has also fallen far from its perch as one of the richest nations in the world.

Yet there is still considerable wealth here among some, and there are professional and educational traditions that remain strong. That legacy, along with the entrepreneurship, hard work, innovation and tech skills that I witnessed, adds up to positive signs for the future here – as the welcome mat to one of the tech offices declares.

image

© 2017 Norman Jacknis, All Rights Reserved @NormanJacknis

A Small-Town Tech Program That Enables People To Make A Living

There’s been lots of talk about our transition from an industrial manufacturing economy to a digital economy. Many people have been caught in this transition, just as many young farmers were caught in the transition to the industrial era and ended up filling the slums of rapidly growing cities more than a hundred years ago. While we see low earnings growth in cities and suburbs, this has been especially a problem in small towns and rural areas.

With all the talk about the issue, there’s been very little action considering the size of the problem – particularly few impactful programs to help these folks. And those that do exist usually deal with only part of the problem – say, training but not placement, or the other way around. That is partly due to the silos created by our laws.

Nevertheless, there are some programs worth watching, expanding and emulating. Digital Works, a non-profit organization which currently operates in Kentucky, Michigan, Mississippi, New Mexico, Ohio and Texas, is a good example.

image

This is not about creating more computer programming and other high-level jobs in big cities. Nor is it like programs that help low-income people in areas with concentrations of traditional metropolitan corporate employment, such as the successful Workforce Opportunity Services.

Instead, as you can tell from where they operate, Digital Works focuses on rural areas and small towns with high unemployment – the part of the economy that has been most left behind. As an example, one of their locations is Gallipolis, Ohio, about forty miles from the West Virginia border in southeastern Ohio. At its high point in 1960, almost 8,800 people lived there. The Census Bureau estimates there are fewer than 3,500 people there now.

These are also the places that require residents to travel the most to get to big (or even bigger) cities that have concentrated traditional employment in factories, offices and stores. So being able to make a living, by working digitally, in or near your home opens up all kinds of new economic opportunities.

Digital Works trains local people for contact center work that can be done anywhere there is sufficient Internet connectivity, either at home or at a work center. The goal is that the work pays better than minimum wage, with performance-based raises and promotional opportunities.

Digital Works handles the whole cycle that is necessary for the unemployed – recruitment, screening, training, placement, mentoring, development and retention. (It reminds me a bit of the transitional work programs for urban poor and ex-convicts that I helped run much earlier in my career.) They even work with their graduates to obtain the National Retail Federation’s Customer Service Certification.

They will create remote work centers in partnership with local governments in those areas where broadband is not yet widely available. It’s worth noting that Digital Works is a subsidiary of Connected Nation, which itself is focused on increasing broadband deployment and adoption.

Digital Works is fulfilling the vision of the Internet as the foundation for expanded economic growth everywhere it can reach.

And, of course, to complete the circle, a large part of their effort is on developing relationships with companies that would pay the people to whom they are giving several weeks of training. These business relationships also ensure that the training provided is what employers are actually looking for – something that is often discussed in other training programs, but not so closely practiced as by Digital Works.

One of the more interesting people who regularly participates in the Intelligent Community Forum’s activities is Stu Johnson, who directs Digital Works. He has said:

“There’s no other workforce training program that offers what we do—it’s really groundbreaking. We are able to offer employer-customized training to high-quality candidates, job-placement assistance, on-going mentorship, and even advanced training and career development. There is an excessive demand for these types of jobs, and Digital Works is connecting those employers with eager and trained job seekers.”

image

It is hard to find external studies of programs like this, which operate with little overhead in areas of the country that don’t get much national attention. But Diane Rekowski, Executive Director of the Northeast Michigan Council of Governments, has noted that

“The best part about this program is that it is free to anyone, and the success in Ohio has shown a 97% placement rate in a paid job upon completion of the training. Whether you are a recent high school graduate or enjoying your retirement years, this is an opportunity to have a flexible career and potential for earning much more than minimum wage.”

Digital Works’ data shows that the program has an 85% graduation rate and that 91% of their placements retain employment for more than a year.

While this program won’t work for everyone, everywhere, and it certainly isn’t turning its graduates into millionaires, it is the kind of thing that can make a tremendous difference in real lives, as this video shows – https://vimeo.com/150681091

© 2017 Norman Jacknis, All Rights Reserved @NormanJacknis

The AI-Enhanced Library

I was asked to speak at the Cambridge (MA) Public Library’s annual staff development day a week ago Friday. Cambridge is one of the great centers of artificial intelligence (AI), machine learning and robotics research. So, naturally, my talk was in part about AI and libraries, going back to another post a year ago, titled “Free The Library”.

First, I put this in context – the increasing percentage of people who are part of the knowledge economy and have a continual need to get information, way beyond their one-time college experience. They are often dependent on googling what they need, with increasingly frustrating results, in part because of Google’s dependence on advertising and keeping its users’ eyes on the screen, rather than giving them the answers they need as quickly as possible.

Second, I reviewed the various ways that AI is growing and is not just robots handling “low level” jobs. Despite the portrait of AI’s impact as only affecting non-creative work, I gave examples of Ross “the AI lawyer”, computer-created painting, music and political speeches. I showed examples of social robots with understanding of human emotions and the kind of technologies that make this possible, such as RealEyes and Affectiva (another company near Cambridge).

Third, I pointed out how these same capabilities can be useful in enhancing library services.

This isn’t really new if you think about it. Libraries have always invented ways to organize information for people, but the need now goes beyond paper publications. Since their profession began, librarians have been in the business of helping people find exactly what they need – and that need is greater now.

Librarians have many skills to add to the task of “organizing the world’s information, and making it universally accessible”. But as non-profit organizations interested in the public good, libraries can also ensure that the next generation of knowledge tools – surpassing Google search – is developed for non-commercial purposes.

Not surprisingly, of course, the immediate reaction of many librarians is: we don’t have the money or resources for that!

I reminded them of the important observation by Maureen Sullivan, former President of the American Library Association:

“With a nationally networked platform, library and other leaders will also have more capacity to think about the work they can do at the national level that so many libraries have been so effective at doing at the state and local levels.”

Thus, together, libraries can develop new and appropriate knowledge tools. Moreover, they can – and should – cooperate in a way that is usually off-limits to private companies who want to protect their “intellectual property.”

And the numbers are impressive. If the 70,000+ librarians and 150,000+ other library staff in the USA worked together, they could accomplish an enormous amount. Individual librarians could specialize in particular subjects, but be available to patrons everywhere.

And if they worked with academic specialists in artificial intelligence in their nearby universities – such as MIT and Harvard in the case of Cambridge Public Library – they can help lead the way to the future in a knowledge century, not merely react to it.

The issue is not “either AI or libraries” but both reinforcing each other in the interest of providing the best service to patrons. Instead of being purely AI, artificial intelligence, this new service would be what is captured by an emerging buzzword – IA, intelligence augmentation for human beings.

Nor am I the only one talking this way. Consider just three examples of articles in March and May of this year that come from different worlds, but make similar points about this proposed marriage:

I ended my presentation with a call for action and a reminder that this is a second – perhaps last – chance for libraries. The first warning and call to action came more than twenty years ago in 1995 at the 61st IFLA General Conference. It came from Chris Batt, then of the Croydon Libraries, Museum and Arts in the UK:

“What are the implications of all this [advancing digital world] for the future of public libraries? … The answer is that while we cannot be certain about the future for our services, we can and should be developing a vision which encompasses and enriches the potential of the Internet. If we do not do that, then others will; and they will do it less well.”

© 2017 Norman Jacknis, All Rights Reserved  @NormanJacknis

Surprises And Insights As Intelligent Community Leaders Meet Again

Returning to New York City, the Intelligent Community Forum (ICF) held its Annual Summit last week. Many of the ICF’s 160+ communities from around the world were represented, in addition to speakers and guests from this year’s Top7:

Although the Intelligent Community of the Year two years ago was Columbus, Ohio, it’s noteworthy that this year no American city or community made it to the Top7. (This year, Rochester, New York, was the only American city even in the Smart21.)

image

In addition to the contest, which attracts much interest, the Summit is also a place where people meet and present ideas on how to best use information and communications technologies as a foundation for creating better communities and quality of life.

It’s in the workshops and presentations from speakers who do not represent contestants that the most interesting insights often arise. This post will highlight some of those more unexpected moments.

1. First, there’s the Digital Government Society (DGS) of academic specialists in e-government, the Internet and citizen engagement. DGS also held its 18th Annual International Conference on Digital Government Research, dg.o 2017, last week. Its theme was “Innovations and Transformations in Government”.

image

Since ICF had a couple hundred government innovators in attendance and DGS is particularly interested in dialogue between academic researchers and practitioners, it was only natural that the two groups took a day last week to have a joint conference. This kind of interaction about areas of common interest was valuable for both groups. They even participated together in prioritizing the challenges facing the communities they lead or have studied.

2. As part of the program, there were some presentations by companies with their own perspective on intelligent communities. Perhaps the most unusual example was Nathaniel Dick of Hair O’Right International Corp., an extremely eco-friendly beauty products company best known for its caffeine shampoo. It won’t surprise you that Mr. Dick is a very earnest, entrepreneurial American. What may surprise you is that he and Hair O’Right are based in Taiwan.

image

3. While there has been much talk about government opening up its data to the public over the last several years, there haven’t been all that many really interesting applications – and perhaps too many cases where open data is used merely for “gotcha” purposes against office holders. But Yale Fox of RentLogic demonstrated a very useful application of open data, helping renters learn some of the hidden aspects of the apartments they are considering.

Starting with the nation’s biggest rental market, New York City, they pulled together a variety of public documents about the apartment buildings. Now all a potential renter needs to do is enter the address and this information will be available.  Wouldn’t you like to know about anything from a mold problem to frequent turnover of ownership before you moved in?

image

4. Rob McCann, CEO of ClearCable and one of Canada’s thought leaders on broadband, provided a down-to-earth review of the current state of broadband deployment and how to address the demand for its expansion. He noted Five Hidden Network Truths: “The Network is Oversubscribed (so manage it); The Network is Not Symmetrical (accept the best you can); Consumption CAGR exceeds 50% (so prepare for growth); More Peers are better than Bigger Peers; Operating costs are key, not just build costs”.

To get the point across, he also showed an old, but funny, video about Internet congestion.

5. One of the newest communities to join the summit was Binh Duong, Vietnam (population more than a million). Dr. Viet-Long Nguyen, Director of its Smart City Office, even gave a keynote presentation.

6. We’ve all heard much about the Internet of Things, the various sensors and devices that are supposed to change urban life – and, unfortunately, the sum of what too many people consider to be smart cities. By contrast, Mary Lee Kennedy, my fellow board member from the Metropolitan New York Library Council, led a panel on “The Internet of People”. With panelists bringing their own perspectives – Mozilla Leadership Network, Pew Research, the Urban Libraries Council, New York Hall of Science – the focus was on the people who are not yet benefiting from everything else we were hearing about that day at MIST, Harlem’s own tech center. (You can read her more detailed post here.)

Overall, a visitor to the summit is left with the impression that many places you wouldn’t have thought of are rapidly developing their technology potential and, more important, its value for their residents.

Next year, we expect to learn more and be surprised a few times more as the ICF Summit will be held in London and other parts of England.

© 2017 Norman Jacknis, All Rights Reserved @NormanJacknis

The Next Level: Communities That Learn

This week is the annual summit of the Intelligent Community Forum, where I’m Senior Fellow. Although there are workshops and meetings of the more than 140 intelligent communities from every continent, the events that draw the most attention are the discussions with the Top7 of the year and the ultimate winner.

image

These intelligent communities are leaders in using information technology and broadband communications for community and economic development. They represent the next level up from those cities which label themselves “smart” because of their purchases of products from various tech companies to manage the infrastructure of their cities – like street light management.

But intelligent communities should not be satisfied with merely going beyond vendor-driven “smart city” talk; they should instead ascend to the next level – creating a community that is always learning.

For a bit of background, consider the efforts over the last two decades to create learning organizations – companies, non-profits and government agencies that are trying to continuously learn what’s happening in their markets or service areas.

The same idea applies to less structured organizations, like the community of people who live and/or work in a city.

It’s worth noting that, unlike many of the big data projects in cities, this is not a top-down exercise by experts. It’s about everyone engaging in the process of learning new insights about where they live and work. That volunteer effort also makes it feasible for cash-starved local governments to consider initiating this kind of project.

In this sense, this is another manifestation of the citizen science movement around the world. Zooniverse, with more than a million volunteer citizen scientists, is probably the best example. Think Zooniverse for urban big data.

There are other examples in which people collect and analyze data. Geo-Wiki’s motto is “Engaging Citizens in Environmental Monitoring.” There’s also the Air Quality Egg, a “community-led air quality sensing network that gives people a way to participate in the conversation about air quality.”

image

Similarly, there’s the Smart Citizen project, developed in Fab Lab Barcelona, to create “open source technology for citizens’ political participation in smarter cities”. The Senseable City Lab at MIT has even equipped a car for environmental and traffic safety sensing.

Drones are already used for environmental sensing in rural areas, but as they become a bit safer and their flight times (i.e., batteries) get better, they will be able to stay up longer for real time data collection in cities. A small company in Quebec City, DroneXperts, is already making use of drones in urban areas.

Indeed, as each day goes by, there is more and more data about life in our communities that could be part of this citizen science effort — and not just environmental data.

Obviously, a city’s own data, on all kinds of topics and from all kinds of data collection sensors, is part of the mix. City governments even have information they are not aware of. Placemeter, for example, can use “public video feeds and computer vision algorithms to create a real-time data layer about places, streets, and neighborhoods.”

Non-governmental sources of data, like Waze’s Connected Citizens exchange for automobile traffic, are also available.

Sentiment data from social media feeds is another source. Even data from individual residents could be made available (on an anonymous basis) from their various personal tracking devices, like Fitbit. For background, see John Lynch’s talk a year ago on “From Quantified Self to Quantified City”.

Naturally, all of this data about a community can be an exciting part of public school classes on science, math and even social studies and the arts. Learning will become more relevant to the students since they will be focused on the place in which they live. Students could communicate and collaborate with each other in the same or separate classrooms or across the country and the world.

The Cities of Learning projects that started in Chicago a couple of years ago, which were primarily about opening up cultural and intellectual institutions outside of the classroom for K-12 students, were good, but different from this idea.

So “communities that learn” is not just for students. It is a way for adult residents to achieve Jane Jacobs’s vision of a vibrant, democratic community, but with much more powerful and insightful 21st century means than were available to her and her neighbors decades ago.

© 2017 Norman Jacknis, All Rights Reserved @NormanJacknis

Campaign Analytics: What Separates The Good From The Bad

Donald Trump, as a candidate for President last year, expressed great skepticism about the use of analytics in an election campaign. Hillary Clinton made a big deal about her campaign’s use of analytics. Before that, President Obama’s campaigns received great credit for their analytics.

If you compare these experiences, you can begin to understand what separates good from bad in campaign analytics.

Let’s start with the Clinton campaign, whose use of analytics was breathlessly reported, including this Politico story about “Hillary’s Nerd Squad” eighteen months before the election.

However, a newly released book, titled Shattered, provides a kind of autopsy of the campaign and its major weaknesses. A CBS News review of the book highlighted this weakness in particular:

“Campaign manager Robby Mook put a lot of faith in the campaign’s computer algorithm, Ada, which was supposed to give them a leg up in turning out likely voters. But the Clinton campaign’s use of the highly complex algorithm focused on ensuring voter turnout, rather than attracting voters from across party lines.

“According to the book, Mook was insistent that the software would be revered as the campaign’s secret weapon once Clinton won the White House. With his commitment to Ada and the provided data analytics, Mook often butted heads with Democratic Party officials, who were concerned about the lack of attention in persuading undecided voters in Clinton’s favor.  Those Democratic officials, as it turned out, had a point.”

image

Of course, this had become part of the conventional wisdom since the day after the election. For example, on November 9, 2016, the Washington Post had a story “Clinton’s data-driven campaign relied heavily on an algorithm named Ada. What didn’t she see?”:

“Ada is a complex computer algorithm that the campaign was prepared to publicly unveil after the election as its invisible guiding hand … the algorithm was said to play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads … The campaign’s deployment of other resources — including county-level campaign offices and the staging of high-profile concerts with stars like Jay Z and Beyoncé — was largely dependent on Ada’s work, as well.”

But the story had another point about Ada:

“Like the candidate herself, she had a penchant for secrecy and a private server … the particulars of Ada’s work were kept under tight wraps, according to aides. The algorithm operated on a separate computer server than the rest of the Clinton operation as a security precaution, and only a few senior aides were able to access it.”

While the algorithm clearly wasn’t the only or perhaps even the most important reason for the failure of the campaign, that last piece illustrates why the Clinton use of analytics wasn’t more successful. It had in common with many other failed analytics initiatives an atmosphere of secretiveness and arrogance – “we’re the smartest guys around here” so let us do our thing.

The successful uses of analytics, in campaigns or elsewhere, try to use (and then test) the best insights of the people with long experience in a field. Those experienced people can even help the analysts look at the right questions – in the case of the Clinton campaign, how to convert undecided voters.

The best analytics efforts are a two-way conversation that helps the “experts” to understand better which of their beliefs are still correct and helps the analytics staff to understand where they should be looking for predictive factors.

Again, analytics wasn’t the only factor that led to President Obama’s winning elections in 2008 and 2012, but the Obama campaign’s use of analytics felt different from Clinton’s. One article went “Inside the Obama Campaign’s Big Data Analytics Culture” and described “an archetypical story of an analytics-driven organization that aligned people, business processes and technologies around a clear mission” instead of focusing on the secret sauce and a top-down, often strife-filled, environment.

image

InfoWorld’s story about the 2012 campaign described a widely dispersed use of analytics –

“Of the 100 analytics staffers, 50 worked in a dedicated analytics department, 20 analysts were spread throughout the campaign’s various headquarters, and another 30 were in the field interpreting the data.” So, there was plenty of opportunity for analytics staffers to learn from others in the campaign.

And the organizational culture was molded to make this successful as well –

“barriers between disparate data sets – as well as between analysts – were lowered, so everyone could work together effectively. In a nutshell, the campaign sought a friction-free analytic environment.”

Obama’s successful use of analytics was a wake-up call to many politicians, Hillary Clinton included. But did they learn all the lessons of his success? Apparently not.

Coming back to the 2016 election, there is then the Trump campaign. Despite the candidate’s statements, his campaign also used analytics, employing Cambridge Analytica, the British firm that helped the Brexit forces to win in the UK. Thus, 2016 wasn’t as much of a test of analytics vs. no analytics as has sometimes been reported.

image

But, if an article, “The great British Brexit robbery: how our democracy was hijacked”, published two weeks ago in the British newspaper, the Guardian, is even close to the mark, there is a different question about the good and bad uses of analytics in both the Trump and Brexit campaigns. In part scary and perhaps in others too jaundiced, this story raises questions for the future – as analytic tools get better, will the people using those tools realize that they face ethical as well as merely technical challenges?

The good and bad use of analytics will not just be a question of whether the results are executed well or poorly – that is, whether the necessary changes and learning take place among all members of an organization. It will also be a question of whether analytics tools are being used in ways that are good or bad in an ethical sense.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

Analytics And Leading Change

Next week, I’m teaching the summer semester version of my Columbia University course called Analytics and Leading Change for the Master’s Degree program in Applied Analytics. While there are elective courses on change management in business and public administration schools, this combination of analytics and change is unusual. The course is also a requirement. Naturally, I’ve been asked: why?

The general answer is that analytics and change are intertwined.

Successfully introducing analytics into an organization shares all the difficulties of introducing any new technology, but more so. The impact of analytics – if successful – requires change, often deep change that can challenge the way that executives have long thought about the effect of what they were doing.

As a result, often the reaction to new analytics insights can be a kneejerk rejection, as one Forbes columnist asked last year in an article titled “Why Do We Frequently Question Data But Not Assumptions?”.

A good, but early, example of the impact of what we now call “big data” goes back twenty-five years, to the days before downloaded music.

Back then, the top 40 selections of music on the “air” were based on what radio DJs (or program directors) chose and, beyond that, the best information about market trends came from ad hoc observations reported by record store clerks. Those choices, too, emphasized new mainstream rock and pop music.

In 1991, in one of the earliest big data efforts in retail, a new company, SoundScan, came along and collected data from automated sales registers in music stores. What they found went against the view of the world that was then widely accepted: old music, like Frank Sinatra, and genres other than rock were very popular.

Music industry executives then had to change the way they thought about the market and many of them didn’t. This would happen again when streaming music came along. (For more on this bit of big data history, see https://en.wikipedia.org/wiki/Nielsen_SoundScan and http://articles.latimes.com/1991-12-08/entertainment/ca-85_1_sales-figures .)

A somewhat more recent example is the way that insights from analytics have challenged some of the traditional assumptions about motivation that are held by many executives and many staff in corporate human resource departments. Tom Davenport’s Harvard Business Review article in 2010 on “Competing on Talent Analytics” provides a good review of what can be learned, if executives are willing to learn from analytics.

The first, larger lesson is: If the leaders of analytics initiatives don’t understand the nature of the changes they are asking of their colleagues, then those efforts will end up being nice research reports and the wonderful insights generated by the analysts will disappear without impact or benefit to their organizations.

The other side of the coin and the second reason that analytics and change leadership are intertwined is a more positive one. Analytics leaders have a potential advantage over other “change agents” in understanding how to change an organization. They can use analytics tools to understand what they’re dealing with and thus increase the likelihood that the change will stick.

For instance, with the rise of social networks on the internet, network analytics methods have developed to understand how the individuals in a large group of people influence each other. Isn’t that also an issue in understanding the informal, perhaps the real, structure of an organization which the traditional organization charts don’t illuminate?
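To make the idea concrete, here is a minimal, dependency-free sketch of that kind of network analysis. The names and communication ties are entirely invented for illustration; a real effort would mine email or collaboration logs and likely use a full graph library with richer centrality measures.

```python
# Sketch: infer informal influence in an organization from a
# hypothetical "who talks to whom" edge list. All names and ties
# below are invented for illustration.
from collections import Counter

ties = [
    ("Ana", "Ben"), ("Ana", "Cho"), ("Ana", "Dev"),
    ("Ben", "Cho"), ("Dev", "Eli"),
]

# Degree centrality: how many distinct colleagues each person
# communicates with. High scores suggest informal hubs that the
# official org chart may not show.
degree = Counter()
for a, b in ties:
    degree[a] += 1
    degree[b] += 1

ranking = degree.most_common()
print(ranking)  # Ana tops the list with 3 ties
```

Even this crude count surfaces the people the formal hierarchy ignores; measures like betweenness centrality would go further and flag those who bridge otherwise separate groups.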

In another, if imperfect example, the Netherlands office of Deloitte created a Change Adoption Profiler to help leaders figure out the different reactions of people to proposed changes.

Unfortunately, leaders of analytics in many organizations too infrequently use their own tools to learn what they need to do and how well they are doing it. Pick your motto about this – “eat your own dogfood” or “physician, heal thyself” or whatever – but you get the point.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

Augmented Reality Rising

Last week, I gave a presentation at the Premier CIO Summit in Connecticut on the Future of User Interaction With Technology, especially the combined effects of developments in communicating without a keyboard, augmented reality (AR) and machine learning.  I’ve been interested in this for some time and have written about AR as part of the Wearables movement and what I call EyeTech.

First, it would help to distinguish these digital realities. In virtual reality, a person is placed in a completely virtual world, eyes fully covered by a VR headset – it’s 100% digital immersion. It is ideal for games, space exploration, and movies, among other yet to be created uses.

With augmented reality, there is a digital layer that is added onto the real physical world. People look through a device – a smartphone, special glasses and the like – that still lets them see the real things in front of them.

Some experts make a further distinction by talking about mixed reality in which that digital layer enables people to control things in the physical environment. But again, people can still see and navigate through that physical environment.

When augmented reality first became possible, especially on smartphones, there were a variety of interesting but not widespread uses. A good example is the way that some locations could show the history of what happened in a building long ago, so-called “thick-mapping”.

There were business cards that could pop up an introduction and a variety of ancillary information that can’t fit on a card, as in this video.

There were online catalogs that enabled consumers to see how a product would fit in their homes. These videos from Augment and Ikea are good examples of what’s been done in AR.

Now, a few years later, this audience was very interested in learning about and seeing what’s going on with augmented reality. And why not? After a long time under the radar, or in the shadow of virtual reality hype, interest in augmented (and mixed) reality is accelerating.

Although it was easy to satirize the players in last year’s Pokémon Go craze, that phenomenon brought renewed attention to augmented reality via smart phones.

Just in the last couple of weeks, Mark Zuckerberg at the annual Facebook developers conference stated that he thinks augmented reality is going to have tremendous impact and he wants to build the ecosystem for it. See https://www.nytimes.com/2017/04/18/technology/mark-zuckerberg-sees-augmented-reality-ecosystem-in-facebook.html

As the beginning of the article puts it:

“Facebook’s chief executive, Mark Zuckerberg, has long rued the day that Apple and Google beat him to building smartphones, which now underpin many people’s digital lives. Ever since, he has searched for the next frontier of modern computing and how to be a part of it from the start.

“Now, Mr. Zuckerberg is betting he has found it: the real world. On Tuesday, Mr. Zuckerberg introduced what he positioned as the first mainstream augmented reality platform, a way for people to view and digitally manipulate the physical world around them through the lens of their smartphone cameras.”

And shortly before that, an industry group – UI LABS and The Augmented Reality for Enterprise Alliance (AREA) – united to plot the direction and standards for augmented reality, especially now that the applications are taking off inside factories, warehouses and offices, as much as in the consumer market. See http://www.uilabs.org/press/manufacturers-unite-to-shape-the-future-of-augmented-reality/

Of course, HoloLens from Microsoft continues to provide all kinds of fascinating uses of augmented reality as these examples from a medical school or field service show.

Looking a bit further down the road, the trend that will make this all the more impactful for CIOs and other IT leaders is how advances in artificial intelligence (even affective computing), the Internet of Things and analytics will provide a much deeper digital layer that will truly augment reality. This then becomes part of a whole new way of interacting with and benefiting from technology.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

Interactivity For An Urban Digital Experience

This is the third and last of a series of posts about a new urban digital experience in the streets of Yonkers, New York. [To read the previous posts, click on part1 and part2.]

As a reminder, the two main goals of this project are:

  • To enhance the street life of the city by offering delightful destinations and interesting experiences – a new kind of urban design
  • To engage, entertain, educate and reinforce the image of Yonkers as an historic center of innovation and to inspire the creativity of its current residents

We started out with a wide variety of content that entertains, educates and reinforces the residents’ understanding of their city. As the City government takes over full control of this, the next phase will be about deepening the engagement and interactivity with pedestrians – what will really make this a new tool of urban design.

This post is devoted to just a few of the possible ways that a digital experience on the streets can become more interactive.

First, a note about equipment and software. I’ve mentioned the high-quality HD projectors and outdoor speakers. I haven’t mentioned the cameras that are also installed. Those cameras have been used so far to make sure that the system is operating properly. But the best use of cameras is to observe – and, with the proper software, analyze – what people are doing when they see the projections or hear something.

The smartphones that people carry as they pass by also allow them to communicate via websites, social media or even their movement.

With all this in place, it helps to think of what can happen in these four categories:

  1. Contests
  2. Control of Text
  3. Physical Interaction
  4. Teleportation

Contests

What’s your favorite part of the city? Show a dozen or so pictures and let people vote on them – and show real time results. It’s not a deeply significant engagement, but it will bring people out to show support for their area or destination.

Or people can be asked: what are your top choices in an amateur poetry contest (which only requires audio) or the best photography of the waterfront or a beautiful park or the favorite item that has been 3D printed inside the library’s makerspace? Or???

Even the content itself can be assessed in this way. We can ask passersby to provide thumbs up or down for what it is they are seeing at that moment. (Since the schedule of content is known precisely, we would also know what each person was referring to.)

People could vote on what kind of music they would want to hear at the moment, like an outdoor jukebox, or on what videos they might want to see at the moment.

Contests of this kind are a pretty straightforward use of either smartphones or physical gestures. Cameras can detect when people point to something to make a choice. SMS texting can also register votes, and the nice thing about SMS here is that no one has to edit or censor what people write, since voters can only select among the (usually numerical) choices they’re given. SMS voting can be supplemented with voting on a website.
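As a sketch of how such schedule-based SMS voting could be tallied, consider the following. The schedule entries, timestamps and vote codes are all hypothetical; a real system would receive messages through an SMS gateway rather than a hard-coded list.

```python
# Sketch: tally numeric SMS votes against a known content schedule.
# Because we know what was on screen at every moment, a vote's
# timestamp tells us which item it refers to -- no free text needed.
from collections import Counter
from datetime import datetime

# Hypothetical schedule: (start time, content item shown).
schedule = [
    (datetime(2017, 6, 1, 19, 0), "waterfront photo"),
    (datetime(2017, 6, 1, 19, 5), "poetry reading"),
    (datetime(2017, 6, 1, 19, 10), "makerspace 3D print"),
]

def item_on_screen(ts):
    """Return the content item playing at timestamp ts."""
    current = schedule[0][1]
    for start, item in schedule:
        if ts >= start:
            current = item
    return current

# Hypothetical incoming votes: (timestamp, "1" = thumbs up, "2" = down).
votes = [
    (datetime(2017, 6, 1, 19, 2), "1"),
    (datetime(2017, 6, 1, 19, 6), "1"),
    (datetime(2017, 6, 1, 19, 7), "2"),
]

tally = Counter((item_on_screen(ts), choice) for ts, choice in votes)
print(tally)
```

The same tally logic works whether the votes arrive by text message, website or gesture detection, since each channel only has to deliver a choice number and a time.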

Control Of Text

Control implies that the person in front of a site can control what’s there merely by typing some text on a smart phone – or eventually by speaking to a microphone that is backed by speech recognition software.

People can ask about the history of people who have moved to Yonkers by typing in a family name, which then triggers an app that searches the local family database.

This kind of interaction requires that someone or a service provides basic editing of the text provided by people (i.e., censorship of words and ideas not appropriate for a site frequented by the general public).
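A minimal sketch of that kind of basic screening might look like the following. The blocklist entries and rules here are placeholders for illustration; a real installation would rely on a maintained moderation word list or service, plus human review.

```python
# Sketch: screen typed submissions before showing them in public.
# The blocklist and length limit are illustrative placeholders.
import re

BLOCKLIST = {"badword", "slur"}  # placeholder entries only

def screen(text, max_len=40):
    """Allow short, name-like submissions with no blocked words."""
    text = text.strip()
    if not text or len(text) > max_len:
        return None  # empty or too long for the display
    if not re.fullmatch(r"[A-Za-z' -]+", text):
        return None  # restrict to letters, spaces, hyphens, apostrophes
    if any(w.lower() in BLOCKLIST for w in text.split()):
        return None  # contains a blocked word
    return text

print(screen("Jacknis"))   # an acceptable family-name query
print(screen("badword"))   # rejected
```

For a narrow use like the family-name lookup, an even safer design is to match input only against the database itself, so nothing a visitor types is ever displayed verbatim.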

Physical Interaction

With software that can understand or at least react to the movement of human hands, feet and bodies, there are all kinds of possible ways that people can interact with a blended physical/digital environment.

In a place like Getty Square where the projectors point down to the ground, it’s possible to show dance steps. Or people can modify an animation or visual on a wall by waving their arms in a particular way.

Originally in Australia, but now elsewhere, stairs have been digitized so that they play musical notes when people walk on them. These “piano stairs” are relatively easy to create and don’t actually need to be stairs at all – the same effect can be created on a flat surface, and it doesn’t have to generate piano sounds only.

In Eindhoven, the Netherlands, there is an installation called Lightfall, where a person’s movements control the lighting. See https://vimeo.com/192203302

Pedestrians could even become part of the visual on a wall and, using augmented reality, even be transformed – say, into the founder of the city, dressed in period clothes. Again, the only limit is the creativity of those involved in designing these opportunities.

Teleportation

The last category I’m calling teleportation, although it’s not really what we’ve seen in Star Trek. Instead with cameras, microphones, speakers and screens in one city and a companion setup in another, it would be possible for people in both places to casually chat as if they were on neighboring benches in the same park.

In this way, the blending of the physical and digital provides the residents with a “window” to another city.

I hope this three-part series has given city leaders and others who care about the urban environment a good sense of how to make 21st century blended environments – how they might start with available content and then go beyond that to interaction with people walking by.

Of course, even three blog posts are limited, so feel free to contact me @NormanJacknis for more information and questions.

© 2017 Norman Jacknis, All Rights Reserved