Are You Looking At The Wrong Part Of The Problem?

In business, we are frequently told that to build a successful company we have to find an answer to the customer’s problem. In government, the equivalent guidance to public officials is to solve the problems faced by constituents. This is good guidance, as far as it goes, except that we need to know what the problem really is before we can solve it.

Before those of us who are results-oriented problem solvers jump into action, we need to make sure that we are looking at the right part of the problem. And that’s what Dan Heath’s new book, “Upstream: The Quest To Solve Problems Before They Happen”, is all about.

Heath, along with his brother Chip, has brought us such useful books as “Made To Stick: Why Some Ideas Survive and Others Die” and “Switch: How to Change Things When Change Is Hard”.

As usual for a Heath book, it is well written and down to earth, but contains important concepts and research underneath the accessible writing.

He starts with a horrendous, if memorable, story about kids:

You and a friend are having a picnic by the side of a river. Suddenly you hear a shout from the direction of the water — a child is drowning. Without thinking, you both dive in, grab the child, and swim to shore. Before you can recover, you hear another child cry for help. You and your friend jump back in the river to rescue her as well. Then another struggling child drifts into sight…and another…and another. The two of you can barely keep up. Suddenly, you see your friend wading out of the water, seeming to leave you alone. “Where are you going?” you demand. Your friend answers, “I’m going upstream to tackle the guy who’s throwing all these kids in the water.”


Going upstream is necessary to solve the problem at its origin — hence the name of the book. The examples in the book range from important public, governmental problems to the problems of mid-sized businesses. While the most dramatic examples are about saving lives, the book is also useful for the less dramatic situations in business.

Heath’s theme is strongly, but politely, stated:

“So often we find ourselves reacting to problems, putting out fires, dealing with emergencies. We should shift our attention to preventing them.”

This reminds me of a less delicate version of this advice: “When you’re up to your waist in alligators, it’s hard to find time to drain the swamp.” And I often told my staff that unless you take some time to start draining the swamp, you are always going to be up to your waist in alligators.

He elaborates and then asks a big question:

We put out fires. We deal with emergencies. We stay downstream, handling one problem after another, but we never make our way upstream to fix the systems that caused the problems. Firefighters extinguish flames in burning buildings, doctors treat patients with chronic illnesses, and call-center reps address customer complaints. But many fires, chronic illnesses, and customer complaints are preventable. So why do our efforts skew so heavily toward reaction rather than prevention?

His answer is that, in part, organizations have been designed to react — what I called some time ago the “inbox-outbox” view of a job. Get a problem, solve it, and then move to the next problem in the inbox.

Heath identifies three causes that lead people to focus downstream, not upstream where the real problem is.

  • Problem Blindness — “I don’t see the problem.”
  • A Lack of Ownership — “The problem isn’t mine to fix.”
  • Tunneling — “I can’t deal with the problem right now.”

In turn, these three primary causes lead to, and are reinforced by, a fatalistic attitude that bad things will happen and there is nothing you can do about them.

Ironically, success in fixing a problem downstream is often a mark of heroic achievement. Perhaps for that reason, people will jump in to own the emergency downstream, but there are fewer owners of the problem upstream.

…reactive efforts succeed when problems happen and they’re fixed. Preventive efforts succeed when nothing happens. Those who prevent problems get less recognition than those who “save the day” when the problem explodes in everyone’s faces.

Consider the all-too-common current retrospective on the Y2K problem. Since the problem didn’t turn out to be the disaster it could have been at the turn of the year 2000, some people have decided it wasn’t real after all. It was real, but the issue was dealt with upstream by massive correction and replacement of out-of-date software.

Heath realizes that it is not simple for a leader with an upstream orientation to solve the problem there, rather than wait for the disaster downstream.

He asks leaders first to think about seven questions, which he explores through many cases:

  • How will you get early warning of the problem?
  • How will you unite the right people to assess and solve the problem?
  • Where can you find a point of leverage?
  • Who will pay for what does not happen?
  • How will you change the system?
  • How will you know you’re succeeding?
  • How will you avoid doing harm?

Some of these questions, and an understanding of what the upstream problem really is, can start to be answered by the intelligent use of analytics. That too only complicates the issue for leaders, since an instinctive heroic reaction is much sexier than contemplating machine learning models, and sexy usually beats out wisdom 🙂

Eventually Heath makes the argument that not only do we often focus on the wrong end of the problem, but that we think about the problem too simplistically. At that point in his argument, he introduces the necessity of systems thinking because, especially upstream, you may find a set of interrelated factors and not a simple one-way stream.

[To be continued in the next post.]

© 2020 Norman Jacknis, All Rights Reserved

Technology and Trust

A couple of weeks ago, along with the Intelligent Community Forum (ICF) co-founder, Robert Bell, I had the opportunity to be in a two-day discussion with the leaders of Tallinn, Estonia — via Zoom, of course. As part of ICF’s annual selection process for the most intelligent community of the year, the focus was on how and why they became an intelligent community.

They are doing many interesting things with technology both for e-government as well as more generally for the quality of life of their residents. One of their accomplishments, in particular, has laid the foundation for a few others — the strong digital identities (and associated digital signatures) that the Estonian government provides to their citizens. Among other things, this enables paperless city government transactions and interactions, online elections, COVID contact warnings along with protection/tracking of the use of personal data.

Most of the rest of the world, including the US, does not have strong, government-issued digital identities. The substitutes for that don’t come close — showing a driver’s license at a store in the US or using some third party logon.

Digital identities have also enabled an E-Residency program for non-Estonians, now used by more than 70,000 people around the world.

As they describe it, in this “new digital nation … E-Residency enables digital entrepreneurs to start and manage an EU-based company online … [with] a government-issued digital identity and status that provides access to Estonia’s transparent digital business environment”

This has also encouraged local economic growth because, as they say, “E-Residency allows digital entrepreneurs to manage business from anywhere, entirely online … to choose from a variety of trusted service providers that offer easy solutions for remote business administration.” The Tallinn city leaders also attribute the strength of a local innovation and startup ecosystem to this gathering of talent from around the world.

All this would be a great story, unusual in practice although not unheard of in discussions among technologists, including this one. Yet, as impressive as all that is, it was not what stood out most strongly in the discussion. What stood out was Tallinn’s unconventional perspective on the important issue of trust.

Trust among people is a well-known foundation for society and government in general. It is also essential for those who wish to lead change, especially the kind of changes that result from the innovations we are creating in this century.

I often hear various solutions to the problem of establishing trust through the use of better technology — in other words, the belief that technology can build trust.

In Tallinn’s successful experience with technology, cause and effect run in the opposite direction. In Tallinn, successful technology is built on trust among people, trust that already existed and is continually maintained regardless of technology.

While well-thought-out technology can also enhance trust to an extent, in Tallinn, trust comes first.

This is an important lesson to keep in mind for technologists who are going about changing the world and for government leaders who look on technology as some kind of magic wand.

More than once in our discussions, Tallinn’s leaders restated an old idea that preceded the birth of computers: few things are harder to earn and easier to lose than trust.

© 2020 Norman Jacknis, All Rights Reserved

Bitcoin & The New Freedom Of Monetary Policy

Every developing technology has the potential for unintended consequences.  Blockchain technology is an example.  Although there are many possible uses of blockchain as a generally trusted and useful distributed approach to storing data, its most visible application has been virtual or crypto-currencies, such as Bitcoin, Ethereum and Litecoin. These once-obscure crypto-currencies are on a collision course with another trend that in its own way is based on technology — mostly digital government-issued money.


In particular, another once-obscure idea about government money is also moving more into the mainstream — modern monetary theory (MMT), which I mentioned a few weeks ago in my reference to Stephanie Kelton’s new book, “The Deficit Myth”. In doing a bit of follow-up on the subject, I came across many articles that were critical of MMT. Some were from mainstream economists. Many more were from advocates of crypto-currencies, especially Bitcoiners.

Although I doubt that Professor Kelton would agree, many Bitcoiners feel that governments have been using MMT since the 1970s — merely printing money. They forget about the tax and policy stances that Kelton advocates.

Moreover, there is a significant difference in the attitude of public leaders when they think they are printing money versus borrowing it from large, powerful financial interests. James Carville, chief political strategist and guru for President Clinton famously said, “I used to think that if there was reincarnation, I wanted to come back as the president or the pope or as a .400 baseball hitter. But now I would like to come back as the bond market. You can intimidate everybody.”

For Bitcoiners, the battle is drawn and they do not like MMT, as even a quick scan of the hostile headlines from the last year or so makes clear.

It is worth noting that MMT raises very challenging issues of governance. Who decides how much currency to issue? Who decides when there is too much currency? Who decides what government-issued money is spent on and to whom it goes? This is especially relevant in the US, where the central bank, the Federal Reserve, is at least in theory independent from elected leaders.

However, it also gives the government what may be a necessary tool to keep the economy moving during recessions, especially major downturns. Would a future dominated by cryptocurrencies, like Bitcoin, essentially tie the hands of the government in the face of an economic crisis? — just as the gold standard did during the Panic of 1893 and the Great Depression (until President Roosevelt suspended the convertibility of dollars into gold)?

Picture MMT as a faucet controlling the flow of money as the needs of the economy change. If this were a picture of Bitcoin’s role, the faucet would be almost frozen, dripping a relatively fixed amount that is determined by the Bitcoin mining schedule.
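That near-frozen drip is not just a metaphor; Bitcoin’s supply schedule is fixed in the protocol itself. Below is a simplified sketch of that rule — the per-block reward halves every 210,000 blocks. (The real consensus code works in integer satoshis; floats are used here only for readability.)

```python
# Simplified sketch of Bitcoin's fixed issuance rule: the per-block reward
# starts at 50 BTC and halves every 210,000 blocks (roughly every four
# years), regardless of economic conditions.
def block_subsidy(height: int) -> float:
    """New bitcoin minted per block at a given block height."""
    halvings = height // 210_000
    if halvings >= 64:  # after 64 halvings the subsidy is effectively zero
        return 0.0
    return 50.0 / (2 ** halvings)

for height in (0, 210_000, 630_000):
    print(f"block {height:>7,}: {block_subsidy(height)} BTC")
```

No committee, central bank or legislature can turn that faucet; only a change to the protocol itself could.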

Less often discussed is that cryptocurrencies, as a practical matter, also end up needing some governance. I am not going to get into the weeds on this, but you can start with “In Defense of Szabo’s Law, For a (Mostly) Non-Legal Crypto System”. The implication is that cryptocurrencies need some kind of rules and laws enforced by some people. Sounds like at least a little bit of government to me.

Putting that aside, if Bitcoin and/or other cryptocurrencies succeed in getting widespread adoption, then it would seem that they would limit the ability of governments to encourage or discourage economic growth through the issuance of money.

Of course, some officials do not seem to worry too much. This attitude is summed up in a European Parliament report, published in 2018.

Decentralised ledger technology has enabled cryptocurrencies to become a new form of money that is privately-issued, digital and that permits peer-to-peer transactions. However, the current volume of transactions in such cryptocurrencies is still too small to make them serious contenders to replace official currencies. 

Underlying this are two factors. First, cryptocurrencies do not perform the role of money well, because their value is very volatile and they are thus not very good stores of value. Second, cryptocurrencies are managed in ways that are very primitive compared to what modern currencies require.

These shortcomings might be corrected in the future to increase the popularity and reach of cryptocurrencies. However, those that manage currencies, in other words monetary policymakers, cannot be outside any societal system of checks and balances.

For cryptocurrencies to replace official money, they would have to conform to the institutional set up that monitors and evaluates those who have the power to manage money.

They do not seem to be too worried, do they? However, cryptocurrency might eventually derail the newfound freedom that government economic policy makers have realized they have through MMT.

As we have seen in the past, new technologies can suddenly grow very fast and blindside public officials. As Roy Amara, past president of The Institute for the Future, said, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”.

© 2020 Norman Jacknis, All Rights Reserved

The Second Wave Of Capital

I have been doing research about the future impact of artificial intelligence on the economy and the rest of our lives. With that in mind, I have been reading a variety of books by economists, technologists, and others. That is why I recently read “Capital and Ideology” by Thomas Piketty, the well-known French economist and author of the best-selling (if not well-read) “Capital in the Twenty-First Century”. It contains a multi-national history of inequality, why it happened and why it has continued, mostly uninterrupted.

At more than 1100 pages, it is a tour de force of economics, history, politics and sociology. In considerable detail, for every proposition, he provides reasonable data analyses, which is why the book is so long. While there is a lot of additional detail in the book, many of the themes are not new, in part because of Piketty’s previous work.  As with his last book, much of the commentary on the new book is about income and wealth inequality.  This is obviously an important problem, although not one that I will discuss directly here.

Instead, although much of the focus of the book is on capital in the traditional sense of money and ownership of things, it was his two main observations about education – what economists call human capital – that stood out for me. The impact of a second wave and a second kind of capital is two-fold.

  1. Education And The US Economy

From the mid-nineteenth century until about a hundred years later, the American population had twice the educational level of people in Europe. And this was exactly the same period in which the American economy surpassed the economies of the leading European countries. During the last several decades, the American population has fallen behind in education, and this is the same period in which American incomes have stagnated.  It is obviously difficult to tease out the effect of one factor like education, but clearly there is a big hint in these trends.

As Piketty writes in Chapter 11:

The key point here is that America’s educational lead would continue through much of the twentieth century. In 1900–1910, when Europeans were just reaching the point of universal primary schooling, the United States was already well on the way to generalized secondary education. In fact, rates of secondary schooling, defined as the percentage of children ages 12–17 (boys and girls) attending secondary schools, reached 30 percent in 1920, 40–50 percent in the 1930s, and nearly 80 percent in the late 1950s and early 1960s. In other words, by the end of World War II, the United States had come close to universal secondary education.

At the same time, the secondary schooling rate was just 20–30 percent in the United Kingdom and France and 40 percent in Germany. In all three countries, it is not until the 1980s that one finds secondary schooling rates of 80 percent, which the United States had achieved in the early 1960s. In Japan, by contrast, the catch-up was more rapid: the secondary schooling rate attained 60 percent in the 1950s and climbed above 80 percent in the late 1960s and early 1970s.

In the second Industrial Revolution it became essential for growing numbers of workers to be able to read and write and participate in production processes that required basic scientific knowledge, the ability to understand technical manuals, and so on.

That is how, in the period 1880–1960—first the United States and then Germany and Japan, newcomers to the international scene—gradually took the lead over the United Kingdom and France in the new industrial sectors. In the late nineteenth and early twentieth centuries, the United Kingdom and France were too confident of their lead and their superior power to take the full measure of the new educational challenge.

How did the United States, which pioneered universal access to primary and secondary education and which, until the turn of the twentieth century, was significantly more egalitarian than Europe in terms of income and wealth distribution, become the most inegalitarian country in the developed world after 1980—to the point where the very foundations of its previous success are now in danger? We will discover that the country’s educational trajectory—most notably the fact that its entry into the era of higher education was accompanied by a particularly extreme form of educational stratification—played a central role in this change.

In any case, as recently as the 1950s inequality in the United States was close to or below what one found in a country like France, while its productivity (and therefore standard of living) was twice as high. By contrast, in the 2010s, the United States has become much more inegalitarian while its lead in productivity has totally disappeared.

  2. The Political Competition Between Two Elites

By now, most Americans who follow politics understand that the Democratic Party has become the favorite of the educated elite, in addition to the votes from minority groups. This coalition completely reverses what had been true of educated voters in most of the last century, who were reliable Republican voters. In the process, the Democratic Party has lost much of its working-class base.

The Republicans have been the party of the economic elite, although since the 1970s some of the working-class have joined in, especially those reacting to increased immigration and civil rights movements.

What Piketty points out is that, in this transition, working-class and lower income people have decreased their political participation, especially voting. He thinks that is because these voters felt that the Democratic Party has been taken over by the educational elite and no longer speaks for them.

What many Americans may not have realized is that this same phenomenon has happened in other economically advanced democracies, such as the UK and France. Over the longer run, Piketty wonders whether such an electoral competition between parties both dominated by elites can be sustained – or whether the voiceless will seek violence or other undemocratic outlets for their political frustrations.

In Chapter 14, he notes that, at the same time that the USA has lost the edge arising from a better-educated population, it and the other advanced economies that have now matched or surpassed the American educational level have elevated education to a position of political power.

We come now to what is surely the most striking evolution in the long run; namely, the transformation of the party of workers into the party of the educated.

Before turning to explanations, it is important to emphasize that the reversal of the educational cleavage is a very general phenomenon. What is more, it is a complete reversal, visible at all levels of the educational hierarchy. We find exactly the same profile—the higher the level of education, the less likely the left-wing vote—in all elections in this period, in survey after survey, without exception, and regardless of the ambient political climate. Specifically, the 1956 profile is repeated in 1958, 1962, 1965, and 1967.

Not until the 1970s and 1980s does the shape of the profile begin to flatten and then gradually reverse. The new norm emerges with greater and greater clarity as we move into the 2000s and 2010s. With the end of Soviet communism and bipolar confrontations over private property, the expansion of educational opportunity, and the rise of the “Brahmin left,” the political-ideological landscape was totally transformed.

Within a few years the platforms of left-wing parties that had advocated nationalization (especially in the United Kingdom and France), much to the dismay of the self-employed, had disappeared without being replaced by any clear alternative.

A dual-elite system emerged, with on one side, a “Brahmin left,” which attracted the votes of the highly educated, and on the other side, a “merchant right,” which continued to win more support from both highly paid and wealthier voters.

This clearly provides some context for what we have been seeing in recent elections.  And although he is not the first to highlight this trend, the evidence that he marshals is impressive.

Considering how much there is in the book, it is not likely anyone, including me, would agree with all of the analysis. In addition to the analysis, Piketty goes on to propose various changes in taxation and laws, which I will discuss in the context of other writers in a later blog. For now, I would only add that other economists have come to some of the same suggestions as Piketty, although they have completed a very different journey from his.

For example, Daniel Susskind in “A World Without Work” is concerned that a large number of people will not be able to make a living through paid work because of artificial intelligence. The few who do get paid and those who own the robots and AI systems will become even richer as most everyone else becomes poorer. This blends with Piketty’s views and they end up in the same place – a basic citizen’s income and even a basic capital allotment to each citizen, taxation on wealth, estate taxes, and the like.

We will have much to explore about these and other policy issues arising from the byproducts of our technology revolution in this century.

© 2020 Norman Jacknis, All Rights Reserved

Are Computers Learning On Their Own?

To many people, the current applications of artificial intelligence, like your car being able to detect where a lane ends, seem magical. Although most of the significant advances in AI have been in supervised learning, it is the idea that the computer is making sense of the world on its own — unsupervised — which intrigues people more.

If you’ve read some about artificial intelligence, you may often see a distinction between supervised and unsupervised learning by machine. (There are other categories too, but these are the big two.)

In supervised learning, the machine is taught by humans what is right or wrong — for example, who did or did not default on a loan — and it eventually figures out what characteristics would best predict a default.

Another example is asking the computer to identify whether a picture shows a dog or a cat. In supervised learning, a person identifies each picture and then the computer figures out the best way to distinguish between each — perhaps whether the animal has floppy ears 😉
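To make that concrete, here is a minimal sketch of supervised learning in Python with scikit-learn. The features (ear floppiness, body weight) and the tiny dataset are invented for illustration; a real system would learn from thousands of labeled images.

```python
# A minimal sketch of supervised learning: humans supply the labels,
# the model learns which features predict them. Features and data are
# invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: [ear_floppiness (0-1), body_weight_kg] -- hypothetical features
X = [[0.9, 30], [0.8, 25], [0.7, 8], [0.1, 4], [0.2, 5], [0.3, 3]]
y = ["dog", "dog", "dog", "cat", "cat", "cat"]  # human-supplied labels

model = RandomForestClassifier(random_state=0).fit(X, y)
print(model.predict([[0.85, 20]]))  # -> ['dog'], most likely
print(model.feature_importances_)   # a crude peek inside the opaque model
```

That last line hints at the opacity issue discussed next: even for this toy model, understanding what actually drives its predictions takes extra work.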

Even though the machine gets quite good at correctly doing this, the underlying model of what predicts these results is often opaque. Indeed, one of the hot issues in analytics and machine learning these days is how humans can uncover and almost “reverse engineer” the model the machine is using.

In unsupervised learning, the computer has to figure out for itself how to divide a group of pictures or events or whatever into various categories. Then the next step is for the human to figure out what those categories mean. Since it is subject to interpretation, there is no truly accurate and useful way to describe the categories, although people try. That’s how we get psychographic categories in marketing or equivalent labels, like “soccer moms”.

Sometimes the results are easy for humans to figure out, but not exactly earth shattering, like in this cartoon.

https://twitter.com/athena_schools/status/1063013435779223553

In the case of the computer that is given a set of pictures of cats and dogs to determine what might be the distinguishing characteristics, we (people) would hope that computer would figure out that there are dogs and cats. But it might instead classify them based on size — small animals and big animals — or based on the colors of the animals.

This all sounds like it is unsupervised. Anything useful that the computer determines is thus part of the magic.

How Unsupervised Is Unsupervised Machine Learning?

Except, in some of the techniques of unsupervised learning, especially in cluster analysis, a person is asked to determine how many clusters or groups there might be. This too limits and supervises the learning by the machine. (Think about how much easier it is to be right in Who Wants To Be A Millionaire if the contestant can narrow down the choices to two.)
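Here is an equally minimal sketch of the “unsupervised” case, again with invented measurements. Notice the quiet supervision: a person chooses n_clusters, and a person must still decide what the resulting groups mean.

```python
# A minimal sketch of "unsupervised" clustering. No labels are given,
# but a human still picks the number of clusters -- and must interpret
# the groups afterward. Measurements are invented for illustration.
from sklearn.cluster import KMeans

# Each row: [body_weight_kg, coat_darkness (0-1)] -- hypothetical features
X = [[30, 0.2], [25, 0.3], [8, 0.8], [4, 0.7], [5, 0.9], [3, 0.1]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [1 1 0 0 0 0]: did it find species, or just size?
```

In this toy run the machine would likely split the animals by weight, lumping an 8 kg dog in with the cats — exactly the kind of plausible-but-unintended grouping described above.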

Even more important, the computer can only learn from the data that it is given. It would have problems if pictures of a bunch of elephants or firetrucks were later thrown into the mix. Thus, the human being is at least partially supervising the learning and certainly limiting it.  The machine’s model is subject to the limitations and biases of the data that it learned on.

Truly unsupervised learning would occur the way that it does for children. They are let out to observe the world and learn patterns, often without any direct assistance from anyone else. Even with over-scheduling by helicopter parents, children can often freely roam the earth and discover new data and experiences.

Similarly, to have true unsupervised learning of machines, they would have to be able to travel and process the data they see.

At the beginning of his book Life 3.0, Max Tegmark weaves a sci-fi tale about a team that built an AI called Prometheus. While it wasn’t directly focused on unsupervised classification, Prometheus was unsupervised and learned on its own. It eventually learned enough to dominate all mankind. But even in this fantasy world, its unsupervised escape only enabled the AI machine to roam the internet, which is not quite the same thing as real life after all.

It is likely, for a while longer, that a significant portion of human behavior will occur outside of the internet 🙂

(And, as we saw with Microsoft’s chatbot Tay, an AI can also learn some unfortunate and incorrect things on the open internet.)

While not quite letting robots roam free in the real world, researchers at Stanford University’s Vision and Learning Lab “have developed iGibson, a realistic, large-scale, and interactive virtual environment within which a robot model can explore, navigate, and perform tasks.” (More about this at A Simulated Playground for Robots)

https://time.com/3983475/hitchbot-assault-video-footage/

A few years ago, there was HitchBOT, which hitchhiked across Canada and then attempted the same in the US, although I don’t think that it added to its knowledge along the way, and it eventually met up with some nasty humans. (For more see here and here.)


Perhaps self-driving cars or walking robots will eventually be able to see the world freely as we do. Ford Motor Company’s proposed delivery robot roams around, but it is not really equipped for learning. The traveling, learning machine will likely require a lot more computing power and time than we currently use in machine learning.

Of course, there is also work on the computing part of the problem, as this July 21st headline shows, “Machines Can Learn Unsupervised ‘At Speed Of Light’ After AI Breakthrough, Scientists Say.” But that’s only the computing part of the problem and not the roaming-around-the-world part.

These more recent projects are evidence that the AI researchers realize their models are not being built in a truly unsupervised way. Despite the hoped-for progress of these projects, for now, that is why data scientists need to be careful how they train and supervise a machine even in unsupervised learning mode.

© 2020 Norman Jacknis, All Rights Reserved

Why Being Virtually There Is Virtually There

If you work in a factory or somewhere else that requires you to touch things or people, the COVID shutdowns and social distancing have clearly been a difficult situation to overcome.

But it seems that the past few months have also been very trying for many people who worked in office settings before COVID set in.  The Brady Bunch meme captured this well.  However, to me, that’s less a reflection of reality than of a lack of imagination and experience.

I’m in the minority of folks who have worked remotely for more than ten years.  By now, I’ve forgotten some of the initial hiccups in doing that.  Also, the software, hardware and bandwidth have gotten so much better that the experience is dramatically better than when I started.

So, I’m a little flummoxed by some of what I hear from remote working newbies.  First off, of course, is the complaint that people can’t touch and hug their co-workers anymore.  Haven’t they been to training about inappropriate touching and how some of these physical interactions can come off as harassment?  Even if these folks were in the office, I doubt they would really be going around making physical contact with co-workers.

Then there is the complaint about how much can be missed in communication when conversations are limited to text messages and emails.  That complaint is correct.  But why is there an assumption that communication is limited to text?  If you had a meeting in a conference room or went to someone’s office for a talk, why can’t you do the same thing via videoconference?

(My own experience is that remote work requires video to be successful because of the importance of non-text elements of human communication.  That’s why I’m assuming that the virtual communication is often via video.)

In the office you could drop by.  Users of Zoom and similar programs are often expected to schedule meetings, but that’s not a requirement.  You can turn on Zoom and, just like in an office, others can connect to you when you want.  They’ll see if you’re busy.  And, if you’re a really important person, you can set up a waiting room and let them in when you’re ready.

There is even a 21st century version of the 19th century partner desk, although it’s not new.  An example is the always-on Kubi, which has been around for a few years.

Perch, another startup, summarized the idea in this video a few years back.  Foursquare started using a video portal connecting their engineering teams on the two coasts eight years ago.  (A few months ago, before COVID, a deal was reached to merge Foursquare with Factual.)

By the way, the physical office was no utopia of employee interaction.  A variety of studies, most famously the Allen Curve, found a very large reduction in interaction when employees were even relatively short physical distances from each other.  With video, all your co-workers are just a click away.  While your interactions with the colleague at the next desk may be less frequent (if you want), your interactions with lots of other colleagues on other floors can happen a lot more easily.

And then, despite evidence of increased productivity and employee happiness with remote work, there is the statement that it decreases innovation and collaboration.

Influential articles, like Workspaces That Move People in the October 2014 issue of the Harvard Business Review, declared that “chance encounters and interactions between knowledge workers improve performance.”

In the physical world, many companies interpreted this as a mandate for open office plans that removed doors and closed offices.  So how did that work out?

A later article – The Truth About Open Offices – in the November–December 2019 issue of the Harvard Business Review reported that “when the firms switched to open offices, face-to-face interactions fell by 70%”.  (More detail can be found in a Royal Society journal article of July 2018 on “The impact of the ‘open’ workspace on human collaboration”.)

The late Steve Jobs forcefully pushed the idea of serendipity through casual, random encounters of employees.  That idea was one of the design principles of the new Apple headquarters.  Now with COVID-driven remote work, some writers, like Tiernan Ray in ZDNET on June 24, 2020, are asking “Steve Jobs said Silicon Valley needs serendipity, but is it even possible in a Zoom world?”.

There is nothing inherently in video conferencing that diminishes serendipitous meetings.  Indeed, in the non-business world, there are websites that exist solely to connect strangers together completely at random, like Chatroulette and Omegle.

Without going into the problems those sites have had with inappropriate behavior, the same idea could be used in a different way to periodically connect via video conferencing two employees who otherwise haven’t met recently or at all.  Nor does that have to be completely random.  A company doing this could also use some analytics to determine which employees might be interested in talking with other employees that they haven’t connected with recently.  That would ensure serendipity globally, not just limited to the people who work in the same building.
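As a sketch of how simple such serendipity-by-design could be, here is one naive way to pair up employees at random while skipping anyone who has met recently. The names, the meeting history, and the pairing rule are all hypothetical; a real system would draw on calendar or chat metadata and smarter matching.

```python
# A naive sketch of engineered serendipity: shuffle the staff, then pair
# people who haven't met recently. All names and history are hypothetical.
import random

employees = ["Ana", "Bo", "Chen", "Dia", "Eli", "Fay"]
recently_met = {("Ana", "Bo"), ("Chen", "Dia")}  # pairs to skip this round

def pair_up(people, avoid):
    people = people[:]          # work on a copy
    random.shuffle(people)
    pairs = []
    while len(people) >= 2:
        a = people.pop()
        # find a partner this person hasn't met recently
        partner = next((p for p in people
                        if (a, p) not in avoid and (p, a) not in avoid), None)
        if partner is None:
            continue            # no fresh partner left; a sits out this round
        people.remove(partner)
        pairs.append((a, partner))
    return pairs

print(pair_up(employees, recently_met))  # e.g. [('Fay', 'Eli'), ('Bo', 'Chen')]
```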

It’s not that video conferencing is perfect, but there is still an underappreciation of how many virtual equivalents there are of typical office activities – and even less appreciation for some of the benefits of virtual connections compared to physical offices.

To me, the issue is one of a lag that I’ve seen before with technology.  I’ve called this horseless carriage thinking.  Sociologists call it a cultural lag.  As Ashley Crossman has written, this is

“what happens in a social system when the ideals that regulate life do not keep pace with other changes which are often — but not always — technological.”

Some people don’t yet realize, and aren’t quite comfortable with, what they can do.  For most, time and experience will educate them.

© 2020 Norman Jacknis, All Rights Reserved

A Budget That Copes With Reality

Five years ago, I wrote about the possibility of dynamic budgeting.  I was reminded of this again recently after reading Stephanie Kelton’s eye-opening new book, “The Deficit Myth”.

Her argument is that, since the U.S. dropped the gold standard and fixed exchange rates, it can create as much money as it wants.  The limit is not an illusory national debt number, but inflation.  And in an economy with less than full employment, inflation is not now an issue.  Her explanation of the capacity of the Federal government to spend leads to her suggestions for a more flexible approach to dealing with major economic and social issues.

Although Dr. Kelton was formerly the staff director for the Democrats on the Senate Budget Committee, she doesn’t devote many words to the tools used in budgeting.  However, the argument that she makes reminds me again that the traditional budget itself has to change, especially by shifting to a dynamic budget.

While states and localities are not in the same position as the Federal government, they also face unpredictable conditions and could benefit from a more flexible, dynamic budget.  Of course, in the face of COVID and economic retraction the necessity of re-allocating funds has become more obvious.

In an earlier blog, I wrote about a simple tax app that is now feasible and that also eliminates the bumps in incentives caused by our current, old-fashioned tax bracket scheme.   This was not using some untested, cutting-edge technology.  Instead, the solution could use phones, tablets and laptops doing simple calculations that these devices have done for decades.
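To illustrate just how simple the calculation is, here is a hedged sketch of the idea: replace stepped brackets with one smooth formula, so the rate rises continuously with income and there are no bracket “bumps”. The particular curve, rates, and dollar figures below are invented for illustration and are not a tax proposal.

```python
# A sketch of a bracket-free tax: the effective rate glides smoothly from a
# bottom rate toward a top rate as income rises -- no bracket "bumps".
# All rates and dollar amounts are invented for illustration only.
import math

def smooth_tax(income, bottom=0.10, top=0.40, midpoint=75_000, spread=40_000):
    """Effective rate follows a logistic curve from `bottom` toward `top`."""
    rate = bottom + (top - bottom) / (1 + math.exp(-(income - midpoint) / spread))
    return income * rate

for income in (20_000, 75_000, 250_000):
    print(f"${income:>7,} -> tax ${smooth_tax(income):>10,.0f}")
```

Any phone made in the last decade can evaluate a formula like that instantly, which is the point.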

Similarly, what is now well-established technology could be used to overcome the problems with traditional fixed budgeting.  (By the way, the same applies to the budgets that corporations devise.)

So, what are the problems that everyone knows exist with budgets?

  1. They’re wrong the day they are approved, since they try to predict precisely a future that cannot be known ahead of time. This error is made worse by the early deadlines in the typical budget process.  If you run a department, you are likely to be asked by the budget office to prepare estimates for what you’ll need in a period that will go as far as 18 or even 24 months into the future.
  2. It’s not clear how the estimates are derived. Typically, there are no underlying rules or models, just the addition of personnel and other basic costs that are adjusted from the last year.  This is despite the fact that some things are fairly well known.  For example, it is fairly straightforward to estimate the cost of paying unemployment to an average individual.  What is harder is to figure out how many unemployed people there will be – and, of course, you need to know the total number of unemployed and the average cost in order to compute the total amount of money needed.
  3. Given these problems, in practice during any given budget year, all kinds of exceptions and deviations occur in the face of reality. But the rest of the budget is not readjusted, although the budget staff will often hold back money that was approved as it robs Peter to pay Paul.  The process often seems, and often is, very arbitrary.

Operating in the real world, of course, requires continual adjustments.  Such adjustments can best be accommodated if the traditional fixed budget was replaced by a dynamic budget at the start of the budget process.

One way of doing this is familiar to almost every reader of this blog – the spreadsheet.  The cells in spreadsheets don’t always have hard fixed numbers, like fixed budgets.  Instead many of those spreadsheets have formulas.

And Congress could set not so much the individual amounts for each agency or program as their relative priorities under different scenarios.  Thus, in a recession there would be a need for more unemployment insurance funding, but that would recede in the face of other priorities when the economy is booming.

To go back to the unemployment example, the actual amount needed in the budget would change as we get closer to the month being estimated, when the estimates of the number of people who will be unemployed can become more accurate.
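Here is a minimal sketch of what one such budget “cell” might look like as a formula rather than a fixed number. The benefit amount and the forecasts are hypothetical:

```python
# A sketch of one dynamic budget line item: a formula that is re-evaluated
# as forecasts improve, instead of a fixed appropriation. All figures are
# hypothetical.
AVG_MONTHLY_BENEFIT = 1_800  # assumed average unemployment payment per person

def unemployment_line_item(forecast_unemployed: int) -> int:
    """Budget amount = forecast count x average cost, like a spreadsheet formula."""
    return forecast_unemployed * AVG_MONTHLY_BENEFIT

# The same cell, recomputed as the month approaches and estimates sharpen:
for label, forecast in [("12 months out", 90_000),
                        ("3 months out", 120_000),
                        ("actuals", 132_500)]:
    print(f"{label:>13}: ${unemployment_line_item(forecast):,}")
```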

Of course, the reader who knows my background won’t be surprised that I think the formulas in these cells could be derived by the use of some smart analytics and machine learning.  Ultimately, these methods could be enhanced with simulations – after all, what is a budget but an attempt to simulate a future period of financial needs?

More on that in another post sometime in the future.

© 2020 Norman Jacknis, All Rights Reserved

Words Matter In Building Intelligent Communities

The Intelligent Community Forum (ICF) is an international group of city, town and regional leaders as well as scholars and other experts who are focused on quality of life for residents and intelligently responding to the challenges and opportunities provided by a world and an economy that is increasingly based on broadband and technology.

To quote from their website: “The Intelligent Community Forum is a global network of cities and regions with a think tank at its center.  Its mission is to help communities in the digital age find a new path to economic development and community growth – one that creates inclusive prosperity, tackles social challenges, and enriches quality of life.”

Since 1999, ICF has held an annual contest and announced an award to intelligent communities that go through an extensive investigation and comparison to see how well they are achieving these goals.  Of hundreds of applications, some are selected for an initial, more in-depth assessment and become semi-finalists in a group called the Smart21.

Then the Smart21 are culled to a smaller list of the Top7 most intelligent communities in the world each year.  There are rigorous quantitative evaluations conducted by an outside consultancy, field trips, a review by an independent panel of leading experts/academic researchers and a vote by a larger group of experts.

An especially important part of the selection of the Top7 from the Smart21 is an independent panel’s assessment of the projects and initiatives that justify a community’s claim to being intelligent.

It may not always be clear to communities what separates these seven most intelligent communities from the rest.  After all, these descriptions are just words.  We understand that words matter in political campaigns.  But words matter outside of politics in initiatives, big and small, that are part of governing.

Could the words that leaders use be part of what separates successful intelligent initiatives from those of others who are less successful in building intelligent communities?

In an attempt to answer that question, I obtained and analyzed the applications submitted over the last ten years.  Then, using the methods of analytics and machine learning that I teach at Columbia University, I sought to determine if there was a difference in how the leaders of the Top7 described what they were doing in comparison with those who did not make the cut.

Although at a superficial level, the descriptions seem somewhat similar, it turns out that the leaders of more successful intelligent community initiatives did, indeed, describe those initiatives differently from the leaders of less successful initiatives.

The first significant difference was that the descriptions of the Top7 had more to say about their initiatives, since apparently they had more accomplishments to discuss.  Their descriptions had less talk about future plans and more about past successes.

In describing the results of their initiatives so far, they used numbers more often, providing greater evidence of those results.  Even though they were discussing technology-based or otherwise sometimes complex projects, they used more informal, less dense and less bureaucratic language.
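For the curious, surface features like these are easy to approximate. Below is a rough sketch of the kind of measurements involved: how often numbers appear, a crude density proxy (average sentence length), and future- versus past-oriented wording. The word lists and the toy sentence are mine, for illustration; they are not the actual model behind this analysis.

```python
# A rough sketch of simple text features: frequency of numbers, a crude
# density proxy, and future- vs past-oriented wording. Word lists are
# illustrative, not the actual model used in the analysis.
import re

FUTURE_MARKERS = {"will", "plan", "plans", "intend", "future"}
PAST_MARKERS = {"achieved", "built", "completed", "delivered", "launched"}

def text_features(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "numbers_per_100_words": 100 * len(re.findall(r"\d[\d,.]*", text))
                                     / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "future_markers": sum(w in FUTURE_MARKERS for w in words),
        "past_markers": sum(w in PAST_MARKERS for w in words),
    }

print(text_features("We launched fiber to 40,000 homes. We will expand it next year."))
```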

Among the topics they emphasized, engagement and leadership, along with the technology infrastructure, stood out most.  Less important, but still a differentiator, the more successful leaders also emphasized the smart city, innovation and economic growth benefits.

For those leaders who wish to know what will gain them recognition for real successes in transforming their jurisdictions into intelligent communities, the results would indicate these simple rules:

  • Have and highlight a solid technology infrastructure.
  • True success, however, comes from extensive civic engagement – and from frequently mentioning that engagement and the role of civic leadership in moving the community forward.
  • Less bureaucratic formality and more stress on results (quantitative measures of outcomes) in their public statements are also associated with greater success in these initiatives.

On the other hand, a laundry list of projects that are not tied to civic engagement and necessary technology, particularly if those projects have no real track record, is not the path to outstanding success – even if they check off the six wide-ranging factors that the ICF expects of intelligent communities.

While words do matter, it is also true that other factors can impact the success or failure of major public initiatives.  However, these too can be added into the models of success or failure, along with the results of the textual analytics.

Overall, the results of this analysis can help public officials understand a little better how they need to think about what they are doing and then properly describe it to their citizens and others outside of their community.  This will help them to be more successful, most importantly for their communities and, if they wish, as well in the ICF awards process.

© 2020 Norman Jacknis, All Rights Reserved

Working From Home Will Change Cities

Just three years ago, the New York Times had this headline, “Why Big Cities Thrive, and Smaller Ones Are Being Left Behind” – trumpeting the victory of big cities over their smaller competitors, not to mention the suburbs and rural areas.  At the top of that heap, of course, was New York City.

Now the headlines are different.

A week ago, the always perceptive Claire Cain Miller added another perspective in an Upshot article that was headlined with the question “Is the Five-Day Office Week Over?”  Her answer, in the sub-title, was that the “pandemic has shown employees and employers alike that there’s value in working from home — at least, some of the time.”

A chart in her article summarizes part of what she wrote about.  As Miller’s story makes quite clear, it is important to realize that some of what has happened during the COVID pandemic will continue after we have finally overcome it and people are free to resume activities anywhere.  Some of the current refugees from cities will likely move back to the cities and many city residents remained there, of course.  But the point is that many of these old, returning and new urban residents will have different patterns of work and that will require cities to change.

While the focus of this was mostly on remote office work, some observers note that cities still have lots of workers who do not work in offices.  While clearly there are numerous jobs that require the laying of hands on something or someone, there are also blue-collar jobs that do not strictly require a physical presence.

I have seen factories that could be remotely controlled, even before the pandemic.  Now this option is getting even more attention.  One of the technology trade magazines recently (7/3/2020) had a story with this headline – “Remote factories: The next frontier of remote work.”  In another example, GE has been offering technology solutions to enable the employees of utility companies to remotely control facilities – see “Remote Control: Utilities and Manufacturers Turn to Automation Software To Operate From Home During Outbreak”.

So perhaps the first blush of victory of big cities, like the British occupation of New York City during the American Revolution or the invasion of France in World War II, did not indicate how the war would end.  Perhaps the war has not ended because, in an internet age where many people can work from home, home does not have to be in big cities, after all, or if it is in a big city it does not have to be in a gleaming office tower.

These trends and the potential of the internet and technology to disrupt traditional urban patterns have, of course, been clear for more than ten years.  But few mayors and other urban leaders paid attention.  After all, they were in a period in which they could just ride the wave of what seemed to be ever-increasing density and growth in cities – especially propelled by young people seeking office jobs in their cities.  This was a wonderful dream, combining the urban heft of the industrial age with cleaner occupations.

Now the possibility of a different world is hitting them in the face.  It is not merely a switch from factory to office employment, but a change from industrial era work patterns too.  Among other things that change means that people do not all have to show up in the same place at the same time.  This change requires city leaders to start thinking about all the various ways that they need to adjust their traditional thinking.

Here are just three of the ways that cities will be impacted by an increasing percentage of work being done at home:

  • Property taxes in most cities usually have higher rates on commercial property than on residential property. Indeed, commercial real estate has been the goose that has laid the golden eggs for those cities which have had flourishing downtowns.  But if the amount of square footage in commercial property decreases, the value of those properties and hence the taxes will go down.  On the other hand, most elected officials are loath to raise taxes on residential real estate, even if those residences are now generating income through commercial activities – a job at home most of the week.
  • Traffic and transit patterns used to be quite predictable. There was rush hour in the morning and afternoon when everyone was trying to get to the same densely packed core.  With fewer people coming to the office every day, that will change.  Even those who meet downtown may not be going there for the 9:00 AM start of the work day, but for a lunch meeting.  Then there is the matter of increasing and relatively small deliveries to homes, rather than large deliveries to stores in the central business district.  This too turns the traditional patterns upside down.
  • Excitement and enticement have, of course, been traditional advantages of cities. Downtown is where the action is.  Even that is changing.  Although it is still fun to go to Broadway, for example, I suspect that most people had a better view of the actors in the Disney Plus presentation of Hamilton than did those who paid a lot more money to sit many rows back, even in the orchestra section of the theater.  At some point, people will balance this out.  So, cities are going to have to be a lot more creative and find new ways, new magic, to bring people to their core.

Cities have evolved before.  In the 18th century, American cities thrived on the traffic going through their ports.  While the ports still played a role, in later centuries, cities grew dramatically and thrived on their factories and industrial might.  Then they replaced factories with offices.

A transition to an as yet unclear future version of cities can be done and will be done successfully by those city leaders who don’t deny what is happening, but instead respond with a new vision – or at least new experimentation that they can learn from.

© 2020 Norman Jacknis, All Rights Reserved

Is It 1832 Or 2020? Virtual Convention Or Something New?

In these blogs, I’ve often noted how people seem wedded to old ways of thinking, even when those old ways are dressed up in new clothes.

Despite all the technology around us, it’s amazing how little some things have changed.  Too often, today seems like it was 120 years ago when people talked and thought about “horseless carriages” rather than the new thing that was possible – the car with all the possibilities it opened.

So it was with interest that I read this recent story – “Democrats confirm plans for nearly all-virtual convention”.

“Democrats will hold an almost entirely virtual presidential nominating convention Aug. 17-20 in Milwaukee using live broadcasts and online streaming, party officials said Wednesday.”

Party conventions have been around since 1832.  They were changed a little bit when they went on radio and then later on television.  But mostly they have always been filled with lots of people hearing speeches, usually from the podium.

Following in this tradition going back to 1832, the Democratic Party is going to have a convention, but we can’t have lots of people gathered together with COVID-19.  This one will be “a virtual convention in Milwaukee”, which seems like a contradiction – something that is both virtual and happening in a physical place?  I guess it only means that Joe Biden will be in Milwaukee along with the convention officials to handle procedures.

Indeed, it’s not entirely clear what this convention will look like.  In addition to the main procedures in Milwaukee, the article indicates that “Democrats plan other events in satellite locations around the country to broadcast as part of the convention”.  I assume that will be similar.

“Kirshner knows how it’s done: He has produced every Democratic national convention since 1992.”

Hopefully this will be different from every convention since 1832 – or even 1992!

Instead of the standard speeches on the screen or even other activities that are just video of something that could occur on-stage, do something that is more up-to-date.  This will show that Biden will not only be a different kind of President than Trump, but that he also will know how to lead us into the future.

Why not do something that takes advantage of not having to be in a convention hall?

For example, how about a walk (or drive, if necessary) through the speaker’s neighborhood (masks on) explaining what the problems are and what Biden wants to do about those problems?

My suggestions are limited since creative arts are not my specialty, but I do see an opportunity to do something different.  It is a good guess that Hollywood is also eager to help defeat Trump and would offer all kinds of innovative assistance.  Make it an illustration of American collaboration at its best.

This should not be an unusual idea for the Biden organization.  Among his top advisors are Zeppa Kreager, his Chief of Staff, formerly the Director of the Creative Alliance (part of Civic Nation), and Kate Bedingfield, Deputy Campaign Manager and Communications Director, formerly Vice President at Monumental Sports and Entertainment.

Of course, the Trump campaign could take the same approach, but they do not seem interested and Trump obviously adores a large in-person audience.  So there is a real opportunity for Biden to differentiate himself.

Beyond the short-term electoral considerations, this would also make political history by setting a new pattern for political conventions.

© 2020 Norman Jacknis, All Rights Reserved

Trump And Cuomo COVID-19 Press Conferences

Like many other people who have been watching the COVID-19 press conferences held by Trump and Cuomo, I came away with a very different feeling from each.  Beyond the obvious policy and partisan differences, I felt there is something more going on.

Coincidentally, I’ve been doing some research on text analytics/natural language processing on a different topic.  So, I decided to use these same research tools on the transcripts of their press conferences from April 9 through April 16, 2020.  (Thank you to the folks at Rev.com for making available these transcripts.)

One of the best approaches is known by its initials, LIWC (Linguistic Inquiry and Word Count), and was created some time ago by James Pennebaker and colleagues to assess especially the psycho-social dimensions of texts.   It’s worth noting that this assessment is based purely on the text – their words – and doesn’t include non-verbal communications, like body language.
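LIWC itself is proprietary, but the core mechanism is straightforward dictionary counting: each word is matched against category word lists, and each category is scored as a percentage of all words. Here is a toy sketch with an invented, tiny dictionary — nothing like LIWC’s actual, much larger lexicon:

```python
# A toy sketch of LIWC-style scoring: count how many words fall into each
# category and report percentages. The categories and word lists here are
# invented miniatures, not LIWC's real lexicon.
import re

CATEGORIES = {
    "positive_emotion": {"nice", "great", "good", "tremendous"},
    "health": {"health", "hospital", "virus", "patients"},
    "certainty": {"always", "never", "definitely", "clearly"},
}

def liwc_style_scores(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    # each score is the percentage of all words that fall in the category
    return {cat: 100 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

print(liwc_style_scores("The hospital staff are doing a tremendous, tremendous job."))
```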

While there were some unsurprising results to people familiar with both Trump and Cuomo, there are also some interesting nuances in the words they used.

Here are the most significant contrasts:

  • The most dramatic distinction between the two had to do with emotional tone. Trump’s words had almost twice the emotional content of Cuomo’s, including words like “nice”, although maybe the use of that word should not be taken at face value.
  • Trump also spoke of rewards/benefits and money about 50% more often than Cuomo.
  • Trump emphasized allies and friends about 20% more often than Cuomo.
  • Cuomo used words that evoked health, anxiety/pain, home and family two to three times more often than Trump.
  • Cuomo asked more than twice as many questions, although some of these could be sort of rhetorical – like “what do you think?”
  • However, Trump was 50% more tentative in his declarations than Cuomo, who in turn expressed more certainty.
  • While both men spoke in the present tense much more than the future, Cuomo’s use of the present was greater than Trump’s. On the other hand, Trump used the future and past tenses more than Cuomo did.
  • Trump used “we” a little more often than Cuomo and much more than he used “you”. Cuomo used “you” between two and three times more often than Trump.  Trump’s use of “they” even surpassed his use of “you”.

Distinctions of this kind are never crystal clear, even with sophisticated text analytics and machine learning algorithms.  The ambiguity of human speech is not just a problem for machines, but also for people communicating with each other.

But these comparisons from text analytics do provide some semantic evidence for the comments by non-partisan observers that Cuomo seems more in command.  This may be because the features of his talks would seem to better fit the movie portrayal and the average American’s idea of leadership in a crisis – calm, compassionate, focused on the task at hand.

© 2020 Norman Jacknis, All Rights Reserved

Robots Just Want To Have Fun!

There are dozens of novels about dystopian robots – our future “overlords”, as they are portrayed.

In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.

Anyway, while we are only beginning to try to understand the implications of artificial intelligence and robotics, robots are developing rapidly and going beyond those traditional tasks.

Robots are also showing off their fun and increasingly creative side.

Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.

Dancing

Last August, there was a massive robot dance in Guangzhou, China. It achieved a Guinness World Record for the “most robots dancing simultaneously”. See https://www.youtube.com/watch?v=ouZb_Yb6HPg or http://money.cnn.com/video/technology/future/2017/08/22/dancing-robots-world-record-china.cnnmoney/index.html

Not to be outdone, at the Consumer Electronics Show in Las Vegas, a strip club had a demonstration of robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc

Music

Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.

The Flow Machines project in Europe has also been using AI to create music in the style of more famous composers. See, for instance, its DeepBach, “a deep learning tool for automatic generation of chorales in Bach’s style”. https://www.youtube.com/watch?time_continue=2&v=QiBM7-5hA6o

Singing

Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia really has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s

Also, last August, there was a song composed by AI, although sung by a human – https://www.youtube.com/watch?v=XUs6CznN8pw&feature=youtu.be

There is even AI that will generate poetry – um, song lyrics.

Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, a system described in their paper “Generating Topical Poetry”, which produces poems on a requested subject, like this one called “Bipolar Disorder”:

Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.

Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.

Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.

An altered state of manic episodes,
A journey through the long and winding roads.

Not exactly upbeat, but you could well imagine this being a song too.

Finally, there is even the HRP-4C (Miim), which has been under development in Japan for years. Here’s her act –  https://www.youtube.com/watch?v=QCuh1pPMvM4#t=3m25s

All singing, all dancing, indeed!

© 2018 Norman Jacknis, All Rights Reserved

More Than A Smart City?

The huge Smart Cities New York 2018 conference started today. It is billed as:

“North America’s leading global conference to address and highlight critical solution-based issues that cities are facing as we move into the 21st century. … SCNY brings together top thought leaders and senior members of the private and public sector to discuss investments in physical and digital infrastructure, health, education, sustainability, security, mobility, workforce development, to ensure there is an increased quality of life for all citizens as we move into the Fourth Industrial Revolution.”

A few hours ago, I helped run an Intelligent Community Forum Workshop on “Future-Proofing Beyond Tech: Community-Based Solutions”. I also spoke there about “Technology That Matters”, which this post will quickly review.

As with so much of ICF’s work, the key question for this part of the workshop was: Once you’ve laid down the basic technology of broadband and your residents are connected, what are the next steps to make a difference in residents’ lives?

I have previously focused on the need for cities to encourage their residents to take advantage of the global opportunities in business, education, health, etc. that become possible when you are connected to the whole world.

Instead, in this session, I discussed six steps that are more local.

1. Apps For Urban Life

This is the simplest first step and many cities have encouraged local or not-so-local entrepreneurs to create apps for their residents.

But many cities that are not as large as New York are still waiting for those apps. I gave the example of Buenos Aires as a city that didn’t wait and built more than a dozen of its own apps.

I also reminded attendees that there are many potentially useful apps for residents that cannot generate enough profit to interest the private sector, so governments will have to create these apps on their own.

2. Community Generation Of Urban Data

While some cities have posted their open data, there is much data about urban life that residents themselves can collect. The most popular example is the community generation of environmental data, with products like the Egg, the Smart Citizen Kit for Urban Sensing, the Sensor Umbrella and even more sophisticated tools like Placemeter.

But the data doesn’t just have to be about the physical environment. The US National Archives has been quite successful in getting citizen volunteers to generate data – and meta-data – about the documents in its custody.

The attitude which urban leaders need is best summarized by Professor Michael Batty of the University College London:

“Thinking of cities not as smart but as a key information processor is a good analogy and worth exploiting a lot, thus reflecting the great transition we are living through from a world built around energy to one built around information.”

3. The Community Helps Make Sense Of The Data

Once the data has been collected, someone needs to help make sense of it. This effort too can draw upon the diverse skills in the city. Platforms like Zooniverse, with more than a million volunteers, are good examples of what is called citizen science. For the last few years, there has been OpenData Day around the world, in which cities make available their data for analysis and use by techies. But I would go further and describe this effort as “popular analytics” – the virtual collaboration of both government specialists and residents to better understand the problems and patterns of their city.

4. Co-Creating Policy

Once the problems and opportunities are better understood, it is time to create urban policies in response.  With the foundation of good connectivity, it becomes possible for citizens to conveniently participate in the co-creation of policy. I highlighted examples from the citizen consultations in Lambeth, England to those in Taiwan, as well as the even more ambitious CrowdLaw project that is housed not far from the Smart Cities conference location.

5. Co-Production Of Services

Next comes the execution of policy. As I’ve written before, public services do not always have to be delivered by paid civil servants (or even better-paid companies with government contracts). The residents of a city can be co-producers of services, as exemplified in Scotland and New Zealand.

6. Co-Creation Of The City Itself

Obviously, the people who build buildings or even tend to gardens in cities have always had a role in defining the physical nature of a city. What’s different in a city that has good connectivity is the explosion of possible ways that people can modify and enhance that traditional physical environment. Beyond even augmented reality, new spaces that blend the physical and digital can be created anywhere – on sidewalks, walls, even in water spray. And the residents can interact and modify these spaces. In that way, the residents are constantly co-creating and recreating the urban environment.

The hope of ICF is that the attendees at Smart Cities New York start moving beyond the base notion of a smart city to the more impactful idea of an intelligent city that uses all the new technologies to enhance the quality of life and engagement of its residents.

© 2018 Norman Jacknis, All Rights Reserved

When Strategic Thinking Needs A Refresh

This year I created a new, week-long, all-day course at Columbia University on Strategy and Analytics. The course focuses on how to think about strategy both for the organization as a whole as well as the analytics team. It also shows the ways that analytics can help determine the best strategy and assess how well that strategy is succeeding.

In designing the course, it was apparent that much of the established literature in strategy is based on ideas developed decades ago. Michael Porter, for example, is still the source of much thinking and teaching about strategy and competition.

Perhaps a dollop of Christensen’s disruptive innovation might be added into the mix, although that idea is no longer new. Worse, the concept has become so diluted in popular usage that too often every change is mistakenly treated as disruptive.

Even the somewhat alternative perspective described in the book “Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant” is now more than ten years old.

Of the well-established business “gurus”, perhaps only Gary Hamel has adjusted his perspective in this century – see, for example, this presentation.

But the world has changed. Certainly, the growth of huge Internet-based companies has highlighted strategies that do not necessarily come out of the older ideas.

So, who are the new strategists worthy of inclusion in a graduate course in 2018?

The students were exposed to the work of fellow faculty at Columbia University, especially Leonard Sherman’s “If You’re in a Dogfight, Become a Cat! – Strategies for Long-Term Growth” and Rita Gunther McGrath’s “The End Of Competitive Advantage: How To Keep Your Strategy Moving As Fast As Your Business”.

But in this post, the emphasis is on strategic lessons drawn from this century’s business experience with the Internet, including multi-sided platforms and digital content traps. For that there is “Matchmakers – The New Economics of Multisided Platforms” by David S. Evans and Richard Schmalensee, as well as Bharat Anand’s “The Content Trap: A Strategist’s Guide to Digital Change”.

For Porter and other earlier thinkers, the focus was mostly on the other players that they were competing against (or decided not to compete against). For Anand, the role of the customer and the network of customers becomes more central in determining strategy. For Evans and Schmalensee, getting a network of customers to succeed is not simple and requires a different kind of strategic framework than industrial competition.

Why emphasize these two books? It might seem that they focus only on digital businesses, not the traditional manufacturers, retailers and service companies that earlier strategists studied.

But many now argue that all businesses are digital, just to varying degrees. For the last few years we’ve seen the repeated headline that “every business is now a digital business” (or some minor variation) from Forbes, Accenture, and the Wharton School of the University of Pennsylvania, among others you may not have heard of. And about a year ago, we read that “Ford abruptly replaces CEO to target digital transformation”.

Consider then the case of GE, one of the USA’s great industrial giants, which offers a good illustration of the situation facing many companies. A couple of years ago, it expressed its desire to “Become a Digital Industrial Company”. Last week, Steve Lohr of the New York Times reported that “G.E. Makes a Sharp ‘Pivot’ on Digital” because of its difficulty making the transition to digital and especially making the transition a marketing success.

At least in part, the company’s lack of success could be blamed on its failure to fully embrace the intellectual shift from older strategic frameworks to the more digital 21st century strategy that thinkers like Anand, Evans and Schmalensee describe.

© 2018 Norman Jacknis, All Rights Reserved

Are Any Small Towns Flourishing?

We hear and read how the very largest cities are growing, attractive places for millennials and just about anyone who is not of retirement age. The story is that the big cities have had almost all the economic gains of the last decade or so, while the economic life has been sucked out of small towns and rural areas.

The picture in many minds today is the vibrant big city versus the dying countryside.

Yet, we are in a digital age when everyone is connected to everyone else on the globe, thanks to the Internet. Why hasn’t this theory of economic potential from the Internet been true for the countryside?

Well, it turns out that it is true. Those rural areas that do in fact have widespread access to the Internet are flourishing. These towns with broadband are exemplary, but unfortunately they are not the majority of towns.

Professor Roberto Gallardo of the Purdue Center for Regional Development has dug deep into the data about broadband and growth. The results have recently been published in an article that Robert Bell and I helped write. You can see it below.

So, that picture is half right – this is a life-or-death issue for many small towns. The hopeful note is that those with broadband, and the wisdom to use it for quality of life, will not die in this century.

© 2018 Norman Jacknis, All Rights Reserved


[This article is republished from the Daily Yonder, a non-profit media organization that specializes in rural trends, filling a vacuum in news coverage about the countryside.]

When It Comes to Broadband, Millennials Vote with Their Feet

By Roberto Gallardo — Robert Bell — Norman Jacknis

April 11, 2018

When they live in remote rural areas, millennials are more likely to reside in a county that has better digital access. The findings could indicate that the digital economy is helping decentralize the economy, not just clustering economic change in the cities that are already the largest.

Sources: USDA; Pew Research; US Census Bureau; Purdue Center for Regional Development. This graph shows that the number of Millennials and Gen Xers living in the nation’s most rural counties is on the increase in counties with a low “digital divide index”. The graph splits the population in “noncore” (or rural) counties into three generations. Then, within each generation, it looks at population change based on the Digital Divide Index. The index measures the digital divide using two sets of criteria: one looks at the availability and adoption of broadband; the other looks at socio-economic factors, such as income and education levels, that affect broadband use. Counties are split into five groups or quintiles based on the index, with group №1 (orange) having the most access and №5 (green) having the least.

Cities are the future and the countryside is doomed, as far as population growth, jobs, culture and lifestyle are concerned. Right?

Certainly, that is the mainstream view expressed by analysts at organizations such as Brookings. This type of analysis says the “clustering” of business that occurred during the industrial age will only accelerate as the digital economy takes hold. This argument says digital economies will only deepen and accelerate the competitive advantage that cities have always had in modern times.

But other pundits and researchers argue that the digital age will result in “decentralization” and a more level playing field between urban and rural. Digital technologies are insensitive to location and distance and potentially offer workers a much greater range of opportunities than ever before.

The real question is whether rural decline is inevitable or whether the digital economy has characteristics that are already starting to write a different story for rural America. We have recently completed research suggesting that it does.

Millennial Trends

While metro areas still capture the majority of new jobs and population gains, there is some anecdotal evidence pointing in a different direction. Consider a CBS article that notes how, due to high housing costs, horrible traffic, and terrible work-life balances, Bend, Oregon, is seeing an influx of teleworkers from Silicon Valley. The New York Times has reported on the sudden influx of escapees from the Valley that is transforming Reno, Nevada — for good or ill, it is not yet clear.

Likewise, a Fortune article argued that “millennials are about to leave cities in droves” and the Telegraph mentioned “there is a great exodus going on from cities” in addition to Time magazine reporting that the millennial population of certain U.S. cities has peaked.

Why millennials? Well, dubbed the first digital-native generation, their migration patterns could indicate the beginning of a digital age-related decentralization.

An Age-Based Look at Population Patterns

In search of insight, we looked at population change among the three generations that make up the entire country’s workforce: millennials, generation X, and baby boomers.

First, we defined each generation. Table 1 shows the age ranges of each generation according to the Pew Research Center, both in 2010 and 2016, as well as the age categories used to measure each generation. While not an exact match, categories are consistent across years and geographies.

In addition to looking at generations, we used the Office of Management and Budget core-based typology to control by county type (metropolitan, small city [micropolitan], and rural [noncore]). To factor in how digital access affects local economies, we used the Digital Divide Index. The DDI, developed by the Purdue Center for Regional Development, ranges from zero to 100; the higher the score, the higher the digital divide. There are two components to the Digital Divide Index: 1) broadband infrastructure/adoption and 2) socioeconomic characteristics known to affect technology adoption.

Looking at overall trends, it does look like the digital age is not having a decentralization effect. To the contrary, according to data from the economic modeling service Emsi, the U.S. added 19.4 million jobs between 2010 and 2016. Of these, 94.6 percent were located in metropolitan counties compared to only 1.6 percent in rural counties.

Population growth tells a similar story. Virtually the entire growth in U.S. population of 14.4 million between 2010 and 2016 occurred in metropolitan counties, according to the Census Bureau. The graph below (Figure 1) shows the total population change overall and by generation and county type. As expected, the number of baby boomers (far right side of the graph) is falling across all county types while millennials and generation x (middle two sets of bars) are growing only in metro counties.

But there is a different story. When looking at only rural counties (what the OMB classification system calls “noncore”) divided into five equal groups or quintiles based on their digital divide (1 = lowest divide while 5 = highest divide), the figure at the very top of this article shows that rural counties experienced an increase in millennials where the digital divide was lowest. (The millennial population grew by 2.3 percent in rural counties where the digital divide was the lowest.) Important to note is that this same pattern occurs in metropolitan and small city counties as well.
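For readers who want to see the mechanics of that quintile split, here is a small pandas sketch. The counties table is a toy, invented stand-in for the real Census/USDA/PCRD data, and the column names are mine, not the project’s.

```python
import pandas as pd

# Toy county-level data standing in for the real sources.
df = pd.DataFrame({
    "county": list("ABCDEFGHIJ"),
    "ddi": [12, 25, 38, 47, 55, 63, 71, 80, 88, 95],  # 0-100; higher = worse divide
    "millennials_2010": [900, 850, 800, 760, 700, 640, 600, 560, 500, 450],
    "millennials_2016": [950, 870, 805, 755, 690, 620, 575, 530, 470, 415],
})

# Split counties into five quintiles by DDI (1 = lowest divide, 5 = highest).
df["ddi_quintile"] = pd.qcut(df["ddi"], 5, labels=[1, 2, 3, 4, 5])

# Percent change in millennial population per quintile, 2010-2016.
totals = df.groupby("ddi_quintile", observed=True)[
    ["millennials_2010", "millennials_2016"]
].sum()
totals["pct_change"] = (
    100 * (totals["millennials_2016"] - totals["millennials_2010"])
    / totals["millennials_2010"]
)
print(totals["pct_change"])
```

In the actual data, it is the lowest-divide quintile of rural counties that shows millennial growth.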

Impact on the “Really Rural” County

“Urban” and “rural” can be tricky terms when it comes to demographics. The Census Bureau reports that 80% of the population lives in urban areas. Seventy-five percent of those “urban” areas, however, are actually small towns with populations of under 20,000. They are often geographically large, with a population density that falls off rapidly once you leave the center of town.

On the other hand, some rural counties are adjacent to metro areas and may benefit disproportionately from their location or even be considered metropolitan due to their commuting patterns. Because of this, we turned to another typology developed by the U.S. Department of Agriculture Economic Research Service that groups counties into nine types ranging from large metro areas to medium size counties adjacent to metro areas to small counties not adjacent to metro areas.

Figure 3 (below) shows counties considered completely rural or with an urban population of less than 2,500, not adjacent to a metro area. Among these counties, about 420 in total, those with the lowest digital divide experienced a 13.5 percent increase in millennials between 2010 and 2016. In other words, in the nation’s “most rural” counties, the millennial population increased significantly when those counties had better broadband access.

Sources: USDA; Pew Research; US Census Bureau; Purdue Center for Regional Development. This graph shows population change by generation and “DDI” quintile in the nation’s most rural counties (rural counties that are farthest from metropolitan areas). In rural counties with the best digital access (a low digital divide index), the number of Millennials and Gen Xers increased.

The New Connected Countryside: A Work in Progress

To conclude: if you just look at overall numbers, our population seems to be behaving just as it did in the industrial age – moving to cities where jobs and people are concentrated. Rural areas that lag in broadband connectivity and digital literacy will continue to suffer from these old trends.

However, the digital age is young, and its full effects are still to be felt. Remember that it took several decades for electricity or the automobile to revolutionize society. Moreover, many areas outside metro regions still lag in broadband connectivity and digital literacy, which limits their ability to leverage the technology to improve their quality of life and potentially reverse migration trends.

Whether or not decentralization will take place remains to be seen. What is clear, though, is that (while other factors have an impact as well) any community attempting to retain or attract millennials needs to address its digital divide, both in terms of broadband access and adoption/use.

In other words, our data analysis suggests that if a rural area has widely available and adopted broadband, it can start to successfully attract or retain millennials.

Roberto Gallardo is assistant director of the Purdue Center for Regional Development and a senior fellow at the Center for Rural Strategies, which publishes the Daily Yonder. Robert Bell is co-founder of the Intelligent Community Forum. Norman Jacknis is a senior fellow at the Intelligent Community Forum and on the faculty of Columbia University.

Too Many Unhelpful Search Results

This is a brief follow-up to my last post about how librarians and artificial intelligence experts can get us all beyond mere curation and our frustrations using web search.

In their day-to-day Google searches many people end up frustrated. But they assume that the problem is their own lack of expertise in framing the search request.

In these days of advancing natural language algorithms that isn’t a very good explanation for users or a good excuse for Google.

We all have our own favorite examples, but here’s mine because it directly speaks to lost opportunities to use the Internet as a tool of economic development.

Imagine an Internet marketing expert who has an appointment with a local chemical engineering firm to make a pitch for her services and help them grow their business. Wanting to be prepared, she goes to Google with a simple search request: “marketing for chemical engineering firms”. Pretty simple, right?

Here’s what she’ll get:

She’s unlikely to live long enough to read all 43,100,000+ hits, never mind reading them before her meeting. And, aside from an ad on the right from a possible competitor, there’s not much in the list of non-advertising links that will help her understand the marketing issues facing a potential client.

This is not how the sum of all human knowledge – i.e., the Internet – is supposed to work. But it’s all too common.

This is the reason why, in a knowledge economy, I place such a great emphasis on deep organization, accessibility and relevance of information.

© 2017 Norman Jacknis, All Rights Reserved

What Comes After Curation?

[Note: I’m President of the board of the Metropolitan New York Library Council, but this post is only my own view.]

A few weeks ago, I wrote about the second chance given to libraries, as Google’s role in the life of web users slowly diminishes. Of course, for at least a few years, one of the responses of librarians to the growth of the digital world has been to re-envision libraries as curators of knowledge, instead of mere collectors of documents. It’s not a bad start in a transition.

Indeed, this idea has also been picked up by all kinds of online sites, not just libraries. Everyone it seems wants to aggregate just the right mix of articles from other sources that might interest you.

But, from my perspective, curation is an inadequate solution to the bigger problem this digital knowledge century has created – we don’t have time to read everything. Filtering out the many things I might not want to read at all doesn’t help me much. I still end up having too much to read.

And we end up in the situation summed up succinctly by the acronym TL;DR, too long, didn’t read. (Or my version in response to getting millions of Google hits – TMI, TLK “too much information, too little knowledge”.)

The AQAINT project (2010) of the US government’s National Institute of Standards and Technology (NIST) stated this problem very well:

“How do we find topically relevant, semantically related, timely information in massive amounts of data in diverse languages, formats, and genres? Given the incredible amounts of information available today, merely reducing the size of the haystack is not enough; information professionals … require timely, focused answers to complex questions.”

Like NIST, what I really want – maybe what you want or need too? – is someone to summarize everything out there and create a new body of work that tells me just what I need to know in as few words as possible.

Researchers call this abstractive summarization and this is not an easy problem to solve. But there has been some interesting work going on in various universities and research labs.
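To give a flavor of what this looks like in code today, here is a minimal sketch using an off-the-shelf neural summarizer from the open-source Hugging Face transformers library. The library and model choice are assumptions of this illustration, not tools used by the projects discussed here.

```python
# pip install transformers torch
from transformers import pipeline

# BART fine-tuned on CNN/DailyMail is a commonly used abstractive
# summarization model; the choice here is illustrative.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Given the incredible amounts of information available today, merely "
    "reducing the size of the haystack is not enough; information "
    "professionals require timely, focused answers to complex questions "
    "drawn from massive amounts of data in diverse formats and genres."
)

result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```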

At Columbia University, Professor Kathleen McKeown and her research colleagues developed “NewsBlaster” several years ago to organize and summarize the day’s news.

Among other companies, Automated Insights has developed some practical solutions to the overall problem. Their Wordsmith software has been used, for example, by the Associated Press “to transform raw earnings data into thousands of publishable stories, covering hundreds more quarterly earnings stories than previous manual efforts”.

For all their clients, they claim to produce “over 1.5 billion narratives annually”. And these are so well done that the New York Times had an article about it that was titled “If An Algorithm Wrote This, How Would You Even Know?”.
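At its simplest, this kind of data-to-text generation can be illustrated with a toy, template-based sketch – far cruder than Wordsmith, whose internals are not public, and with invented field names:

```python
# A toy data-to-text generator in the spirit of automated earnings
# stories; real systems handle many more cases and variations.
def earnings_story(company: str, quarter: str, eps: float,
                   eps_expected: float, revenue_m: float) -> str:
    verb = "beat" if eps > eps_expected else "missed"
    delta = abs(eps - eps_expected)
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {verb} analyst expectations by ${delta:.2f}. "
        f"Revenue came in at ${revenue_m:.1f} million."
    )

print(earnings_story("Acme Corp", "Q3", eps=1.42, eps_expected=1.35,
                     revenue_m=812.4))
```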

The next step, of course, is to combine many different data sources and generate articles about them for each person interested in that combination of sources.

Just a few months ago, Salesforce’s research team announced a major advance in summarization. Their motivation, by the way, is the same as mine:

“In 2017, the average person is expected to spend 12 hours and 7 minutes every day consuming some form of media and that number will only go up from here… Today the struggle is not getting access to information, it is keeping up with the influx of news, social media, email, and texts. But what if you could get accurate and succinct summaries of the key points…?”

Maluuba, acquired by Microsoft, has been continuing earlier research too. As they describe their research on “information-seeking behaviour”:

“The research at Maluuba is tackling major milestones to create AI agents that can efficiently and autonomously understand the world, look for information, and communicate their findings to humans.”

Librarians have skills that can contribute to the development of this branch of artificial intelligence. While those skills are necessary, they aren’t sufficient and a joint effort between AI researchers and the library world is required.

However, if librarians joined in this adventure, they could also offer the means of delivering this focused knowledge to the public in a more useful way than just dumping it into the Internet.

As I blogged a few months ago:

Librarians have many skills to add to the task of “organizing the world’s information, and making it universally accessible”. But as non-profit organizations interested in the public good, libraries can also ensure that the next generation of knowledge tools – surpassing Google search – is developed for non-commercial purposes.

So, what comes after everyone has tried curation? Abstractive summarization aided by artificial intelligence software, that’s what!

© 2017 Norman Jacknis, All Rights Reserved

What Can You Learn From Virtual Mirrors?

A virtual mirror allows someone to use a camera and have that image displayed on a large LED screen. Better yet, with the right software, it can change the image. With that ability, virtual mirrors have been used to see what new glasses look like or to try on dresses – a virtual, flexible fitting room.


Virtual mirrors, and their equivalents as smartphone apps, have been around for the last couple of years, with examples from all over the world.


Marketers have already thought of extending this to social media, as one newspaper reported with a story titled “Every woman’s new best friend? Hyper-realistic new virtual mirror lets you try on clothes at the flick of the wrist and instantly share the images online”.

This all provides a nice experience for customers and may even help sell a particular item to them. But that’s only the beginning.

Virtual mirrors are a tremendous source of data about consumer behavior. Consider that the system can record every item the consumer looked at and then what she or he bought. Add to that the information about the person that can be detected – hair color, height, etc. With the application of the right analytics, a company can develop insights about how and why some products are successful – for example, a particular kind of dress may be what short or tall women are really looking for.

With eye tracking devices, such as those from Tobii, connected to the virtual mirror, even more data can be collected on exactly what the consumer is looking at – for example, the last part of a dress that she looked at before deciding to buy or not to buy.
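As a sketch of what such instrumentation might record, here is a hypothetical event log for virtual-mirror sessions, with a simple “tried on but not bought” tally. The schema and field names are invented for illustration; a real deployment would add timestamps, eye-tracking fixations, and much more.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class MirrorEvent:
    session_id: str
    item_id: str
    action: str              # "tried_on" or "purchased"
    detected_height_cm: int  # an example of a detected attribute

events = [
    MirrorEvent("s1", "dress-001", "tried_on", 162),
    MirrorEvent("s1", "dress-002", "tried_on", 162),
    MirrorEvent("s1", "dress-002", "purchased", 162),
    MirrorEvent("s2", "dress-001", "tried_on", 178),
]

tried = Counter(e.item_id for e in events if e.action == "tried_on")
bought = Counter(e.item_id for e in events if e.action == "purchased")

# Items that attract try-ons but rarely convert may signal a fit or
# styling problem worth a closer analytic look.
for item, n in tried.items():
    print(f"{item}: tried on {n}x, purchased {bought[item]}x")
```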

Going beyond that, an analysis can be done of facial (and body) expressions. I’ve written before about affective computing, the technology being developed to make this kind of measurement and respond to it.

[For some additional background on affective computing, see Wikipedia and MIT Media Lab’s website.]

By fully gathering all the data surrounding a consumer’s use of the virtual mirror, its value becomes much more than merely improving the immediate customer experience. In a world of what many consider big data, this adds much more data for the analytics experts on the marketing and product teams to investigate.

Alas, I haven’t yet seen widespread adoption and integration of these technologies. But the first retailer to move forward this way will have a great competitive advantage. This is especially true for brick-and-mortar retailers, who can observe and measure a wider range of consumer behavior than their purely e-commerce competitors can.

© 2017 Norman Jacknis, All Rights Reserved

Libraries And The Story Of Apple

[Note: I’m President of the board of the Metropolitan New York Library Council, but this post is only my own view.]

For some time now, the library world and its supporters have worried about the rise of the Google search engine. Over the last ten years, a steady stream of articles has expressed this concern and, of course, pushed back against the Google tide.

And there was also John Palfrey’s 2015 book, “BiblioTech: Why Libraries Matter More Than Ever in the Age of Google”, which shares some themes of this post.

This concern has had such a profound effect that many libraries have effectively curtailed their reference librarian services as people instead “Google it”.

No doubt Google is formidable. While there have been ups and downs (like 2015) in Google’s share of the search engine market, it is obviously very high. Some estimates put it at 80% or higher.

But the world is changing and perhaps librarians aren’t aware of a nascent opportunity.

In an article about a month ago, the data scientist Vincent Granville took a closer look at the data about the ways people search and get information. He found “The Slow Decline of Google Search”. Here are some of the highlights:

“Google’s influence (as a search engine) is declining. Not that their traffic share or revenue is shrinking, to the contrary, both are probably increasing.”

“The decline (and weakening of monopoly) is taking place in a subtle way. In short, Google is no longer the first source of information, for people to find an article, a document, or anything on the Internet.”

“What has happened over the last few years is that many websites are now getting most of their traffic from sources other than Google.”

“Google has lost its monopoly when it comes to finding interesting information on the Internet.”

“Interestingly, this creates an opportunity for entrepreneurs willing to develop a search engine.”

As the New York Times reported recently about the announcement of the new Pixel phone, Google has noticed all this and is strategically re-positioning itself as an artificial intelligence company.

What has this got to do with the Apple story?

Apple is now the most valuable company in the world. That wasn’t always so. Indeed, it was almost headed for oblivion. Even now, its earlier business of selling personal computers hasn’t grown that much; instead, Apple added to its mix of products and services in a compelling way. It is one of the great turnaround stories in business history.

That history offers a lesson for librarians. The battle against what Google originally offered has been a tough one and libraries have suffered in the eyes of many people, especially the public officials and other leaders who provide their funding.

But looking forward, libraries should consider the opportunities arising from the facts that Google’s impact on Internet users is lessening, that the shine of Google’s “don’t be evil” slogan has worn off in the face of greater public skepticism, and that artificial intelligence – really augmented human intelligence – is now a viable, disruptive technology.

As many once-great and now defunct companies – ones without an Apple-style turnaround – show, there aren’t many second chances. Libraries should take advantage of their second chance to play the role that they should play in a knowledge and innovation economy.

© 2017 Norman Jacknis, All Rights Reserved