Eating Our Own Cooking

What do a doctor suggesting a diet to a patient, an IT person asking a colleague to start using a new application, a mom asking a child to eat something, or an analytics expert asking a business person to trust statistical insights all have in common? They are all trying to get other people to do things.

But too often, as the old saying goes, this suggestion comes across as “do as I say, not as I do.” We’ve heard other variations on this theme, such as “doctor, heal thyself” or my favorite, “eat your own cooking”.

When I was a CIO, I used to tease some of the staff by pointing out that they were all too willing to tell everyone else to use some new technology to do their jobs, but when it came to new technology in the IT world itself, the IT staff was the most resistant to change.

Among the most important changes that need to happen in many organizations today are those based on new insights about the business from analytics. But in big organizations, it is difficult to know how well those necessary changes are being adopted.

Analytics can help with this problem too. Analytics tools can help figure out what message about a change is getting across, how well the change is being adopted, and in which offices, regions, and kinds of people.

Yet it is rare for analytics folks to use their own tools to help guide their success in getting analytics adopted. Here, though, are two examples: the first about individuals and the second about organizations as a whole.

Individual Willingness To Change

A couple of years ago, the Netherlands branch of Deloitte created a Change Adoption Profiler (CAP) model of their clients’ employees based on their willingness to adopt change. As they describe it:

“Imagine being able to predict who will adopt change, and how they will adopt it before the change has even occurred. At Deloitte, we have developed a data driven decision making method called the Change Adoption Profiler – it provides insights into your company’s attitude toward change and allows you to address it head on.

“The CAP uses a diagnostic survey based on personal characteristic and change attitudes. Unlike traditional questionnaires CAP combines this with behavioral data to understand the profiles that exist within the organization. The CAP provides reliable, fact-based analytics – provides client insights that support smart decision making, reveals risks and signals how to approach change at an early stage.”

There is a nice little video that summarizes its work at https://www.youtube.com/watch?v=l12MQFCLoOs

Sadly, so far as I can tell from the public web, no other office of Deloitte is using this model in its work.

Organizational Analysis

Network analysis, especially social network analysis, is not a new focus for those in analytics. But, again, they don’t normally use it to understand how well changes are spreading through an organization or business ecosystem.

One of the exceptions is the Danish change management consulting firm, Innovisor. They put particular emphasis on understanding the real organization – how and why people interact with each other – instead of relying solely or mostly on the official organization chart.

This little video explains their perspective – https://www.youtube.com/watch?v=ncXcvuSwXFM

In a blog post, Henry Ward, CEO of eShares, writes at some length about his company’s use of this kind of network analysis to determine who the real influencers in the organization were. They ended up identifying 9 employees, not necessarily executives, who influenced 70% of all employees directly and 100% through a second connection. A detailed presentation can be found at https://esharesinc.box.com/shared/static/8rrdq4diy3kkbsyxq730ry8sdhfnep60.pdf
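
For readers who want to see the mechanics, here is a minimal sketch of how such a reach calculation might be done, assuming Python with the networkx library. The edge list, the employee names, and the use of out-degree to pick candidate influencers are all illustrative assumptions; this is not a reconstruction of eShares’ actual method.

```python
# Illustrative sketch (not eShares' actual method): estimate how much of an
# organization a small set of influencers reaches directly and within two hops.
import networkx as nx

# Hypothetical "who influences whom" edges: each pair (a, b) means a influences b,
# e.g. taken from a survey question such as "Whose opinion do you seek out?"
edges = [
    ("ana", "ben"), ("ana", "cal"), ("ana", "dee"),
    ("ben", "eve"), ("cal", "fay"), ("dee", "gil"),
    ("eve", "hal"), ("fay", "hal"), ("gil", "ivy"),
]
G = nx.DiGraph(edges)

# Pick candidate influencers by out-degree, i.e. the people most others listen to.
k = 2
influencers = [n for n, _ in sorted(G.out_degree, key=lambda nd: nd[1], reverse=True)[:k]]

def reach(graph, seeds, hops):
    """Return the set of people reachable from the seeds within `hops` steps."""
    reached = set()
    for s in seeds:
        reached.update(nx.single_source_shortest_path_length(graph, s, cutoff=hops))
    return reached - set(seeds)

others = set(G) - set(influencers)
for hops in (1, 2):
    covered = reach(G, influencers, hops)
    print(f"{hops}-hop reach of {influencers}: {len(covered) / len(others):.0%} of the others")
```

On this toy graph the percentages are meaningless, of course; the point is only that the one-hop and two-hop reach of a handful of well-connected people can be computed directly once you have the “who influences whom” data.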

Given the value of these kinds of examples, the problem of not eating your own cooking is especially interesting in the analytics field. Perhaps the field is new enough that many of its advocates still have the zeal of being part of an early, growing religion and can’t see why others might resist their insights. But they would be more convincing if they could show how, with analytics, they do their own jobs better – the job of getting their organizations to achieve the potential of their insights.

Books That Link Analytics, Big Data And Leading Change

Last week, at the end of my class in Analytics and Leading Change, one of the required courses in Columbia University’s Master’s Program in Applied Analytics, my students asked for books I’d recommend that provide more detail than we could cover in the course. It turns out that others are also interested in a good library of books about analytics from the viewpoint of an organization’s leaders.

You’ll see that these are not textbooks about analytics or machine learning techniques – there are plenty of those. Instead, this reading list is the next step for those folks who understand the techniques and now want the insights from their work to have an impact on and provide value to their world.

Although most of these books were published in the last decade, there are also some classics on the list going back fifty years. And I’ve chosen mostly popular books because frankly they are written in a compelling way that is accessible to all leaders.

With that introduction, here are my recommendations.

1. On the experience of doing analytics and seeing its impact:

Moneyball by Michael Lewis

The movie, Moneyball, starred Brad Pitt as the hero of the first and most storied use of analytics in professional baseball. For people in the field of analytics, what could be better than a movie about your skills helping the underdog? But like all movies, it tended to gloss over or exaggerate situations for the benefit of a good, simple plot.

The book that Lewis wrote originally is subtler and is a good case study of the human side of introducing analytics in a tradition-bound field. Tying it all up, his more recent book, The Undoing Project: A Friendship that Changed Our Minds, is the story of the collaboration between Kahneman (see below) and Tversky.

The Signal and The Noise: Why So Many Predictions Fail — But Some Don’t by Nate Silver

Nate Silver is probably the analytics practitioner best known to those outside the business, due to his work over the years, especially for the New York Times and in relation to high-visibility elections. This is his review of the ups and downs of using analytics, offering lessons especially from sports and politics.

The Victory Lab: The Secret Science of Winning Campaigns by Sasha Issenberg

Although sometimes a bit over the top and now five years old, it is a thorough description of the use of analytics in election campaigns. Election campaigns are good examples for analytics because they are well known and because there is a huge amount of data about elections and the voters who determine their outcomes.

Dataclysm: Love, Sex, Race, and Identity — What Our Online Lives Tell Us about Our Offline Selves by Christian Rudder

The author is the co-founder and former analytics lead for OkCupid. Not surprisingly, much of the book is about dating choices, but he goes way beyond that to uncover insights about various social attitudes, including racism, from the large amount of data he had in his hands both at his former company and elsewhere.

How Not To Be Wrong: The Power Of Mathematical Thinking by Jordan Ellenberg

Since analytics is essentially a mathematical art, Ellenberg’s book about mathematical thinking is important preparation for the field. It also provides numerous examples of how to present quantitative insights in a way that non-experts would understand.

2. On expanding the normal range of analytics:

Unobtrusive Measures: Nonreactive Research in the Social Sciences by Eugene Webb, et al

I’ve added this fifty-year-old classic to the list because even in a world of big data we don’t necessarily have all the data we need, either in our computer systems or in the physical world. This book reminds us to observe indications of phenomena that are not already available – such as the influence of an individual measured by the wear and tear on the entry to his/her office space. It also points out the need to always include metadata in our analysis, since that is often revealing.

How to Measure Anything: Finding the Value of Intangibles in Business by Douglas Hubbard

Somewhat picking up the same theme, this book helps both the business executive and the analytics practitioner to be more creative in measurement, especially of things for which no one has so far been able to offer good metrics.

Connected: The Surprising Power of Social Networks and How They Shape Our Lives by Nicholas A. Christakis and James H. Fowler

This is a book about how social networks influence us in ways we hadn’t considered before. As they say: “How your friends’ friends’ friends affect everything you think, feel and do.” I suppose a good example is how their observation that you’ll gain weight by being connected to overweight people in a social network has itself become a meme. In its own way, this book is an interesting work of analytics.

Just as important is its elaboration of how to study social networks since an understanding of the network of influencers in any organization is essential to anyone who wants to change the behavior of the people in that organization.

Storytelling with Data: A Data Visualization Guide for Business Professionals by Cole Nussbaumer Knaflic

The author was part of Google’s analytics team, which is the analytics equivalent of working at the Vatican if you’re a Roman Catholic theologian. Her emphasis is on how to show the insights of analytics work and to tell a story about those insights. In a world of all kinds of data visualization tools and fads, her advice is clear and evidence-based.

3. On the way that the human mind perceives the insights of analytics and might or might not change as a result:

Payoff: The Hidden Logic That Shapes Our Motivations by Dan Ariely

Professor Ariely, formerly of MIT and now at Duke, is one of the more creative experimenters in psychology and he quickly reviews both his own and others’ research results. The theme of this short book is that the payoff which often makes a difference in human behavior is not necessarily a financial reward and that sometimes financial incentives even backfire. This is important for leaders of change in organizations, particularly big corporations, to understand.

Thinking, Fast And Slow by Daniel Kahneman

I’ve written about the work of Nobel Prize winner and Princeton Professor Kahneman before, most recently in “What Do We Know About Change?”. This book describes what Kahneman has learned from a lifetime of research about thinking and decision making. His work on how people process – and distort – quantitative statements is especially relevant to analytics experts, who need to understand the cognitive biases he describes.

Switch: How to Change Things When Change Is Hard by Chip Heath and Dan Heath

The Heath brothers, popular business writers, have done a good job in this book of explaining what’s been learned in recent psychological research – see Kahneman and Ariely, for instance – without dumbing it down so much that the key points are lost. In doing that well, they also give leaders of change and analytics some good ideas on how to present their own results and get their organizations to switch to a more analytics-oriented outlook.

4. On the strategic linkage between leading change and analytics:

The Dance of Change by Peter Senge, et al

This is another classic that goes beyond the usual cookbook approach found in most books on “change management”. Indeed, Senge and his colleagues anticipated the more recent approaches to change management, which are about something more than just getting a single project done. For Senge, the goal was to help create learning organizations. While he does not focus on analytics, this book should particularly resonate with analytics professionals, since they now have the tools to take that learning to new and more useful levels than in the past.

I could easily expand this list, as could many others, but this “baker’s dozen” of books will provide a good, rounded education to start.

© 2017 Norman Jacknis, All Rights Reserved @NormanJacknis

Campaign Analytics: What Separates The Good From The Bad

Donald Trump, as a candidate for President last year, expressed great skepticism about the use of analytics in an election campaign. Hillary Clinton made a big deal about her campaign’s use of analytics. Before that, President Obama’s campaigns received great credit for their analytics.

If you compare these experiences, you can begin to understand what separates good from bad in campaign analytics.

Let’s start with the Clinton campaign, whose use of analytics was breathlessly reported, including this Politico story about “Hillary’s Nerd Squad” eighteen months before the election.

However, a newly released book, titled Shattered, provides a kind of autopsy of the campaign and its major weaknesses. A CBS News review of the book highlighted this weakness in particular:

“Campaign manager Robby Mook put a lot of faith in the campaign’s computer algorithm, Ada, which was supposed to give them a leg up in turning out likely voters. But the Clinton campaign’s use of the highly complex algorithm focused on ensuring voter turnout, rather than attracting voters from across party lines.

“According to the book, Mook was insistent that the software would be revered as the campaign’s secret weapon once Clinton won the White House. With his commitment to Ada and the provided data analytics, Mook often butted heads with Democratic Party officials, who were concerned about the lack of attention in persuading undecided voters in Clinton’s favor.  Those Democratic officials, as it turned out, had a point.”

Of course, this had become part of the conventional wisdom since the day after the election. For example, on November 9, 2016, the Washington Post had a story “Clinton’s data-driven campaign relied heavily on an algorithm named Ada. What didn’t she see?”:

“Ada is a complex computer algorithm that the campaign was prepared to publicly unveil after the election as its invisible guiding hand … the algorithm was said to play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads … The campaign’s deployment of other resources — including county-level campaign offices and the staging of high-profile concerts with stars like Jay Z and Beyoncé — was largely dependent on Ada’s work, as well.”

But the story had another point about Ada:

“Like the candidate herself, she had a penchant for secrecy and a private server … the particulars of Ada’s work were kept under tight wraps, according to aides. The algorithm operated on a separate computer server than the rest of the Clinton operation as a security precaution, and only a few senior aides were able to access it.”

While the algorithm clearly wasn’t the only or perhaps even the most important reason for the failure of the campaign, that last piece illustrates why the Clinton use of analytics wasn’t more successful. Like many other failed analytics initiatives, it operated in an atmosphere of secretiveness and arrogance – “we’re the smartest guys around here, so let us do our thing.”

The successful uses of analytics, in campaigns or elsewhere, try to use (and then test) the best insights of the people with long experience in a field. Those experienced people can even help the analysts look at the right questions – in the case of the Clinton campaign, converting undecided voters.

The best analytics efforts are a two-way conversation that helps the “experts” to understand better which of their beliefs are still correct and helps the analytics staff to understand where they should be looking for predictive factors.

Again, analytics wasn’t the only factor that led to President Obama’s winning elections in 2008 and 2012, but the Obama campaign’s use of analytics felt different from Clinton’s. One article, “Inside the Obama Campaign’s Big Data Analytics Culture”, described “an archetypical story of an analytics-driven organization that aligned people, business processes and technologies around a clear mission” instead of focusing on the secret sauce and a top-down, often strife-filled, environment.

InfoWorld’s story about the 2012 campaign described a widely dispersed use of analytics –

“Of the 100 analytics staffers, 50 worked in a dedicated analytics department, 20 analysts were spread throughout the campaign’s various headquarters, and another 30 were in the field interpreting the data.” So, there was plenty of opportunity for analytics staffers to learn from others in the campaign.

And the organizational culture was molded to make this successful as well –

“barriers between disparate data sets – as well as between analysts – were lowered, so everyone could work together effectively. In a nutshell, the campaign sought a friction-free analytic environment.”

Obama’s successful use of analytics was a wake-up call to many politicians, Hillary Clinton included. But did they learn all the lessons of his success? Apparently not.

Coming back to the 2016 election, there is, finally, the Trump campaign. Despite the candidate’s statements, his campaign also used analytics, employing Cambridge Analytica, the British firm that helped the Brexit forces win in the UK. Thus, 2016 wasn’t as much of a test of analytics vs. no analytics as has sometimes been reported.

But if an article published two weeks ago in the British newspaper the Guardian, “The great British Brexit robbery: how our democracy was hijacked”, is even close to the mark, there is a different question about the good and bad uses of analytics in both the Trump and Brexit campaigns. Scary in parts and perhaps too jaundiced in others, the story raises questions for the future – as analytic tools get better, will the people using those tools realize that they face not only technical challenges but ethical ones as well?

The good and bad use of analytics will not just be a question of whether the results are executed well or poorly – whether the necessary changes and learning take place among all members of an organization. It will also be a question of whether analytics tools are being used in ways that are good or bad in an ethical sense.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

Analytics And Leading Change

Next week, I’m teaching the summer semester version of my Columbia University course called Analytics and Leading Change for the Master’s Degree program in Applied Analytics. While there are elective courses on change management in business and public administration schools, this combination of analytics and change is unusual. The course is also a requirement. Naturally, I’ve been asked: why?

The general answer is that analytics and change are intertwined.

Successfully introducing analytics into an organization shares all the difficulties of introducing any new technology, but more so. The impact of analytics – if successful – requires change, often deep change that can challenge the way that executives have long thought about the effect of what they were doing.

As a result, often the reaction to new analytics insights can be a kneejerk rejection, as one Forbes columnist asked last year in an article titled “Why Do We Frequently Question Data But Not Assumptions?”.

A good, if early, example of the impact of what we now call “big data” goes back twenty-five years, to the days before downloaded music.

Back then, the top 40 selections of music on the “air” were based on what radio DJs (or program directors) chose and, beyond that, the best information about market trends came from surveys of ad hoc observations by record store clerks. Those choices, too, emphasized new mainstream rock and pop music.

In 1991, in one of the earliest big data efforts in retail, a new company, SoundScan, came along and collected data from automated sales registers in music stores. What they found went against the view of the world that was then widely accepted: old music, like Frank Sinatra, and genres other than rock were very popular.

Music industry executives then had to change the way they thought about the market and many of them didn’t. This would happen again when streaming music came along. (For more on this bit of big data history, see https://en.wikipedia.org/wiki/Nielsen_SoundScan and http://articles.latimes.com/1991-12-08/entertainment/ca-85_1_sales-figures .)

A somewhat more recent example is the way that insights from analytics have challenged some of the traditional assumptions about motivation that are held by many executives and many staff in corporate human resource departments. Tom Davenport’s Harvard Business Review article in 2010 on “Competing on Talent Analytics” provides a good review of what can be learned, if executives are willing to learn from analytics.

The first, larger lesson is: If the leaders of analytics initiatives don’t understand the nature of the changes they are asking of their colleagues, then those efforts will end up being nice research reports and the wonderful insights generated by the analysts will disappear without impact or benefit to their organizations.

The other side of the coin and the second reason that analytics and change leadership are intertwined is a more positive one. Analytics leaders have a potential advantage over other “change agents” in understanding how to change an organization. They can use analytics tools to understand what they’re dealing with and thus increase the likelihood that the change will stick.

For instance, with the rise of social networks on the internet, network analytics methods have developed to understand how the individuals in a large group of people influence each other. Isn’t that also an issue in understanding the informal, perhaps the real, structure of an organization which the traditional organization charts don’t illuminate?

In another, if imperfect, example, the Netherlands office of Deloitte created a Change Adoption Profiler to help leaders figure out the different reactions of people to proposed changes.

Unfortunately, leaders of analytics in many organizations too infrequently use their own tools to learn what they need to do and how well they are doing it. Pick your motto about this – “eat your own cooking (or dog food)” or “doctor, heal thyself” or whatever – but you get the point.

© 2017 Norman Jacknis, All Rights Reserved. @NormanJacknis

What Do We Know About Change?

[This is a follow-up to my post last week.]

Even if we understand that what seems like resistance to change is more nuanced and complicated, many of us are explicitly or implicitly being asked to lead changes in our places of work. In that sense, we are “change agents”, to use a well-established phrase.

Consider the number of times each day, both on the job and outside, that we hear the word “change” and the necessity for leaders to help their organizations change in the face of all sorts of challenges.

There has been a slew of popular business books providing guidance to would-be change agents. Several consultants and business gurus have developed their own model of the change process, usually outlining some necessary progression and steps that they have observed will lead to success.

Curiously, the same few anecdotes seem to pop up in a number of these, like burning platforms or the boardroom display of gloves.

While these authors mean well and have tried to be good reporters of what they have observed, change agents often find that, in practice, the suggestions in these books and articles are at best a starting point and don’t quite match the situation they face.

Part of the problem is that there has been too little rigorous behavioral work about how and why people change. (In fairness, some authors, like the Heath brothers, at least try to apply behavioral concepts in their recommendations on how to lead change.)

And on a practical level, many change agents find it difficult to figure out the tactics they need to use to improve the chances that the desired change will occur. In this post, I’m suggesting that we first need to understand the unique and sometimes unexpected ways that the human brain processes information and thus how we need to communicate.

(These are often called cognitive biases, but that is a pejorative phrase that might put you in the wrong mindset. It’s not a good idea to start an effort to convince people to join you in changing an organization by assuming that they are somehow irrational.)

As just one example, some of the most interesting work along these lines was that done by the Nobel-prize winning psychologist Daniel Kahneman and his colleague Amos Tversky.

They found in their research that people exaggerate potential losses beyond reality – often incorrectly guessing that what they control (like driving a car) is less risky than what they don’t control (being a passenger in an airplane).

Moreover, a person’s sense of loss is greater if what might be lost has been owned or used for a long time (aka entitlements). Regret and other emotions can also enhance this sense of loss.

The estimate of losses and gains is also affected by a person’s reference point, which can be shifted by even random effects. The classic example of the impact of a reference point is how people react differently to being told either that they have a 10% chance of dying or a 90% chance of living through a major disease. The probabilities are the same, of course.

In general, they found that there is an aversion to losses which outweighs possible gains, even if the gains might be worth more.  

This makes it sound like change is very difficult, since many people often perceive proposed changes as having big risks.

But there is more to the story. Kahneman found that there is no across-the-board aversion to change, or even merely to risk. Indeed, people might make a riskier choice when all the options are bad.

As one summary states:

“When faced with a risky prospect, people will be: (1) risk-seeking over low-probability gains, (2) risk-averse over high-probability gains, (3) risk-averse over low-probability losses, and (4) risk-seeking over high-probability losses.”
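
To make that fourfold pattern concrete, here is a small illustrative calculation, assuming Python, using the value and probability-weighting functions from Tversky and Kahneman’s 1992 version of prospect theory with their commonly cited parameter estimates. It is a sketch for intuition, not part of the research summarized above.

```python
# Sketch of the "fourfold pattern" above, using prospect theory's value and
# probability-weighting functions with the Tversky & Kahneman (1992) estimates:
# curvature alpha = 0.88, loss aversion lambda = 2.25, weighting gamma = 0.61 (gains), 0.69 (losses).

def value(x, alpha=0.88, lam=2.25):
    """Subjective value: gains are dampened, losses loom about 2.25x larger."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, g):
    """Decision weight: small probabilities are overweighted, large ones underweighted."""
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def prefers_gamble(p, x):
    """Does a gamble (outcome x with probability p, else nothing) feel better
    than simply receiving its expected value p * x for sure?"""
    g = 0.61 if x >= 0 else 0.69
    return weight(p, g) * value(x) > value(p * x)

for p, x in [(0.05, 100), (0.95, 100), (0.05, -100), (0.95, -100)]:
    label = "risk-seeking" if prefers_gamble(p, x) else "risk-averse"
    print(f"{p:.0%} chance of {x:+d}: {label}")
# Prints: 5% of +100 risk-seeking, 95% of +100 risk-averse,
#         5% of -100 risk-averse, 95% of -100 risk-seeking.
```

Running it reproduces the pattern in the quote: the lottery-ticket case (a small chance of a gain) and the desperate case (a large chance of a loss) come out risk-seeking, while the other two come out risk-averse.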

In just this brief summary, there is some obvious guidance for change agents:

  • Reduce people’s estimate of their potential loss. For example, the new system won’t cost 25% more than the old one, but it will just be an extra nickel each time it is used.
  • Increase the perceived value of the change and/or the perceived likelihood of success – positive vivid images help to overcome lower probability estimates of the chances of success; negative vivid images help to magnify the probability of loss.
  • Help people redefine the perception of loss by shifting their frame of reference, which determines their starting point.
  • Reduce the overall size of the risks, which means it is best to introduce small innovations, piled on each other. Behavioral scientists have also observed that the irrational fear of loss versus the possibility of benefit is reduced when a person has had experience with the trade-off. A series of small innovations will help people gain that experience, and you will also find out which of your great ideas really are good. Since any innovation is an experiment, there’s no guarantee of success. Some will fail, but if the ideas are good and competent people are implementing the changes, you’ll succeed often enough that the overall impact is positive.
  • Work to convince people that their certainty of loss is only a possibility. People react differently to being told that something is a sure thing than to being told it has a 90% probability.
  • Since risk taking is no longer avoided when all the choices are bad, show that the certain loss involved in changing is smaller than the bigger possible loss of not changing.

I’ve just touched the surface here. There are other findings of behavioral and social science research that can also enable change agents to get a firmer grasp on the reality of the situation facing them and suggest things they might do to become more successful.

© 2016 Norman Jacknis, All Rights Reserved

Resistance To Change?!

As I’ve been going through articles and books for the course on Analytics and Leading Change that I’ll be teaching soon at Columbia University, I frequently read how leaders and other change agents need to overcome resistance to change. Whenever we aim to get things done and they don’t happen immediately, this is often the first explanation for the difficulty.

Resistance to change is a frequent complaint of anyone introducing a new technology or especially something as fundamental as the use of analytics in an organization.

The conflict that it implies can be compelling. You could make a best seller or popular movie out of that conflict, like that great story about baseball, analytics, and change, “Moneyball”.

There have been cartoons and skits about resistance to change — https://www.youtube.com/watch?v=XTLyXamRvk4

This is an idea that goes very far back. Even Machiavelli, describing Renaissance politics, is often quoted on the subject:

“There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries … and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it.”

It’s all awful if you’re the one trying to introduce the change and many have written about the problems they saw.

But is that word “resistance” misleading change agents? Going beyond the perspectives and anecdotes of change agents and business consultants, there has been, over the last two decades, some solid academic research on this subject. And, as often happens when we learn more, it turns out that some important subtleties are lost in that phrase “resistance to change”.

In perhaps a refutation of, or an elaboration on, Machiavelli’s famous quote, Dent and Goldberg report in “Challenging ‘Resistance to Change’” that:

“People do not resist change, per se.  People may resist loss of status, loss of pay, or loss of comfort, but these are not the same as resisting change … Employees may resist the unknown, being dictated to, or management ideas that do not seem feasible from the employees’ standpoint. However, in our research, we have found few or no instances of employees resisting change … The belief that people do resist change causes all kinds of unproductive actions within organizations.”

Is what looks like resistance something more or something else?

More recently, University of Montreal Professor Céline Bareil wrote about the “Two Paradigms about Resistance to Change” in which she compared “the enemy of change” (traditional paradigm) to “a resource” (modern paradigm). She noted that:

“Instead of being interpreted as a threat, and the enemy of change, resistance to change can also be considered as a resource, and even a type of commitment on the part of change recipients.”

Making this shift in perspective is likely harder for change agents than the changes they expect of others. The three authors of “Resistance to Change: The Rest of the Story” describe the various ways that change agents themselves have biased perceptions. They say that blaming difficulties on resistance to change may be a self-serving and “potentially self-fulfilling label, given by change agents attempting to make sense of change recipients’ reactions to change initiatives, rather than a literal description of an objective reality.”

Indeed, they observe that the actions of change agents may not be merely unsuccessful, but counter-productive.

“Change agents may contribute to the occurrence of the very reactions they label as resistance through their own actions and inactions, such as communications breakdowns, the breach of agreements and failure to restore trust” as well as not listening to what is being said and learning from it.

There is, of course, a lot more to this story, which you can start to get into by looking at some of the links in this post. But hopefully this post has offered enough to encourage those of us who are leading change to take a step back, look at the situation differently and thus be able to succeed.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/152378476173/resistance-to-change]