Number Sense & Nonsense

As you will know from the news media, business executives and the techno-sphere, we are in the age of big data and analytics. (Disclosure: I too am part of this trend with my forthcoming course on leading change in the Applied Analytics Master’s program at Columbia University.)

For those of us who have been practitioners of analytics, this attention is long overdue. But there is a certain naiveté in the breathless stories we have all read and in many of the uses – really misuses – of analytics that we see now.

Partly to provide a more mature understanding of analytics, Kaiser Fung,
the director of the Columbia program, has written an insightful book
titled “NumberSense”. 


Filled with compelling examples, the book is a general call for more sophistication in this age of big data. I like to think of it as a warning that a superficial look at the numbers you first see will not necessarily give you the most accurate picture, any more than the outside of an unpeeled onion tells you as much as what you can see once it is cut.


Continuing this theme in his recent book, “The End of Average”, Todd Rose has popularized the story of the Air Force’s misuse of averages and rankings after World War II. He describes how the Air Force was faced with an inexplicable series of accidents despite nothing being wrong with the equipment or, seemingly, with the training of the pilots. The Air Force had even gone to the effort of designing the cockpits to fit the exact dimensions of the average pilot!


As Rose reports in a recent article:

“In the early 1950s, the U.S. air force measured more than 4,000 pilots on 140 dimensions of size, in order to tailor cockpit design to the ‘average’ pilot … [But] Out of 4,063 pilots, not a single airman fit within the average range on all 10 dimensions. One pilot might have a longer-than-average arm length, but a shorter-than-average leg length. Another pilot might have a big chest but small hips. Even more astonishing, Daniels discovered that if you picked out just three of the ten dimensions of size — say, neck circumference, thigh circumference and wrist circumference — less than 3.5 per cent of pilots would be average sized on all three dimensions. Daniels’s findings were clear and incontrovertible. There was no such thing as an average pilot. If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.”
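
To see why Daniels’s result is less surprising than it sounds, here is a minimal simulation sketch (my own illustration, not from Rose’s article). It assumes ten independent, roughly normally distributed body dimensions and defines “average” as the middle 30% of each one – both assumptions are mine, chosen only to show the arithmetic:

```python
# Illustrative simulation of the "no average pilot" result.
# Assumptions (mine, not Daniels's): 10 independent, normally distributed
# body dimensions, with "average" defined as the middle 30% of each one.
import numpy as np

rng = np.random.default_rng(0)
n_pilots, n_dims = 4063, 10

# Standardized measurements: one row per pilot, one column per dimension.
z = rng.standard_normal((n_pilots, n_dims))

# The middle 30% of a standard normal distribution is roughly |z| < 0.385.
in_range = np.abs(z) < 0.385

print("Average on any single dimension: ~30% of pilots")
print("Average on all 10 dimensions:", in_range.all(axis=1).sum(), "pilots")
print("Average on 3 chosen dimensions:",
      round(in_range[:, :3].all(axis=1).mean() * 100, 1), "% of pilots")
```

Under these assumptions, roughly 0.3³ ≈ 2.7% of pilots are “average” on any three dimensions and essentially none are on all ten – the same multiplicative shrinkage Daniels observed.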

Rose criticizes the very popular one-dimensional rankings and calls for an understanding of the full, multi-dimensional complexity of human behavior and performance. As a Harvard professor of education, he puts special emphasis on the misleading rankings that every student faces.

He shows three ways that these averages can mislead by not recognizing that:

  1. The one number used to rank someone actually represents multiple dimensions of skills, personality and the like. Two people can have the same score, but actually have a very different set of attributes.
  2. Behavior and skill change depending upon context.
  3. The path to even the same endpoint can be different for two people. While they may look the same when they get there, watching their progress shows a different picture. He provides, as an example, the various patterns of infants learning to walk. Eventually, they all do learn, but many babies do not follow any standard pattern of doing so.

It is not too difficult to take this argument back to Michael Lewis’s portrayal in Moneyball of the way that the Oakland A’s put together a successful roster by not selecting those who looked like star baseball athletes – a uni-dimensional, if very subjective, ranking.

Let’s hope that as big data analytics matures, there are more instances of Moneyball sophistication and fewer of the academic rankings that Rose criticizes.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/143113012009/number-sense-nonsense]

Analytics And Organizational Change

The accumulation of data is increasing fast – from wearables, the widespread deployment of sensors in physical locations and the ever-increasing use of the Internet by people.

And someone somehow has to figure it all out. As a result, data scientists are in demand and analytics is a hot new field of study. On top of long-standing statistical methods, there has been impressive progress recently in machine learning, artificial intelligence and new computer system architectures.

Yet, the use of analytics itself has not had as great an impact on many organizations as the data scientists have hoped.  Some of the failures of analytics were really failures of implementation. 

Perhaps the most public of these was the famous Netflix million-dollar prize for a new recommendation engine. From a purely technical viewpoint, the winning team did exactly what was asked for – creating a significantly better engine than the one Netflix had been using. Nevertheless, Netflix ended up not using their work. That’s an issue of implementation and of integrating the product of analytics into an organization.

Being able to predict behavior or even generate new insights from all this data is one thing.  As with Netflix, having people and organizations actually use that knowledge is another.  Like many other new technologies, adoption is as much a question of managing change as it is developing the technology itself.

This surely bothers some data scientists. After all, they have a better mousetrap – why aren’t people using it? Being able to think quantitatively, they can prepare quite convincing business cases with impressive ROI statistics, and yet even that isn’t enough to get executives to budge. But changing an organization isn’t simple, no matter how good your arguments are.


Despite this background, there has been very little overlap between the courses that prepare data scientists and the courses that prepare change agents in organizations. 

Later this year, I’ll be doing something to help align these two fields to improve the success of both.  I’ll be teaching an online course on analytics and leading change.  It will be part of Columbia University’s new executive graduate program in Applied Analytics.


We’ll be reviewing what is known about successfully introducing changes into an organization from the classics on the subject that were written as much as twenty years ago to more recent research.  The course will, of course, help its students understand how to get an analytics initiative started.  More important, it will focus on how to sustain, over the long run, both analytics and the changes it informs.

Thinking about the long run, there are three facets of the relationship between analytics and organizational change.

  • The use of analytics as a part of everyday decision making and the rules of operation in a business – this is the obvious first thing everyone thinks of.
  • The use of analytics to help better implement the changes that its insights imply – a kind of meta-analysis.
  • The continuing interaction between analytics and change to finally achieve the long-desired goal of an organization that learns how to continually optimize itself – this is something of great strategic value to any business.

As the course develops, I’ll be posting more about each topic.

© 2016 Norman Jacknis, All Rights Reserved
[http://njacknis.tumblr.com/post/136679679430/analytics-and-organizational-change]

Is Open Data Good Enough?

Last week, on April 16th, the Knowledge Society Forum of the Eurocities group held its Beyond Data event in Eindhoven, the Netherlands. The KSF consists of more than 50 European policy makers focused on Open Data. They were joined by many other open data experts and advocates.

I led off with the keynote presentation.  The theme was simple: we need to go beyond merely opening (i.e., releasing) public data and there are a variety of new technologies that will make the Open Data movement more useful to the general public.

Since I was speaking in my role as Senior Fellow of the Intelligent Community Forum (ICF), I drew a parallel between that work and the current status of Open Data. I pointed out that ICF has emphasized that an “intelligent city” is much more than a “smart city” with technology controlling its infrastructure. What makes a community intelligent is whether and how it uses that technology foundation to improve the experience of living there.

Similarly, to make the open data movement relevant to citizens, we need to go beyond merely releasing public data. Even hackathons and the encouragement of app developers have their limits, in part because developers in private companies will try to find some way to monetize their work, but not all useful public problems have profit potential.

Creating this value means focusing on data of importance to people (not just what’s easy to deliver), undertaking data analytics, following up with actions that have real impact on policies and programs and, especially, engaging citizens in every step of the open data initiative.


I pointed out how future technology trends will improve every city’s use of its data in three ways:

1. Data collection, integration and quality

2. Visualization, anywhere it is needed

3. Analytics of the data to improve public policies and programs

For example, social data (like sentiment analysis) and data from the Internet of Things can be combined with data already collected by the government to paint a much richer picture of what is going on in a city. In addition to drones, iBeacons and visual analyzers (like Placemeter), there are now also inexpensive, often open-source, sensor devices that the public can purchase and use for more data collection.

Of course, all this data needs a different kind of management than businesses have used in the past. So I pointed out NoSQL database management systems and Dat for real-time data flow. Some of the most interesting analytics is based on merging data from multiple sources, which poses additional difficulties that are beginning to be overcome through linked data and GeoSPARQL, the new geospatial extension of the semantic web.
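
As a small illustration of what that multi-source merging looks like in practice (my own sketch – the file names and columns are hypothetical, not from any particular city’s portal), a common first step is simply joining open datasets on shared keys such as neighborhood and date, which is exactly where inconsistent identifiers and formats tend to surface:

```python
# Hedged sketch: joining two hypothetical open-data files on shared keys.
# File names and column names are illustrative, not from a real data portal.
import pandas as pd

# City service requests (a 311-style extract) and air-quality sensor readings.
requests = pd.read_csv("service_requests.csv", parse_dates=["date"])
sensors = pd.read_csv("sensor_readings.csv", parse_dates=["date"])

# Normalize the join keys first; mismatched spellings and formats are the
# usual obstacle when combining datasets from different sources.
for df in (requests, sensors):
    df["neighborhood"] = df["neighborhood"].str.strip().str.lower()

# Daily request counts per neighborhood, joined to average sensor readings.
daily_requests = (requests.groupby(["neighborhood", "date"])
                  .size().rename("request_count").reset_index())
daily_air = (sensors.groupby(["neighborhood", "date"])["pm25"]
             .mean().rename("avg_pm25").reset_index())

combined = daily_requests.merge(daily_air, on=["neighborhood", "date"], how="inner")
print(combined.head())
```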

If this data – and the results of its analysis – are to be useful, especially in real time, then data visualization needs to be everywhere.   That includes using augmented reality and even projecting results on surfaces, much like TransitScreen does.

And if all this data is to be useful, it must be analyzed, so I discussed the key role of predictive analytics in going beyond merely releasing data. But I emphasized the way that residents of a city can help in this task and cited the many people already involved in Zooniverse. There are even tools to help people overcome their statistical immaturity, as you can see on Public Health Ontario’s website.

Finally, the data can also be used by people to help envision – or re-envision – their cities through tools like Betaville.

Public officials have to go beyond merely congratulating themselves on being transparent by releasing data. They need to take advantage of these technological developments and shift their focus to making the data useful to residents – all in service of the overriding goal of improving residents’ quality of life.


© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/117084058588/is-open-data-good-enough]

Big Data, Big Egos?

By now, lots of people have heard about Big Data, but the message often comes across as another corporate marketing phrase, and one with multiple meanings. That may be because people also hear from corporate executives who eagerly anticipate big new revenues from the Big Data world.

However, I suspect that most people don’t know what Big Data experts are talking about, what they’re doing, what they believe about the world, and the issues arising from their work.

Although it was originally published in 2013, the book “Big Data: A Revolution That Will Transform How We Live, Work, and Think” by Viktor Mayer-Schönberger and Kenneth Cukier is perhaps the best recent in-depth description of the world of Big Data.

For people like me, with an insatiable curiosity and good analytical skills, having access to lots of data is a treat. So I’m very sympathetic to the movement. But like all such movements, the early advocates can get carried away with their enthusiasm. After all, it makes you feel so powerful – I’m reminded of some bad sci-fi movies.

Here then is a summary of some key elements of Big Data thinking – and some limits to that thinking.

Causation and Correlation

When presented with the result of some analysis, we’ve often been reminded that “correlation is not causation”, implying we know less than we think if all we have is a correlation.

For many Big Data gurus, correlation is better than causation – or at least finding correlations is quicker and easier than testing a causal model, so it’s not worth putting the effort into building that model of the world. They say that causal models may be an outmoded idea or, as Mayer-Schönberger and Cukier put it, “God is dead”. They add that “Knowing what, rather than why, is good enough” – good enough, at least, to try to predict things.

This isn’t the place for a graduate school seminar on the philosophy of science, but there are strong arguments that models are still needed whether we live in a world of big data or not.

All The Data, Not Just Samples

Much of traditional statistics dealt with the issue of how to draw conclusions about the whole world when you could only afford to take a sample. Big Data experts say that traditional statistics’ focus is a reflection of an outmoded era of limited data.

Indeed, an example is a 1975 textbook that was titled “Data Reduction: Analysing and Interpreting Statistical Data”. While Big Data provides lots more opportunity for analysis, it doesn’t overcome all the weaknesses that have been associated with statistical analysis and sampling. There can still be measurement error. Big Data advocates say the sheer volume of data reduces the necessity of being careful about measurement error, but can’t there still be systematic error?
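
A minimal sketch of that point (my own illustration, with made-up numbers): if every observation carries the same systematic offset, collecting vastly more data shrinks the random noise but leaves the bias untouched.

```python
# Illustration: random error shrinks with sample size, systematic error does not.
# The true value, noise level and bias below are made-up numbers.
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0
bias = 2.5          # systematic measurement error on every observation
noise_sd = 10.0     # random measurement error

for n in (100, 10_000, 1_000_000):
    measurements = true_value + bias + rng.normal(0, noise_sd, size=n)
    estimate = measurements.mean()
    print(f"n={n:>9,}  estimate={estimate:7.2f}  error={estimate - true_value:+.2f}")

# The estimate's error converges to the bias (+2.5) no matter how large n gets.
```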

Big Data gurus say that they include all the data, not just a sample. But, in a way, that’s clearly an overstatement. For example, you can gather all the internal records a company has about the behavior and breakdowns of even millions of devices it is trying to keep track of. But, in fact, you may not have collected all the relevant data. It may also be a mistake to assume that what is observed about even all people today will necessarily be the case in the future – since even the biggest data set today isn’t using tomorrow’s data.

More Perfect Predictions

The Big Data proposition is that massive volumes of data allow for almost perfect predictions and fine-grained analysis, and can almost automatically provide new insights. While these fine-grained predictions may indicate connections between variables/factors that we hadn’t thought of, some of those connections may be spurious. This is an extension of the issue of correlation versus causation, because there is likely an increase in spurious correlations as the size of the data set increases.
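
A minimal sketch of why spurious correlations multiply (my own illustration, not from the book): generate many mutually unrelated random variables and count how often a pair looks meaningfully correlated purely by chance.

```python
# Illustration: with enough unrelated variables, strong-looking correlations
# appear by chance alone. All of the data here is pure noise.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_vars = 200, 1000

data = rng.standard_normal((n_obs, n_vars))   # 1,000 independent variables
corr = np.corrcoef(data, rowvar=False)        # every true correlation is zero

# Look only at distinct pairs (upper triangle, excluding the diagonal).
pairs = corr[np.triu_indices(n_vars, k=1)]
print("variable pairs examined:", pairs.size)
print("pairs with |r| > 0.2 by chance:", int((np.abs(pairs) > 0.2).sum()))
print("largest chance correlation:", round(float(np.abs(pairs).max()), 3))
```

The more variables a data set contains, the more pairs there are to compare, so the number of impressive-looking but meaningless correlations grows with it.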

If Netflix recommends movies you don’t like, this isn’t a big problem. You just ignore them. In the public sector, when this approach to predicting behavior leads to something like racial profiling, it raises legal issues.

It has actually been hard to find models that achieve even close to perfect predictions – even in the well-known stories about how Farecast predicted the best time to buy air travel tickets or how Google searches predicted flu outbreaks. For a more general review of these imperfections, read Kaiser Fung’s “Why Websites Still Can’t Predict Exactly What You Want”, published in Harvard Business Review last year.

Giving It All Away

Much of the Big Data movement depends upon the use of data from millions – billions? – of people who are making it available unknowingly, unintentionally or at least without much consideration.

Slowly but surely, though, there is a developing public policy issue around who has rights to that data and who owns it. This past November’s Harvard Business Review – hardly a radical fringe journal – had an article that noted the problems if companies continue to assume that they own the information about consumers’ lives. In that article, MIT Professor Alex Pentland proposes a “New Deal on Data”.

So Where Does This Leave Us?

Are we much better off and learning much more with the availability of Big Data, instead of samples of data, and the related ability of inexpensive computers and software to handle this data? Absolutely, yes!

As some of the big egos of Big Data claim, is Big Data now so close to perfect that we can withhold our skepticism about its results? Has Big Data become the omniscient god? Not quite yet.


© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/110070952204/big-data-big-egos]