By now, lots of people have heard about Big Data, but the message often comes across as just another corporate marketing phrase, one with multiple meanings. That may be because much of what people hear comes from corporate executives who eagerly anticipate big new revenues from the Big Data world.
However, I suspect that most people don’t know what Big Data experts are talking about, what they’re doing, what they believe about the world, or the issues arising from their work.
Although it was originally published in 2013, the book “Big Data: A Revolution That Will Transform How We Live, Work, and Think” by Viktor Mayer-Schönberger and Kenneth Cukier is perhaps the best recent in-depth description of the world of Big Data.
For people like me, with an insatiable curiosity and good analytical skills, having access to lots of data is a treat, so I’m very sympathetic to the movement. But as with all such movements, the early advocates can get carried away with their enthusiasm. After all, having all that data can make you feel so powerful – reminiscent of some bad sci-fi movies I recall.
Here then is a summary of some key elements of Big Data thinking – and some limits to that thinking.
Causation and Correlation
When presented with the results of some analysis, we have often been reminded that “correlation is not causation”, implying that we know less than we think if all we have is a correlation.
For many Big Data gurus, correlation is better than causation – or at least finding correlations is quicker and easier than testing a causal model, so it’s not worth putting the effort into building that model of the world. They say that causal models may be an outmoded idea – or, as Mayer-Schönberger and Cukier put it, “God is dead”. They add that “Knowing what, rather than why, is good enough” – good enough, at least, to try to predict things.
This isn’t the place for a graduate school seminar on the philosophy of science, but there are strong arguments that models are still needed, whether we live in a world of Big Data or not.
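To make the point concrete, here is a minimal sketch in Python (the variables and numbers are invented purely for illustration): two quantities can be strongly correlated, and therefore useful for prediction, even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder: hot days drive both quantities below.
temperature = rng.normal(25, 5, size=n)

# Neither variable causes the other; both respond to temperature.
ice_cream_sales = 2.0 * temperature + rng.normal(0, 2, size=n)
swimming_accidents = 0.5 * temperature + rng.normal(0, 2, size=n)

# A "knowing what, not why" analysis finds a strong correlation...
r = np.corrcoef(ice_cream_sales, swimming_accidents)[0, 1]
print(f"correlation: {r:.2f}")  # roughly 0.77 with these made-up numbers

# ...which is genuinely useful for prediction. But without a causal
# model, you might conclude that banning ice cream would reduce
# accidents -- exactly the mistake that "why" is needed to avoid.
```

Prediction and intervention are different tasks, and only the second requires knowing why.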
All The Data, Not Just Samples
Much of traditional statistics dealt with the issue of how to draw conclusions about the whole world when you could only afford to take a sample. Big Data experts say that this focus reflects an outmoded era of limited data – an example being a 1975 textbook titled “Data Reduction: Analysing and Interpreting Statistical Data”. While Big Data provides lots more opportunity for analysis, it doesn’t overcome all the weaknesses that have been associated with statistical analysis and sampling. There can still be measurement error. Big Data advocates say the sheer volume of data reduces the necessity of being careful about measurement error, but can’t there still be systematic error?
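A quick simulation (a sketch in Python, with invented numbers) makes the point: random measurement error does average away as the data grows, but a systematic error, such as every sensor reading three units too high, survives any volume of data.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0

for n in (1_000, 1_000_000):
    # Random measurement error: averages away as n grows.
    random_err = true_value + rng.normal(0, 10, size=n)
    # Systematic error: every reading is 3 units too high.
    biased = true_value + 3.0 + rng.normal(0, 10, size=n)

    print(f"n={n:>9,}: "
          f"random-error estimate off by {abs(random_err.mean() - true_value):.3f}, "
          f"biased estimate off by {abs(biased.mean() - true_value):.3f}")

# Approximate output: the random error shrinks from ~0.3 to ~0.01,
# while the bias stays stubbornly near 3.0 no matter how big n gets.
```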
Big Data gurus say that they include all the data, not just a sample. But that is often an overstatement. For example, you can gather all the internal records a company has about the behavior and breakdowns of the millions of devices it is trying to keep track of – and still not have collected all the relevant data. It may also be a mistake to assume that what is observed about even all people today will necessarily be the case in the future, since even the biggest data set today doesn’t contain tomorrow’s data.
More Perfect Predictions
The Big Data proposition is that massive volumes of data allow for almost perfect predictions and fine-grained analysis, and can almost automatically provide new insights. While these fine-grained predictions may indicate connections between variables that we hadn’t thought of, some of those connections may be spurious. This is an extension of the issue of correlation versus causation, because the number of spurious correlations is likely to increase as the data set – and especially the number of variables it contains – grows.
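A small simulation (again a sketch, with made-up parameters) shows the effect: even among variables that are pure random noise, the strongest correlation you can find by brute-force search grows steadily as you add more variables to compare.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_obs = 100  # observations per variable

for n_vars in (10, 100, 500):
    # Completely independent random variables: no real relationships.
    data = rng.normal(size=(n_vars, n_obs))
    corr = np.corrcoef(data)  # rows are treated as variables
    strongest = max(abs(corr[i, j])
                    for i, j in combinations(range(n_vars), 2))
    print(f"{n_vars:>4} variables: strongest 'connection' found = {strongest:.2f}")

# The best spurious correlation climbs (roughly 0.3, 0.4, 0.5 here)
# even though, by construction, nothing is related to anything.
```

None of these variables is related to any other; the impressive-looking connections are artifacts of searching a large space.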
If Netflix recommends movies you don’t like, this isn’t a big problem – you just ignore them. But in the public sector, when this approach to predicting behavior leads to something like racial profiling, it raises legal issues.
It has actually been hard to find models that achieve anything close to perfect predictions – even in the well-known stories about how Farecast predicted the best time to buy air travel tickets or how Google searches predicted flu outbreaks. For a more general review of these imperfections, read Kaiser Fung’s “Why Websites Still Can’t Predict Exactly What You Want”, published in Harvard Business Review last year.
Giving It All Away
Much of the Big Data movement depends upon the use of data from millions – billions? – of people who are making it available unknowingly, unintentionally, or at least without much consideration.
Slowly but surely, though, a public policy issue is developing around who has rights to that data and who owns it. This past November’s Harvard Business Review – hardly a radical fringe journal – had an article noting the problems that will arise if companies continue to assume that they own the information about consumers’ lives. In that article, MIT Professor Alex Pentland proposes a “New Deal on Data”.
So Where Does This Leave Us?
Are we much better off, and learning much more, with the availability of Big Data instead of mere samples of data, and with the related ability of inexpensive computers and software to handle all this data? Absolutely, yes!
But is Big Data so perfect, as some of its big egos claim, that we can withhold our skepticism about its results? Has Big Data become the omniscient god? Not quite yet.
© 2015 Norman Jacknis
[http://njacknis.tumblr.com/post/110070952204/big-data-big-egos]