Prediction Markets To Predict Behavior?

In previous elections, prediction markets were relatively accurate and were touted as competitors to public opinion polling. So how did they do this time?

The Iowa Electronic Markets ran two prediction markets concerning the Presidential election. One was for the percentage of the popular two-party vote, which over the course of betting predicted Clinton 50% and Trump 48%. [These were individual contracts, which may be why the numbers don't add up to exactly 100.] According to the most recent actual vote count, the result of the two-party split was Clinton 51% and Trump 49%.

The other was for the winner of the popular vote, which over the course of betting was 97% for Clinton and 1% for Trump. This was correct, as current estimates show her getting more than two million more votes than him.

Alas, winning the popular vote wasn't enough this time, and this is where the prediction markets seem to have run into a problem.

In one of the few markets that focused on electoral votes, a German betting market ended up predicting Clinton 300, Trump 237. (The real result was almost the reverse.)

PredictWise’s betting market had Clinton “winning” with an 86% probability. (In their defense, of course, that also means a 14% chance for Trump, which has to happen some of the time if we’re talking probability, not certainty, after all.)


The folks at the Campaign Workshop observed:

“Polls aren’t perfect, but neither are political betting markets. Since these markets have gained credibility in predicting elections, they have started taking changes in public opinion polls less seriously. Overconfidence in betting markets makes the markets look misleadingly stable, and that false sense of stability makes it harder for them to predict events that shake up the status quo — such as the outcome of the Brexit referendum, or Trump’s success in the Republican presidential nomination process. As Rothschild himself has pointed out, ‘prediction markets have arrived at a paradoxical place: Their reliability, the very source of their prestige, is causing them to fail.’ ”

In looking at these markets and, more generally, crowd predictions of events, it’s worth going back to James Surowiecki’s book, “The Wisdom of Crowds”. He described both the rationale for prediction markets — which have been well publicized — and the characteristics of accurate prediction markets — which have received less emphasis.

“The premise is that under the right circumstances, the collective judgment of a large group of people will generally provide a better picture of what the future might look like than anything one expert or even a small group of experts will come up with. … [Prediction markets] work much like a futures market, in which the price of a contract reflects the collective day-to-day judgment either on a straight number—for instance, what level sales will reach over a certain period—or a probability—for example, the likelihood, measured as a percentage, that a product will hit a certain milestone by a certain date.”

“[F]or a crowd to be smart, it needs to satisfy certain criteria. It needs to be diverse, so that people are bringing different pieces of information to the table. It needs to be decentralized, so that no one at the top is dictating the crowd’s answer. It needs to summarize people’s opinions into one collective verdict. And the people in the crowd need to be independent, so that they pay attention mostly to their own information and don’t worry about what everyone around them thinks.”


Did the prediction markets in 2016 meet Surowiecki’s criteria? Not really.

One problem with betting markets is that they are not diverse, not representative of a broad spectrum of the population. As a CNBC report noted: “Another issue that may have contributed to the miss [on Brexit and now the US election] is the relatively similar mindset among bettors generally.”

Since all bettors can see what others are betting, it’s hard to argue that their judgments are independent. And while the decisions are, in a way, decentralized, to the extent that bettors mirror the current polling results and news reports from the national media, there is less decentralization than it appears.
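Surowiecki’s independence criterion can be made concrete with a toy simulation (all numbers here are hypothetical, chosen only for illustration): a crowd of independent guessers averages away its individual errors, while a crowd that shares a single common bias, say, everyone over-trusting the same national polls, keeps that bias no matter how large the crowd gets.

```python
import random

random.seed(42)                 # reproducible toy example
TRUE_SHARE = 0.51               # hypothetical "true" two-party vote share
N = 1000                        # number of bettors

# Independent crowd: each bettor's error is private noise, so it cancels out.
independent = [TRUE_SHARE + random.gauss(0, 0.05) for _ in range(N)]

# Herding crowd: everyone also absorbs the same shared bias
# (e.g. leaning on the same polls, assumed here to be off by 4 points).
SHARED_BIAS = 0.04
herding = [TRUE_SHARE + SHARED_BIAS + random.gauss(0, 0.05) for _ in range(N)]

err_independent = abs(sum(independent) / N - TRUE_SHARE)
err_herding = abs(sum(herding) / N - TRUE_SHARE)

print(f"independent crowd's error: {err_independent:.4f}")  # tiny: noise cancels
print(f"herding crowd's error:     {err_herding:.4f}")      # ~0.04: bias survives
```

Averaging a thousand independent guesses shrinks the random error by a factor of roughly thirty, but it does nothing to a bias everyone shares, which is the scenario the CNBC report describes.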

So do we just decide that the results of this year’s election call into question the value of crowd predictions? I think not.

But rather than focusing on predicting who wins the White House or the Super Bowl or the number of coins in a large bottle, there is another use of prediction markets for business and government leaders — testing the likelihood that people will respond positively to a new program or offer.

No matter how much market research (aka polling) is done, it is often difficult to assess how the public will react to a proposed program. I’m suggesting that prediction markets be used to estimate the reaction ahead of time, as long as they match Surowiecki’s criteria and don’t depend on money bets. At the very least, this would require a large and diverse set of people responding and keeping their judgments secret (until “voting” stops).

Over the last year or so, there have been several reports that premiums for Affordable Care Act (aka Obamacare) plans had to be raised because fewer young, healthy people enrolled than expected. Putting aside the merits of the policy and its goals, this is an ideal case where prediction markets could have helped assess the accuracy of an underlying assumption about the implementation of a very consequential piece of public policy.

Some experts are skeptical of prediction markets because the average person doesn’t have professional expertise. But this use of prediction markets draws on the perceptions of people about each other.

Implicit in the diversity of views that Surowiecki notes is that enough people need to care about the planned program or policy. The reason they care may be to win money, in some cases, but that’s not the only reason. They might care because the market deals with something that affects their lives.

And the nice thing about this approach is that if only a few people care about a planned program, that itself tells you something about the plan, or at least about whether the range of outcomes falls somewhere between a yawn and deep trouble.

It may well be that this more experimental use of prediction markets to predict behavior will illustrate their deeper value. What do you think?

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/153863533555/prediction-markets-to-predict-behavior]

What Do We Know About Change?

[This is a follow up to my post last week.]

Even if we understand that what seems like resistance to change is more nuanced and complicated, many of us are directly or implicitly being asked to lead change in our workplaces. In that sense, we are “change agents,” to use a well-established phrase.

Consider the number of times each day, both on the job and outside, that we hear the word “change” and the necessity for leaders to help their organizations change in the face of all sorts of challenges.

There has been a slew of popular business books providing guidance to would-be change agents. Several consultants and business gurus have developed their own models of the change process, usually outlining some necessary progression of steps that they have observed will lead to success.

Curiously, the same few anecdotes seem to pop up in a number of these, like burning platforms or the boardroom display of gloves.

While these authors mean well and have tried to be good reporters of what they have observed, change agents often find that, in practice, the suggestions in these books and articles are at best a starting point and don’t quite match the situation they face.

Part of the problem is that there has been too little rigorous behavioral work about how and why people change. (In fairness, some authors, like the Heath brothers, at least try to apply behavioral concepts in their recommendations on how to lead change.)

And on a practical level, many change agents find it difficult to figure out the tactics they need to use to improve the chances that the desired change will occur. In this post, I’m suggesting that we first need to understand the unique and sometimes unexpected ways that the human brain processes information and thus how we need to communicate.

(These are often called cognitive biases, but that is a pejorative phrase that might put you in the wrong mindset. It’s not a good idea to start an effort to convince people to join you in changing an organization by assuming that they are somehow irrational.)

As just one example, some of the most interesting work along these lines was done by the Nobel-prize-winning psychologist Daniel Kahneman and his colleague Amos Tversky.

They found in their research that people exaggerate potential losses beyond reality, often incorrectly guessing that what they control (like driving a car) is less risky than what they don’t control (being a passenger in an airplane).

Moreover, a person’s sense of loss is greater if what might be lost has been owned or used for a long time (aka entitlements). Regret and other emotions can also enhance this sense of loss.

The estimate of losses and gains is also affected by a person’s reference point, which can be shifted by even random effects. The classic example of the impact of a reference point is how people react differently to being told either that they have a 10% chance of dying or a 90% chance of living through a major disease. The probabilities are the same, of course.

In general, they found an aversion to losses that outweighs possible gains, even when the gains might be worth more.

This makes it sound like change is very difficult, since many people often perceive proposed changes as having big risks.

But there is more to the story. Kahneman found that there is no across-the-board aversion to change, or even merely to risk. Indeed, people may make a riskier choice when all available options are bad.

As one summary states:

“When faced with a risky prospect, people will be: (1) risk-seeking over low-probability gains, (2) risk-averse over high-probability gains, (3) risk-averse over low-probability losses, and (4) risk-seeking over high-probability losses.”
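This fourfold pattern falls out of two ingredients of Kahneman and Tversky’s prospect theory: a value function that is concave for gains but convex and steeper for losses, and a probability-weighting function that overweights small probabilities and underweights large ones. The sketch below uses the parameter estimates they published in 1992 (alpha = 0.88, lambda = 2.25, gamma = 0.61, delta = 0.69); the $100 stakes are my own illustrative choice.

```python
# Prospect-theory sketch of the "fourfold pattern" quoted above.
# Parameters are the Kahneman-Tversky (1992) published estimates.
ALPHA, LAMBDA = 0.88, 2.25   # value-function curvature, loss aversion
GAMMA, DELTA = 0.61, 0.69    # probability weighting (gains, losses)

def value(x):
    """Subjective value: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, g):
    """Inverse-S weighting: overweights small p, underweights large p."""
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def prefers_gamble(p, x):
    """Does the model prefer a p chance of x over receiving p*x for sure?"""
    g = GAMMA if x >= 0 else DELTA
    return weight(p, g) * value(x) > value(p * x)

# The four cells of the pattern:
print(prefers_gamble(0.01, 100))   # low-probability gain  -> risk-seeking (True)
print(prefers_gamble(0.99, 100))   # high-probability gain -> risk-averse (False)
print(prefers_gamble(0.01, -100))  # low-probability loss  -> risk-averse (False)
print(prefers_gamble(0.99, -100))  # high-probability loss -> risk-seeking (True)
```

The same arithmetic explains everyday behavior the summary alludes to: the first cell is why people buy lottery tickets, and the third is why they buy insurance.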

In just this brief summary, there is some obvious guidance for change agents:

  • Reduce people’s estimate of their potential loss. For example, don’t say the new system will cost 25% more than the old one; say it will cost just an extra nickel each time it is used.
  • Increase the perceived value of the change and/or the perceived likelihood of success. Positive vivid images help to overcome low estimates of the chances of success; negative vivid images magnify the perceived probability of loss.
  • Help people redefine the perception of loss by shifting their frame of reference, which determines their starting point.
  • Reduce the overall size of the risks, which means it is best to introduce small innovations, piled on each other. Behavioral scientists have also observed that the irrational fear of loss versus the possibility of benefit is reduced when a person has had experience with the trade-off. A series of small innovations will help people gain that experience, and you will also find out which of your great ideas really are good. Since any innovation is an experiment, there’s no guarantee of success. Some will fail, but if the ideas are good and competent people are implementing the changes, you’ll succeed sufficiently more often than you fail that the overall impact is positive.
  • Work to convince people that their certainty of loss is only a possibility. People react differently to being told something is a sure thing than to being told it has a 90% probability.
  • Since risk-taking is not avoided when all choices are bad, show that the certain loss from changing is smaller than the bigger possible loss from not changing.

I’ve just touched the surface here. There are other findings of behavioral and social science research that can also enable change agents to get a firmer grasp on the reality of the situation facing them and suggest things they might do to become more successful.

© 2016 Norman Jacknis, All Rights Reserved