Are You Looking At The Wrong Part Of The Problem?

In business, we are frequently told that to build a successful company we have to find an answer to the customer’s problem. In government, the equivalent guidance to public officials is to solve the problems faced by constituents. This is good guidance, as far as it goes, except that we need to know what the problem really is before we can solve it.

Before those of us who are results-oriented problem solvers jump into action, we need to make sure that we are looking at the right part of the problem. And that’s what Dan Heath’s new book, “Upstream: The Quest To Solve Problems Before They Happen,” is all about.

Heath, along with his brother Chip, has brought us such useful books as “Made To Stick: Why Some Ideas Survive and Others Die” and “Switch: How to Change Things When Change Is Hard”.

As usual for a Heath book, it is well written and down to earth, but contains important concepts and research underneath the accessible writing.

He starts with a horrendous, if memorable, story about kids:

You and a friend are having a picnic by the side of a river. Suddenly you hear a shout from the direction of the water — a child is drowning. Without thinking, you both dive in, grab the child, and swim to shore. Before you can recover, you hear another child cry for help. You and your friend jump back in the river to rescue her as well. Then another struggling child drifts into sight…and another…and another. The two of you can barely keep up. Suddenly, you see your friend wading out of the water, seeming to leave you alone. “Where are you going?” you demand. Your friend answers, “I’m going upstream to tackle the guy who’s throwing all these kids in the water.”

Going upstream is necessary to solve the problem at its origin — hence the name of the book. The examples in the book range from important public, governmental problems to the problems of mid-sized businesses. While the most dramatic examples are about saving lives, the book is also useful for the less dramatic situations in business.

Heath’s theme is strongly, but politely, stated:

“So often we find ourselves reacting to problems, putting out fires, dealing with emergencies. We should shift our attention to preventing them.”

This reminds me of a less delicate version of the same advice: “When you’re up to your waist in alligators, it’s hard to find time to drain the swamp.” And I often told my staff that unless you take some time to start draining the swamp, you are always going to be up to your waist in alligators.

He elaborates and then asks a big question:

We put out fires. We deal with emergencies. We stay downstream, handling one problem after another, but we never make our way upstream to fix the systems that caused the problems. Firefighters extinguish flames in burning buildings, doctors treat patients with chronic illnesses, and call-center reps address customer complaints. But many fires, chronic illnesses, and customer complaints are preventable. So why do our efforts skew so heavily toward reaction rather than prevention?

His answer is that, in part, organizations have been designed to react — what I called some time ago the “inbox-outbox” view of a job. Get a problem, solve it, and then move to the next problem in the inbox.

Heath identifies three causes that lead people to focus downstream, not upstream where the real problem is.

  • Problem Blindness — “I don’t see the problem.”
  • A Lack of Ownership — “The problem isn’t mine to fix.”
  • Tunneling — “I can’t deal with the problem right now.”

In turn, these three primary causes lead to and are reinforced by a fatalistic attitude that bad things will happen and there is nothing you can do about them.

Ironically, success in fixing a problem downstream is often a mark of heroic achievement. Perhaps for that reason, people will jump in to own the emergency downstream, but there are fewer owners of the problem upstream.

…reactive efforts succeed when problems happen and they’re fixed. Preventive efforts succeed when nothing happens. Those who prevent problems get less recognition than those who “save the day” when the problem explodes in everyone’s faces.

Consider the all-too-common retrospective view of the Y2K problem. Since it didn’t turn out to be the disaster it could have been at the turn of the year 2000, some people have decided it wasn’t real after all. It was real, but the issue was dealt with upstream through the massive correction and replacement of out-of-date software.

Heath realizes that it is not simple for a leader with an upstream orientation to solve the problem there, rather than wait for the disaster downstream.

He asks leaders to first think about seven questions, which he explores through many cases:

  • How will you get early warning of the problem?
  • How will you unite the right people to assess and solve the problem?
  • Where can you find a point of leverage?
  • Who will pay for what does not happen?
  • How will you change the system?
  • How will you know you’re succeeding?
  • How will you avoid doing harm?

Some of these questions, and an understanding of what the upstream problem really is, can start to be answered by the intelligent use of analytics. That too complicates the issue for leaders, since an instinctive heroic reaction is much sexier than contemplating machine learning models, and sexy usually beats out wisdom 🙂
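
Take the first question, about early warning, as an example. Even very simple analytics can flag a problem while it is still upstream. Here is a minimal, hypothetical sketch (the weekly counts, the threshold rule and the variable names are all my own assumptions, not anything from Heath’s book):

```python
# A hedged, minimal sketch: flag unusually high weekly complaint counts
# before they turn into a downstream emergency. The numbers are made up.
import numpy as np

weekly_complaints = np.array([12, 14, 11, 13, 15, 12, 14, 13, 22, 31])  # hypothetical data

baseline = weekly_complaints[:8]                  # an assumed "normal" period
threshold = baseline.mean() + 2 * baseline.std()  # simple early-warning rule

for week, count in enumerate(weekly_complaints, start=1):
    if count > threshold:
        print(f"Week {week}: {count} complaints exceeds the early-warning threshold ({threshold:.1f})")
```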

Eventually Heath makes the argument that not only do we often focus on the wrong end of the problem, but that we think about the problem too simplistically. At that point in his argument, he introduces the necessity of systems thinking because, especially upstream, you may find a set of interrelated factors and not a simple one-way stream.

[To be continued in the next post.]

© 2020 Norman Jacknis, All Rights Reserved

Are Computers Learning On Their Own?

To many people, the current applications of artificial intelligence, like your car being able to detect where a lane ends, seem magical. Although most of the significant advances in AI have been in supervised learning, it is the idea that the computer is making sense of the world on its own — unsupervised — which intrigues people more.

If you have read a bit about artificial intelligence, you have probably seen the distinction between supervised and unsupervised machine learning. (There are other categories too, but these are the big two.)

In supervised learning, the machine is taught by humans what is right or wrong — for example, who did or did not default on a loan — and it eventually figures out what characteristics would best predict a default.

Another example is asking the computer to identify whether a picture shows a dog or a cat. In supervised learning, a person labels each picture and then the computer figures out the best way to distinguish between the two — perhaps whether the animal has floppy ears 😉
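
To make that concrete, here is a minimal sketch of supervised learning using scikit-learn and synthetic data; it illustrates the general idea, not any particular production system. The humans supply the labels (defaulted or not), and the algorithm learns which characteristics predict them:

```python
# Minimal supervised-learning sketch: humans supply the labels (defaulted or not),
# the algorithm learns which characteristics predict them. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend these 1,000 rows are past borrowers with a few numeric characteristics
# (e.g., income, debt ratio) and a human-supplied label: 1 = defaulted, 0 = repaid.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on borrowers the model has never seen:", model.score(X_test, y_test))
```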

Even though the machine gets quite good at doing this correctly, the underlying model (the characteristics that predict these results) is often opaque. Indeed, one of the hot issues in analytics and machine learning these days is how humans can uncover and almost “reverse engineer” the model the machine is using.
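
One common way analysts probe an opaque model is to measure how much each input actually matters to its predictions. Here is a hedged sketch of one such technique, permutation importance, again on synthetic data rather than any real system:

```python
# Sketch of probing an opaque model: shuffle one feature at a time and see
# how much the model's accuracy drops (permutation importance).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
forest = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"Feature {i}: importance ~ {importance:.3f}")
```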

In unsupervised learning, the computer has to figure out for itself how to divide a group of pictures or events or whatever into various categories. Then the next step is for the human to figure out what those categories mean. Since the categories are subject to interpretation, there is no truly accurate and useful way to describe them, although people try. That’s how we get psychographic categories in marketing or equivalent labels, like “soccer moms”.

Sometimes the results are easy for humans to figure out, but not exactly earth-shattering, like in this cartoon.

https://twitter.com/athena_schools/status/1063013435779223553

In the case of a computer that is given a set of pictures of cats and dogs and asked to determine the distinguishing characteristics, we (the people) would hope that the computer figures out that there are dogs and cats. But it might instead classify them based on size — small animals and big animals — or based on the colors of the animals.
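
Here is a minimal sketch of that unsupervised situation, using k-means clustering on a handful of made-up animal measurements. No labels are given; the algorithm just groups the rows, and it is up to a person to decide afterward whether the groups mean “cats and dogs”, “small and large”, or something else:

```python
# Unsupervised sketch: no labels are provided. K-means simply groups the rows,
# and a person has to interpret what the groups mean afterward.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical measurements per animal photo: [body length (cm), ear floppiness 0-1]
animals = np.array([
    [30, 0.10], [35, 0.20], [32, 0.15],   # small, pointy-eared (cats?)
    [80, 0.90], [95, 0.80], [70, 0.85],   # large, floppy-eared (dogs?)
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(animals)
print(labels)  # e.g., [0 0 0 1 1 1] -- but it is up to us to call these "cats" and "dogs"
```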

This all sounds like it is unsupervised. Anything useful that the computer determines is thus part of the magic.

How Unsupervised Is Unsupervised Machine Learning?

Except, in some of the techniques of unsupervised learning, especially in cluster analysis, a person is asked to determine how many clusters or groups there might be. This too limits and supervises the learning by the machine. (Think about how much easier it is to be right in Who Wants To Be A Millionaire if the contestant can narrow down the choices to two.)
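
Even the “automatic” ways of choosing the number of clusters still depend on a person to frame the choice, if only by deciding which candidate values to try at all. A small sketch, using silhouette scores on synthetic data:

```python
# Even "automatic" selection of the number of clusters is framed by a human:
# someone decides which candidate values of k to try at all.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

for k in range(2, 6):  # the candidate range itself is a human choice
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette score = {silhouette_score(X, labels):.2f}")
```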

Even more important, the computer can only learn from the data that it is given. It would have problems if pictures of a bunch of elephants or firetrucks were later thrown into the mix. Thus, the human being is at least partially supervising the learning and certainly limiting it.  The machine’s model is subject to the limitations and biases of the data that it learned on.

Truly unsupervised learning would occur the way that it does for children. They are let out to observe the world and learn patterns, often without any direct assistance from anyone else. Even with over-scheduling by helicopter parents, children can often freely roam the earth and discover new data and experiences.

Similarly, to have true unsupervised learning of machines, they would have to be able to travel and process the data they see.

At the beginning of his book Life 3.0, Max Tegmark weaves a sci-fi tale about a team that built an AI called Prometheus. While it wasn’t directly focused on unsupervised classification, Prometheus was unsupervised and learned on its own. It eventually learned enough to dominate all mankind. But even in this fantasy world, its unsupervised escape only enabled the AI machine to roam the internet, which is not quite the same thing as real life after all.

It is likely, for a while longer, that a significant portion of human behavior will occur outside of the internet 🙂

(And, as we saw with Microsoft’s chatbot Tay, an AI can also learn some unfortunate and incorrect things on the open internet.)

While not quite letting robots roam free in the real world, researchers at Stanford University’s Vision and Learning Lab “have developed iGibson, a realistic, large-scale, and interactive virtual environment within which a robot model can explore, navigate, and perform tasks.” (More about this at A Simulated Playground for Robots)

https://time.com/3983475/hitchbot-assault-video-footage/

There was HitchBOT a few years ago, which traveled around the US, although I don’t think that it added to its knowledge along the way, and it eventually met up with some nasty humans. (For more, see the link above.)

Perhaps self-driving cars or walking robots will eventually be able to see the world freely as we do. Ford Motor Company’s proposed delivery robot roams around, but it is not really equipped for learning. The traveling, learning machine will likely require a lot more computing power and time than we currently use in machine learning.

Of course, there is also work on the computing part of the problem, as this July 21st headline shows, “Machines Can Learn Unsupervised ‘At Speed Of Light’ After AI Breakthrough, Scientists Say.” But that’s only the computing part of the problem and not the roaming-around-the-world part.

These more recent projects are evidence that AI researchers realize their models are not being built in a truly unsupervised way. Despite the hoped-for progress of these projects, for now data scientists need to be careful about how they train and supervise a machine, even in unsupervised learning mode.

© 2020 Norman Jacknis, All Rights Reserved

Words Matter In Building Intelligent Communities

The Intelligent Community Forum (ICF) is an international group of city, town and regional leaders, as well as scholars and other experts, who are focused on the quality of life of residents and on responding intelligently to the challenges and opportunities of a world and an economy increasingly based on broadband and technology.

To quote from their website: “The Intelligent Community Forum is a global network of cities and regions with a think tank at its center.  Its mission is to help communities in the digital age find a new path to economic development and community growth – one that creates inclusive prosperity, tackles social challenges, and enriches quality of life.”

Since 1999, ICF has held an annual awards contest in which intelligent communities go through an extensive investigation and comparison to see how well they are achieving these goals. Of the hundreds of applications, some are selected for an initial, more in-depth assessment and become semi-finalists in a group called the Smart21.

Then the Smart21 are culled to a smaller list of the Top7 most intelligent communities in the world each year.  There are rigorous quantitative evaluations conducted by an outside consultancy, field trips, a review by an independent panel of leading experts/academic researchers and a vote by a larger group of experts.

An especially important part of the selection of the Top7 from the Smart21 is an independent panel’s assessment of the projects and initiatives that justify a community’s claim to being intelligent.

It may not always be clear to communities what separates these seven most intelligent communities from the rest.  After all, these descriptions are just words.  We understand that words matter in political campaigns.  But words matter outside of politics in initiatives, big and small, that are part of governing.

Could the words that leaders use be part of what separates successful intelligent initiatives from those of others who are less successful in building intelligent communities?

In an attempt to answer that question, I obtained and analyzed the applications submitted over the last ten years.  Then, using the methods of analytics and machine learning that I teach at Columbia University, I sought to determine if there was a difference in how the leaders of the Top7 described what they were doing in comparison with those who did not make the cut.

Although the descriptions seem somewhat similar at a superficial level, it turns out that the leaders of more successful intelligent community initiatives did, indeed, describe those initiatives differently from the leaders of less successful initiatives.

The first significant difference was that the descriptions of the Top7 had more to say about their initiatives, since apparently they had more accomplishments to discuss.  Their descriptions had less talk about future plans and more about past successes.

In describing the results of their initiatives so far, they used numbers more often, providing greater evidence of those results.  Even though they were discussing technology-based or otherwise sometimes complex projects, they used more informal, less dense and less bureaucratic language.
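
To illustrate the kind of stylistic features such a textual analysis can draw on, here is a small sketch; the marker word list and the two example sentences are invented for illustration and are not taken from the actual ICF applications:

```python
# Sketch of simple stylistic features: how often a description uses numbers,
# future-oriented words, and long sentences. The example texts are invented.
import re

FUTURE_WORDS = {"will", "plan", "plans", "intend", "propose"}  # assumed marker list

def describe_style(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "numbers_per_100_words": 100 * len(re.findall(r"\d+", text)) / max(len(words), 1),
        "future_words_per_100_words": 100 * sum(w in FUTURE_WORDS for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

print(describe_style("We plan to deploy broadband and will engage residents."))
print(describe_style("Since 2015, 82% of households gained gigabit access. Participation tripled."))
```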

Among the topics they emphasized, engagement and leadership, along with the technology infrastructure, stood out most. Less important, but still a differentiator, the more successful leaders emphasized the smart city, innovation and economic growth benefits.

For those leaders who wish to know what will gain them recognition for real successes in transforming their jurisdictions into intelligent communities, the results would indicate these simple rules:

  • Have and highlight a solid technology infrastructure.
  • True success, however, comes from extensive civic engagement, and from frequently mentioning that engagement and the role of civic leadership in moving the community forward.
  • Less bureaucratic formality and more stress on results (quantitative measures of outcomes) in public statements are also associated with greater success in these initiatives.

On the other hand, a laundry list of projects that are not tied to civic engagement and necessary technology, particularly if those projects have no real track record, is not the path to outstanding success – even if they check off the six wide-ranging factors that the ICF expects of intelligent communities.

While words do matter, it is also true that other factors can impact the success or failure of major public initiatives.  However, these too can be added into the models of success or failure, along with the results of the textual analytics.

Overall, the results of this analysis can help public officials understand a little better how they need to think about what they are doing and then properly describe it to their citizens and others outside their community. This will help them be more successful, most importantly for their communities and, if they wish, in the ICF awards process as well.

© 2020 Norman Jacknis, All Rights Reserved

Trump And Cuomo COVID-19 Press Conferences

Like many other people who have been watching the COVID-19 press conferences held by Trump and Cuomo, I came away with a very different feeling from each. Beyond the obvious policy and partisan differences, I felt there was something more going on.

Coincidentally, I’ve been doing some research on text analytics/natural language processing on a different topic.  So, I decided to use these same research tools on the transcripts of their press conferences from April 9 through April 16, 2020.  (Thank you to the folks at Rev.com for making available these transcripts.)

One of the best approaches is known by its initials, LIWC (Linguistic Inquiry and Word Count), and was created some time ago by Pennebaker and colleagues to assess, in particular, the psycho-social dimensions of texts. It’s worth noting that this assessment is based purely on the text – their words – and doesn’t include non-verbal communication, like body language.
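
For readers unfamiliar with how a dictionary-based tool like LIWC works, here is a toy sketch of the general idea: count the share of words that fall into predefined psychological categories. The tiny word lists below are invented for illustration and are not the actual LIWC dictionaries:

```python
# Toy illustration of a LIWC-style analysis: the share of words falling into
# predefined categories. These tiny word lists are NOT the real LIWC dictionaries.
import re

CATEGORIES = {
    "positive_emotion": {"nice", "great", "good", "tremendous"},
    "health":           {"health", "hospital", "virus", "patients"},
    "family":           {"family", "home", "mother", "father"},
}

def category_shares(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    return {name: 100 * sum(w in vocab for w in words) / max(len(words), 1)
            for name, vocab in CATEGORIES.items()}

print(category_shares("It has been a great effort, a tremendous effort, really nice."))
print(category_shares("Stay home to protect your family and our hospital patients."))
```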

While there were some unsurprising results to people familiar with both Trump and Cuomo, there are also some interesting nuances in the words they used.

Here are the most significant contrasts:

  • The most dramatic distinction between the two had to do with emotional tone. Trump’s words had almost twice the emotional content of Cuomo’s, including words like “nice”, although perhaps the use of that word should not be taken at face value.
  • Trump also spoke of rewards/benefits and money about 50% more often than Cuomo.
  • Trump emphasized allies and friends about twenty percent more often than Cuomo.
  • Cuomo used words that evoked health, anxiety/pain, home and family two to three times more often than Trump.
  • Cuomo asked more than twice as many questions, although some of these could be sort of rhetorical – like “what do you think?”
  • However, Trump was 50% more tentative in his declarations than Cuomo, whereas Cuomo had greater expressions of certainty than Trump.
  • While both men spoke in the present tense much more than the future, Cuomo’s use of the present was greater than Trump’s. On the other hand, Trump’s use of the future tense and the past tense was greater than Cuomo’s.
  • Trump used “we” a little more often than Cuomo and much more than he used “you”. Cuomo used “you” between two and three times more often than Trump. Trump’s use of “they” even surpassed his use of “you”.

Distinctions of this kind are never crystal clear, even with sophisticated text analytics and machine learning algorithms.  The ambiguity of human speech is not just a problem for machines, but also for people communicating with each other.

But these comparisons from text analytics do provide some semantic evidence for the comments by non-partisan observers that Cuomo seems more in command.  This may be because the features of his talks would seem to better fit the movie portrayal and the average American’s idea of leadership in a crisis – calm, compassionate, focused on the task at hand.

© 2020 Norman Jacknis, All Rights Reserved

Robots Just Want To Have Fun!

There are dozens of novels about dystopian robots – our future “overlords”, as they are portrayed.

In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.

Anyway, while we are only beginning to try to understand the implications of artificial intelligence and robotics, robots are developing rapidly and going beyond those traditional tasks.

Robots are also showing off their fun and increasingly creative side.

Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.

Dancing

Last August, there was a massive robot dance in Guangzhou, China. It achieved a Guinness World Record for the “most robots dancing simultaneously”. See https://www.youtube.com/watch?v=ouZb_Yb6HPg or http://money.cnn.com/video/technology/future/2017/08/22/dancing-robots-world-record-china.cnnmoney/index.html

Not to be outdone, at the Consumer Electronics Show in Las Vegas, a strip club had a demonstration of robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc

Music

Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.

The Flow Machines project in Europe has also been using AI to create music in the style of more famous composers. See, for instance, its DeepBach, “a deep learning tool for automatic generation of chorales in Bach’s style”. https://www.youtube.com/watch?time_continue=2&v=QiBM7-5hA6o

Singing

Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s

Also, last August, there was a song composed by AI, although sung by a human – https://www.youtube.com/watch?v=XUs6CznN8pw&feature=youtu.be

There is even AI that will generate poetry – um, song lyrics.

Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, a system for generating topical poetry on a requested subject, like this one called “Bipolar Disorder”:

Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.

Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.

Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.

An altered state of manic episodes,
A journey through the long and winding roads.

Not exactly upbeat, but you could well imagine this being a song too.

Finally, there is even the HRP-4C (Miim), which has been under development in Japan for years. Here’s her act –  https://www.youtube.com/watch?v=QCuh1pPMvM4#t=3m25s

All singing, all dancing, indeed!

© 2018 Norman Jacknis, All Rights Reserved

When Strategic Thinking Needs A Refresh

This year I created a new, week-long, all-day course at Columbia University on Strategy and Analytics. The course focuses on how to think about strategy both for the organization as a whole as well as the analytics team. It also shows the ways that analytics can help determine the best strategy and assess how well that strategy is succeeding.

In designing the course, it was apparent that much of the established literature in strategy is based on ideas developed decades ago. Michael Porter, for example, is still the source of much thinking and teaching about strategy and competition.

Perhaps a dollop of Christensen’s disruptive innovation might be added into the mix, although that idea is no longer new. Worse, the concept has become so popularly diluted that too often every change is mistakenly treated as disruptive.

Even the somewhat alternative perspective described in the book “Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant” is now more than ten years old.

Of the well-established business “gurus”, perhaps only Gary Hamel has adjusted his perspective in this century – see, for example, this presentation.

But the world has changed. Certainly, the growth of huge Internet-based companies has highlighted strategies that do not necessarily come out of the older ideas.

So, who are the new strategists worthy of inclusion in a graduate course in 2018?

The students were exposed to the work of fellow faculty at Columbia University, especially Leonard Sherman’s “If You’re in a Dogfight, Become a Cat! – Strategies for Long-Term Growth” and Rita Gunther McGrath’s “The End Of Competitive Advantage: How To Keep Your Strategy Moving As Fast As Your Business”.

But in this post, the emphasis is on strategic lessons drawn from this century’s business experience with the Internet, including multi-sided platforms and digital content traps. For that there is “Matchmakers: The New Economics of Multisided Platforms” by David S. Evans and Richard Schmalensee, and also Bharat Anand’s “The Content Trap: A Strategist’s Guide to Digital Change”.

For Porter and other earlier thinkers, the focus was mostly on the other players that they were competing against (or decided not to compete against). For Anand, the role of the customer and the network of customers becomes more central in determining strategy. For Evans and Schmalensee, getting a network of customers to succeed is not simple and requires a different kind of strategic framework than industrial competition.

Why emphasize these two books? It might seem that these books only focus on digital businesses, not the traditional manufacturers, retailers and service companies that previous strategists studied.

But many now argue that all businesses are digital, just to varying degrees. For the last few years we’ve seen the repeated headline that “every business is now a digital business” (or some minor variation) from Forbes, Accenture, and the Wharton School of the University of Pennsylvania, among others you may not have heard of. And about a year ago, we read that “Ford abruptly replaces CEO to target digital transformation”.

Consider then the case of GE, one of the USA’s great industrial giants, which offers a good illustration of the situation facing many companies. A couple of years ago, it expressed its desire to “Become a Digital Industrial Company”. Last week, Steve Lohr of the New York Times reported that “G.E. Makes a Sharp ‘Pivot’ on Digital” because of its difficulty making the transition to digital and especially making the transition a marketing success.

At least in part, the company’s lack of success could be blamed on its failure to fully embrace the intellectual shift from older strategic frameworks to the more digital 21st century strategy that thinkers like Anand, Evans and Schmalensee describe.

© 2018 Norman Jacknis, All Rights Reserved

Too Many Unhelpful Search Results

This is a brief follow-up to my last post about how librarians and artificial intelligence experts can get us all beyond mere curation and our frustrations using web search.

In their day-to-day Google searches many people end up frustrated. But they assume that the problem is their own lack of expertise in framing the search request.

In these days of advancing natural language algorithms, that isn’t a very good explanation for users or a good excuse for Google.

We all have our own favorite examples, but here’s mine because it directly speaks to lost opportunities to use the Internet as a tool of economic development.

Imagine an Internet marketing expert who has an appointment with a local chemical engineering firm to make a pitch for her services and help them grow their business. Wanting to be prepared, she goes to Google with a simple search request: “marketing for chemical engineering firms”. Pretty simple, right?

Here’s what she’ll get:

She’s unlikely to live long enough to read all 43,100,000+ hits, never mind reading them before her meeting. And, aside from an ad on the right from a possible competitor, there’s not much in the list of non-advertising links that will help her understand the marketing issues facing a potential client.

This is not how the sum of all human knowledge – i.e., the Internet – is supposed to work. But it’s all too common.

This is the reason why, in a knowledge economy, I place such a great emphasis on deep organization, accessibility and relevance of information.

© 2017 Norman Jacknis, All Rights Reserved

Gold Mining

[Published 6/18/2011 and originally posted for government leaders, July 6, 2009]

My last posting was about the “goldmine” that exists in the information your government collects every day. It’s a goldmine because this data can be analyzed to determine how to save money by learning what policies and programs work best. Some governments have the internal skills to do this kind of sophisticated analysis or they can contract for those skills. But no government – not even the US Federal government – has the resources to analyze all the data they have.

What can you do about that? Maybe there’s an answer in a story about real gold mining from the authors of the book “Wikinomics”[1]:

A few years back, Toronto-based gold mining company Goldcorp was in trouble. Besieged by strikes, lingering debts, and an exceedingly high cost of production, the company had terminated mining operations…. [M]ost analysts assumed that the company’s fifty-year old mine in Red Lake, Ontario, was dying. Without evidence of substantial new gold deposits, Goldcorp was likely to fold. Chief Executive Officer Rob McEwen needed a miracle.

Frustrated that his in-house geologists couldn’t reliably estimate the value and location of the gold on his property … [he] published his geological data on the Web for all to see and challenged the world to do the prospecting. The “Goldcorp Challenge” made a total of $575,000 in prize money available to participants who submitted the best methods and estimates. Every scrap of information (some 400 megabytes worth) about the 55,000 acre property was revealed on Goldcorp’s Web site.

News of the contest spread quickly around the Internet and more than 1,000 virtual prospectors from 50 countries got busy crunching the data. Within weeks, submissions from around the world were flooding into Goldcorp headquarters. There were entries from graduate students, management consultants, mathematicians, military officers, and a virtual army of geologists. “We had applied math, advanced physics, intelligent systems, computer graphics, and organic solutions to inorganic problems. There were capabilities I had never seen before in the industry,” says McEwen. “When I saw the computer graphics, I almost fell out of my chair.”

The contestants identified 110 targets on the Red Lake property, more than 80% of which yielded substantial quantities of gold. In fact, since the challenge was initiated, an astounding 8 million ounces of gold have been found – worth well over $3 billion. Not a bad return on a half million dollar investment.

You probably won’t be able to offer a prize to analysts, although you might offer to share some of the savings that result from doing things better. But, unlike a private corporation, maybe you don’t have to offer a prize, since the public has an interest in seeing its government work better. And there are many examples on the Internet of people willing to help out without any obvious monetary reward.

Certainly not everyone, but enough people might be interested in the data to take a shot at making sense of it – students or even college professors looking for research projects, retired statisticians, the kinds of folks who live to analyze baseball statistics, and anyone who might find this a challenge.

The Obama administration and its new IT leaders have made a big deal about putting its data on the Web. There are dozens of data sets on the Federal site data.gov[2], obviously taking care to deal with issues of individual privacy and national security. Although their primary interest is in transparency of government, now that the data is there, we’ll start to see what people out there learn from all that information. Alabama[3] and the District of Columbia, among others, have started to do the same thing.

You can benefit a lot more, if you too make your government’s data available on the web for analysis. Then your data, perhaps combined with the Federal data and other sources on the web, can provide you with an even better picture of how to improve your government – better than just using your own data alone.
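
To make that concrete, here is a minimal, hypothetical sketch of combining a local dataset with federal data using pandas. The file names and column names are placeholders, not real data sources; the point is only that, once both datasets are on the web, joining them takes just a few lines:

```python
# Hypothetical sketch: combine a local dataset with federal data by county.
# The file names and column names are placeholders, not real data sources.
import pandas as pd

local = pd.read_csv("our_county_program_outcomes.csv")   # e.g., county, program_cost, outcome_score
federal = pd.read_csv("federal_county_statistics.csv")   # e.g., county, unemployment_rate

combined = local.merge(federal, on="county", how="inner")
print(combined[["program_cost", "outcome_score", "unemployment_rate"]].corr())
```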

  1. “Innovation in the Age of Mass Collaboration”, Business Week, Feb. 1, 2007 http://www.businessweek.com/innovate/content/feb2007/id20070201_774736.htm
  2. “Data.gov open for business”, Government Computer News, May 21, 2009, http://gcn.com/articles/2009/05/21/federal-data-website-goes-live.aspx
  3. “Alabama at your fingertips”, Government Computer News, April 20, 2009, http://gcn.com/articles/2009/04/20/arms-provides-data-maps-to-agencies.aspx

© 2011 Norman Jacknis