In the Industrial Age, the fight between labor and the owners of industry (“capital”) was the overarching political issue. As we move away from an industrial economy to one based on knowledge, that debate is likely to diminish.
Instead, one of the big battles to be fought in this century will be over intellectual property — who controls it, who owns it, who gets paid for it and how much, and whether ideas can properly be considered property in the same way we consider land to be property.
I’ve written about this before, but a recent NY Times story about the settlement of a suit over Star Trek brought this to mind, especially as I came across an interesting series of posts that provide some new perspectives.
These were written at the end of last year and the beginning of this year by the former chair of the Australian Film Critics Association, Rich Haridy.
His aim was to “examine how 21st century digital technology has given artists a set of tools that have dismantled traditional definitions of originality and is challenging the notions of copyright that came to dominate much of the 20th century.”
Here’s a quick, broad-brush summary of his argument for a more modern and fairer copyright system:
Not just in today’s digital world of remixes, but going back to Shakespeare and Bach and even before that, creative works have always been derivative of previous works. They clearly have originality, but no work is even close to being 100% original.
The tightening of copyright laws has undermined the original goal of copyrights — to encourage creativity and the spread of knowledge.
This reflects the failure of policy makers and the courts to understand the nature of creativity. This is getting worse in our digital world.
While creators and distributors deserve compensation for their works, this shouldn’t be used as a reason to punish other artists who build on and transform those works.
The enforcement is unequal. While bloggers and artists with limited financial means are easy targets for IP lawyers, the current system “while [theoretically] allowing for fair use, still privileges the rich and powerful, be they distributors or artists.”
It’s worth reading the series to understand his argument, which makes a lot of sense.
Haridy is not proposing the destruction of copyrights. But if arguments like his are not heeded, don’t be surprised if more radical stances are taken by others — just as happened in the past in the conflict between labor and capital.
Even if we understand that what seems like resistance to change is more nuanced and complicated, many of us are directly or implicitly being asked to lead changes in our places of work. In that sense, we are “change agents”, to use a well-established phrase.
Consider the number of times each day, both on the job and outside, that we hear the word “change” and the necessity for leaders to help their organizations change in the face of all sorts of challenges.
There has been a slew of popular business books providing guidance to would-be change agents. Several consultants and business gurus have developed their own model of the change process, usually outlining some necessary progression and steps that they have observed will lead to success.
Curiously, the same few anecdotes seem to pop up in a number of these, like burning platforms or the boardroom display of gloves.
While these authors mean well and have tried to be good reporters of what they have observed, change agents often find that, in practice, the suggestions in these books and articles are at best a starting point and don’t quite match the situation they face.
Part of the problem is that there has been too little rigorous behavioral work about how and why people change. (In fairness, some authors, like the Heath brothers, at least try to apply behavioral concepts in their recommendations on how to lead change.)
And on a practical level, many change agents find it difficult to figure out the tactics they need to use to improve the chances that the desired change will occur. In this post, I’m suggesting that we first need to understand the unique and sometimes unexpected ways that the human brain processes information and thus how we need to communicate.
(These are often called cognitive biases, but that is a pejorative phrase that might put you in the wrong mindset. It’s not a good idea to start an effort to convince people to join you in changing an organization by assuming that they are somehow irrational.)
As just one example, some of the most interesting work along these lines was done by the Nobel Prize-winning psychologist Daniel Kahneman and his colleague Amos Tversky.
They found in their research that people exaggerate potential losses beyond reality – often incorrectly guessing that what they control (like driving a car) is less risky than what they don’t control (being a passenger in an airplane).
Moreover, a person’s sense of loss is greater if what might be lost has been owned or used for a long time (aka entitlements). Regret and other emotions can also enhance this sense of loss.
The estimate of losses and gains is also affected by a person’s reference point, which can be shifted even by random effects. The classic example of the impact of a reference point is how people react differently to being told either that they have a 10% chance of dying or a 90% chance of living through a major disease. The probabilities are the same, of course.
In general, they found that there is an aversion to losses which outweighs possible gains, even if the gains might be worth more.
This makes it sound like change is very difficult, since many people often perceive proposed changes as having big risks.
But there is more to the story. Kahneman found that there is no across-the-board aversion to change or even merely to risk. Indeed, people might make a riskier choice when all the options are bad.
“When faced with a risky prospect, people will be: (1) risk-seeking over low-probability gains, (2) risk-averse over high-probability gains, (3) risk-averse over low-probability losses, and (4) risk-seeking over high-probability losses.”
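That fourfold pattern can be made concrete with a small sketch of Kahneman and Tversky’s prospect theory. To be clear, this is only an illustration and not part of the research summarized above: the value and weighting functions and the parameter values are the standard estimates from their 1992 cumulative prospect theory paper.

```python
# A sketch of the "fourfold pattern" using cumulative prospect theory.
# Parameter values are Tversky & Kahneman's 1992 estimates, used here
# purely for illustration.
ALPHA = 0.88          # diminishing sensitivity to outcome size
LAMBDA = 2.25         # losses loom ~2.25x larger than gains
GAMMA_GAIN = 0.61     # probability-weighting curvature for gains
GAMMA_LOSS = 0.69     # probability-weighting curvature for losses

def value(x):
    """Subjective value of an outcome (gain if positive, loss if negative)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, gamma):
    """Decision weight: overweights small probabilities, underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prefers_gamble(p, x):
    """Is a p chance of outcome x preferred to getting its expected value p*x for sure?"""
    gamma = GAMMA_GAIN if x >= 0 else GAMMA_LOSS
    return weight(p, gamma) * value(x) > value(p * x)

# The four cells of the pattern:
print(prefers_gamble(0.05, 100))   # (1) risk-seeking over low-probability gains: True
print(prefers_gamble(0.95, 100))   # (2) risk-averse over high-probability gains: False
print(prefers_gamble(0.05, -100))  # (3) risk-averse over low-probability losses: False
print(prefers_gamble(0.95, -100))  # (4) risk-seeking over high-probability losses: True
```

Each cell falls out of just two ingredients: small probabilities are overweighted while large ones are underweighted, and losses loom larger than gains.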
In just this brief summary, there is some obvious guidance for change agents:
Reduce people’s estimate of their potential loss. For example, the new system won’t cost 25% more than the old one, but it will just be an extra nickel each time it is used.
Increase the perceived value of the change and/or the perceived likelihood of success – positive vivid images help to overcome lower probability estimates of the chances of success; negative vivid images help to magnify the probability of loss.
Help people redefine the perception of loss by shifting their frame of reference, which determines their starting point.
Reduce the overall size of the risks, which means it is best to introduce small innovations, piled on each other. Behavioral scientists have also observed that the irrational fear of loss versus the possibility of benefit is reduced when a person has had experience with the trade-off. A series of small innovations will help people gain that experience, and you will also find out which of your great ideas really are good. Since any innovation is an experiment, there’s no guarantee of success. Some will fail, but if the ideas are good and competent people are implementing the changes, you’ll succeed sufficiently more often than you fail that the overall impact is positive.
Work to convince people that their certainty of loss is only a possibility. People react differently to being told something is a sure thing than to being told it has a 90% probability.
Since risk taking is not avoided when all the choices are bad, show that the obvious loss from changing is less than the bigger possible loss from not changing.
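The first of these tips is simple arithmetic reframing. As a sketch with purely hypothetical numbers (the costs and usage count below are made up for illustration), the same fact can be presented as a big percentage or as a small per-use amount:

```python
# Hypothetical numbers: the same cost increase, framed two ways.
old_annual_cost = 20_000.00   # dollars per year for the old system (made up)
new_annual_cost = 25_000.00   # dollars per year for the new system (made up)
uses_per_year = 100_000       # how often the system is used (made up)

# Frame 1: a percentage increase -- sounds like a big loss.
pct_increase = 100 * (new_annual_cost - old_annual_cost) / old_annual_cost
print(f"{pct_increase:.0f}% more than the old system")        # 25% more

# Frame 2: the cost per use -- the same fact, a nickel at a time.
extra_per_use = (new_annual_cost - old_annual_cost) / uses_per_year
print(f"an extra {100 * extra_per_use:.0f} cents per use")    # 5 cents
```

Nothing about the underlying economics changes between the two frames; only the listener’s reference point does.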
I’ve just touched the surface here. There are other findings of behavioral and social science research that can also enable change agents to get a firmer grasp on the reality of the situation facing them and suggest things they might do to become more successful.
As I’ve been going through articles and books for the course on Analytics and Leading Change that I’ll be teaching soon at Columbia University, I frequently read how leaders and other change agents need to overcome resistance to change. Whenever we aim to get things done and they don’t happen immediately, this is often the first explanation for the difficulty.
Resistance to change is a frequent complaint of anyone introducing a new technology or especially something as fundamental as the use of analytics in an organization.
The conflict that it implies can be compelling. You could make a best seller or a popular movie out of that conflict, like “Moneyball”, that great story about baseball, analytics and change.
This is an idea that goes very far back. Even Machiavelli, describing Renaissance politics, is often quoted on the subject:
“There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries … and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it.”
It’s all awful if you’re the one trying to introduce the change and many have written about the problems they saw.
But is that word “resistance” misleading change agents? Going beyond the perspectives and anecdotes of change agents and business consultants, there has been over the last two decades some solid academic research on this subject. And, as often happens when we learn more, there have been some important subtleties lost in that phrase “resistance to change”.
“People do not resist change, per se. People may resist loss of status, loss of pay, or loss of comfort, but these are not the same as resisting change … Employees may resist the unknown, being dictated to, or management ideas that do not seem feasible from the employees’ standpoint. However, in our research, we have found few or no instances of employees resisting change … The belief that people do resist change causes all kinds of unproductive actions within organizations.”
Is what looks like resistance something more or something else?
More recently, University of Montreal Professor Céline Bareil wrote about the “Two Paradigms about Resistance to Change” in which she compared “the enemy of change” (traditional paradigm) to “a resource” (modern paradigm). She noted that:
“Instead of being interpreted as a threat, and the enemy of change, resistance to change can also be considered as a resource, and even a type of commitment on the part of change recipients.”
Making this shift in perspective is likely harder for change agents than the changes they expect of others. The three authors of “Resistance to Change: The Rest of the Story” describe the various ways that change agents themselves have biased perceptions. They say that blaming difficulties on resistance to change may be a self-serving and “potentially self-fulfilling label, given by change agents attempting to make sense of change recipients’ reactions to change initiatives, rather than a literal description of an objective reality.”
Indeed, they observe that the actions of change agents may not be merely unsuccessful, but counter-productive.
“Change agents may contribute to the occurrence of the very reactions they label as resistance through their own actions and inactions, such as communications breakdowns, the breach of agreements and failure to restore trust” as well as not listening to what is being said and learning from it.
There is, of course, a lot more to this story, which you can start to get into by looking at some of the links in this post. But hopefully this post has offered enough to encourage those of us who are leading change to take a step back, look at the situation differently and thus be able to succeed.
Last month, I wrote about Head Tech – technology that can be worn on the head and used to control the world about us. Most of those products act as an interface between our brain waves and devices that are managed by computers that are reading our brain waves.
The other related area of Head Tech recognizes the major role of our eyes literally as windows to the world we inhabit.
Google may have officially sidelined its Glass product, but its uses and products like it continue to be developed by a number of companies wanting to demonstrate the potential of the idea in a better way than Google did. There are dozens of examples, but, to start, consider these three.
Carl Zeiss’s Smart Optics subsidiary accomplished the difficult technical task of embedding the display in what looks to everyone like a pair of regular curved eyeglasses. Oh, and they could even be glasses that provide vision correction. Zeiss is continuing to perfect the display while trying to figure out the business challenge of bringing this to market.
Also offering something that looks like regular glasses and is not a fashion no-no is LaForge Optical’s Shima, which has an embedded chip so it can display information from any app on your smartphone. It’s available for pre-order now, for shipment next year, but you can see what they’re offering in this video. A more popular video provides their take on the history of eyeglasses.
While Epson is not striving to devise something fashionable, it is making its augmented reality glasses much lighter. This video shows the new Moverio BT-300 which is scheduled to be released in a few months.
Epson is also tying these glasses to a variety of interesting, mostly non-consumer, applications. Last week at the Interdrone Conference, they announced a partnership with one of the leading drone companies, DJI, to better integrate the visuals coming from the unmanned aerial camera with the glasses.
DAQRI is bringing to market next month an updated version of its Smart Helmet for more dangerous industrial environments, like field engineering. Because it is so much more than glasses, they can add all sorts of features, like thermal imaging. It is a high end, specialized device, and has a price to match.
At a fraction of that price, Metavision has developed and will release “soon” its second-generation augmented reality headset, the Meta 2. Its CEO’s TED talk will give you a good sense of Metavision’s ambitions for this product.
Without a headset, Augmenta has added recognition of gestures to the capabilities of glasses from companies like Epson. For example, you can press on an imaginary dial pad, as this little video demonstrates.
This reminds me a bit of the use of eye tracking from Tobii that I’ve included in presentations for the last couple of years. While Tobii also sells a set of glasses, their emphasis is on tracking where your eyes focus to determine your choices.
One of the nice things about Tobii’s work is that it is not limited to glasses. For example, their EyeX works with laptops as can be seen in this video. This is a natural extension of a gamer’s world.
Which gets us to a good question: even if they’re less geeky looking than Google’s product, why do we need to wear glasses at all? Among other companies and researchers, Sony has an answer for that – smart, technology-embedded contact lenses. But Sony also wants the contact lens to enable you to take photos and videos without any other equipment, as they hope to do with a new patent.
So we have HeadTech and EyeTech (not to mention the much longer established EarTech) and who knows what’s next!
Some of my blog posts seem to be ahead of news reported elsewhere, which is ok with me, but also means that it might be helpful to list some interesting articles that continue past stories. Here are some:
My two-part series in March on the Coding Craze questioned the long-term value of the plan by many public officials to teach computer coding. While the general news media continue to talk and write about coding as an elixir for your career, WIRED Magazine recently ran a cover story titled “The End of Code”. See their web piece at http://www.wired.com/2016/05/the-end-of-code/
I’ve written several posts on one of my special interests – the related subjects of mixed reality, virtual reality, and blended physical and digital spaces. I noted sports as a natural for this, including highlighting the Trilite project last year. So it was great to read the announcement in the last few days that NBC and Samsung are collaborating to offer some of the Rio Olympics on Samsung VR gear.
We’re all inundated with talk about how “things are changing faster than ever before” in our 21st century world. Taking an unconventional view, in 2011, I asked “Telegraph vs. Internet: Which Had Greater Impact?” My argument was that the first half of the 19th century had much more dramatic changes, especially in speeding up communications. In what I think was the first attempt to question the fastest-ever-changes meme, I was later joined by the New York Times, which recently elaborated on this theme in an Upshot article titled “What Was the Greatest Era for Innovation? A Brief Guided Tour”.
In “Art and the Imitation Game”, from March 2015, I wrote about how artificial intelligence is stepping into creative activities, like writing and painting. While there have been many articles on this subject since, one of the most intriguing was from the newspaper in the city with more attorneys per capita than anywhere else, as the Washington Post invited us to “Meet ‘Ross,’ the newly hired legal robot”.
I wrote about the White House Rural Telehealth meeting in April this year. The New York Times later had a report on the rollout of telehealth to the tens of millions of customers of Anthem, under the American Well label.
My interest in this goes back several years, and in both that post and one on “The Decentralization Of Health Care” about a year and a half ago, I’ve touched on the difficulties posed by the fee-for-service health care system in the US and instead wondered if we would be better off paying health systems a yearly fee to keep us healthy – thus aligning our personal interests with those of the system. So it has been interesting to see in April that there was movement on this by the Centers for Medicare and Medicaid Services (CMS), the Federal government’s health insurance agency. Here are just some examples:
At the annual summit of the Intelligent Community Forum two weeks ago, there was a keynote panel consisting of the mayors of three of the most intelligent cities in the world:
Michael Coleman, Mayor of the City of Columbus, Ohio, from 2000 through 2015
Rob Van Gijzel, Mayor of Eindhoven, Netherlands, from 2008 to today
Paul Pisasale, Mayor of the City of Ipswich, Queensland, Australia, from 2004 to today
Both Eindhoven and Columbus have been selected as the most intelligent community in the world and Ipswich has been in the Top 7. Columbus also was just selected by the US Government as one of the winners of its Smart City challenge.
The panel’s topic was intriguing (at least to those of us who care about economic growth): “International Economic & Business Development — Secrets of international development at the city and region level”.
And the mayors did have interesting things to say about that topic. Mayor Coleman pointed out that 3,000 jobs are created for every billion dollars of global trade that Columbus has. He reminded the audience that making global connections for the benefit of the local economy is not a one-time thing, as it takes years to build relationships that will flourish into deep global economic growth.
That reminder of the long-term nature of creating economic growth was a signal of the real secrets they discussed — how to survive a long time in elected office and create a flourishing city.
Part of what distinguishes these mayors from others is not just their success at being elected because the voters thought they were doing a good job. An important part of their success is their willingness to focus on the long term. By contrast, those mayors and other local officials who are so worried about re-election instead focus just on short-term hits and, despite that, often end up being defeated.
This requires a certain personal and professional discipline not to become too easily distracted by daily events. For example, Mayor Coleman said he divided his time into thirds:
Handling the crisis of the day (yes, he did have to deal with that, just not all the time)
Keeping the city operations going smoothly
Developing and implementing a vision for the future
In another statement of the importance of a future orientation, Mayor Pisasale declared that “economic development is about jobs for your kids” — a driving motivation that’s quite different from the standard economic development projects that are mostly sites for ribbon cuttings and a photo in the newspaper.
He was serious about this statement even in his political strategy. His target groups for the future of the city are not the usual civic leaders. Rather, he reaches out to students (and taxi drivers) to be champions for his vision of the city’s future.
Mayor Van Gijzel pointed out that an orientation to the future means that you also have to be willing to accept some failures – something else that you don’t hear often from more risk-averse, but less successful, politicians. (By the way, there’s a lot more detail about this in the book “The City That Creates The Future: Rob van Gijzel’s …”.)
This kind of thinking recalls the 1932 declaration by the most politically successful and most re-elected US President, Franklin Roosevelt: “The country needs and, unless I mistake its temper, the country demands bold, persistent experimentation. It is common sense to take a method and try it: If it fails, admit it frankly and try another. But above all, try something.”
That brings up another important point in this time of focus on cities. Innovation and future-orientation are not just about mayors.
Presidents aside, another example of long-term vision comes from Buddy Villines, who was chief executive of Pulaski County (Little Rock, Arkansas) for twenty-two years until the end of 2014.
At a time when many public officials are disdained by a majority of their constituents, these long-time mayors – successful both as politicians and for the people of their cities – should be a model for their more short-sighted peers.
At the end of the year, there are many top 10 lists of the best movies, best books, etc. of the year. Here’s my list of the best non-fiction books I’ve read this year. But it has only eight books, and some were published earlier than this year since, like the rest of you, I’m always behind in my reading no matter how many books, articles, and blogs I read.
Although some are better than others, none of these books is perfect. What book is perfect? But they each provide the reader with a new way of looking at the world, which in turn is, at a minimum, thought-provoking and, even better, helps us to be more innovative. I’ve highlighted the major theme of each, but these are books that have many layers and depth, so my summaries only touch on what they offer.
Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan
We’ve had a few scary books out this past year or so about how robots are going to take our work from us and enslave us. Kaplan’s brilliant book, published this year, is much more nuanced and sophisticated. It is not just “ripped from today’s headlines”. Instead, Kaplan provides history and deep context. Especially interesting is his discussion of the legal and ethical issues that arise when we use more of these technologies.
Creating the Learning Society by Joseph Stiglitz & Bruce Greenwald
Stiglitz, the Nobel Prize-winning economist, has been better known for “The Great Divide: Unequal Societies and What We Can Do About Them”, which was published this year and is a sequel to his earlier book on the subject, “The Price of Inequality” (2012). While those deal with the important issue of economic inequality, at this point that’s not news to most of us.
Less well known, if more rigorous as a work of economics, is his 2013 book “Creating the Learning Society”. With all the talk about the importance of lifelong learning and innovation to succeed in the economy of this century, there have been few in-depth analyses of how that translates into economic growth and greater incomes. Nor has there been much about the appropriate government policies to make a modern economy grow. Stiglitz provides both in this book.
The End Of College: Creating the Future of Learning and the University of Everywhere by Kevin Carey
Speaking of lifelong learning, I found this book thought-provoking, especially as a college trustee. Published this year, the rap on it is that it’s all about massive open online courses (MOOCs), but it is actually about much more than that. It provides a good history of the roles that colleges have been asked to play and describes a variety of ways that many people are trying to improve the education of students.
BiblioTech by John Palfrey
Palfrey was the patron of Harvard Law School’s Library Lab, is one of the nation’s leading intellectual property experts, and is now chairman of the Digital Public Library of America, among other important positions. BiblioTech, which was published earlier this year, describes a hopeful future for libraries – including a national network of libraries. (Readers of this blog won’t be surprised that Palfrey and I share many views, although he put these ideas all together in a book and, of course, elaborated on them much more than I do in these relatively short posts.)
Too Big To Know by David Weinberger
About five years ago, I got to work a bit with David Weinberger when he was one of the leaders of the library innovation lab at Harvard Law School, in addition to his work at Harvard’s Berkman Center. When I was introduced to the library lab’s ambitious projects, I joked with David that his ultimate ambition was to do nothing less than organize all of the world’s knowledge for the 21st century. This book, which was published a year later, is, I suppose, a kind of response to that. My reading of Weinberger’s big theme is that we can no longer organize the world’s knowledge completely. The network itself has the knowledge. As the subtitle says: now “the smartest person in the room is the room” itself. Since not all parts of the network are directly connected, there’s also knowledge yet to be realized.
Breakpoint by Jeff Stibel
Despite the overheated subtitle of this book, published in 2013, it is somewhat related to Weinberger’s book in that it focuses on the network. Using analogies from ant colonies and the neuron network of the human mind, Stibel tries to explain the recent past and the future of the Internet. As the title indicates, a key concept of the book is the breakpoint – the point at which the extraordinary growth of a network stops and its survival depends upon enrichment, rather than attempts at continued growth. As a brain scientist, he also argues that the Internet, rather than any single artificially intelligent computer, is really the digital equivalent of the human brain.
Previously I’ve devoted whole posts to two other significant books. Just follow the links below:
Jony Ive, who is credited with the design of many of Apple’s greatest products, was promoted to the position of Chief Design Officer last week. The company’s announcement seemed to say that Ive would now be bringing “design thinking” to all of the company, not just its products. Some pundits said this was a graceful way to ease him out of the picture. Others said it freed him to spend more time on the “next big thing.”
Maybe he should indeed be refocusing on product design, since for me, his promotion renewed the question as to whether Apple has lost its way in the design of tech products.
Apple can still create nice feature improvements in its products, but it seems to be missing the larger aims of design. Specifically, think back to Apple’s showing of the iPad 3 in 2012. Its video introduction of that product began with these words:
“We believe technology is at its very best when it’s invisible. When you’re conscious only of what you’re doing, not the device you’re doing it with.”
This is as good as it gets in describing the role of design in technology products. Yet over the last couple of years, Apple’s products have gotten mostly bigger and more obvious.
The iPhone 6 grew bigger than the iPhone 5, mostly it would seem to catch up to the competition. The iPod Nano, a useful and small device, was discontinued and replaced by a larger version.
So now, instead of seeing someone holding an Apple product like this …
people go around absurdly armed like this.
The Apple Watch is another example where Apple did anything but hide the technology. This is all the more perplexing when you look at what they could have designed – something more like the Neptune Hub, an attempt to create an elegant new product category.
Given its small size and dependency on crowdfunding, there’s every conventional reason to question how long Neptune can last.
Given its marketing power and reputation, there’s no reason to think that the Apple Watch will not be at least a conventional, moderate business success. But whatever success the Watch has had and will have cannot be attributed mostly to design – which Apple used to claim as one of its chief differentiators.
I’m not predicting the demise of Apple – a demise that has been heralded by others ever since Steve Jobs’ death. It’s very hard to drive a company with $150 billion in the bank to extinction any time soon. And Apple’s products are not bad at all. (I suppose that’s faint praise 😉)
It may seem churlish to criticize the largest company in the world, one that seems headed toward being the first with a trillion-dollar stock market valuation. But money is not the measure of all things, as the old line goes and as Jobs himself implied.
What Apple perhaps is facing is a kind of typical corporate maturity – with solid products, but a greater emphasis on sales/marketing and management processes, rather than on design and the user’s needs. It was exactly that kind of shift that Steve Jobs criticized in Robert Cringely’s Lost Interview with him. Jobs’ target was IBM and HP. But no company is immune, as he knew.
The observations that Jobs made about what it takes for both small and big companies to make great leaps are even more relevant today than twenty years ago when the video was recorded. Those of us who consume new technology can only hope that somewhere out there is another Jobs who has learned his lessons – and who will ensure newly designed products that get closer to being invisible, after all.
My last post was about the fight over intellectual property. A few weeks before that I wrote about what a book is in a digital age and suggested that librarians could become the equivalent of DJs for books.
Pulling those two themes together, this post is about what some libraries are already doing that can shift the balance in book publishing.
But first, a bit of history. When public libraries were first established well over a hundred years ago, one of their primary responsibilities was to purchase books on behalf of their community. Then the community members could share all these books, without having to buy separate copies.
Until the mid-20th century, this worked in favor of publishers, since libraries were, in general, their most reliable market for books. Libraries also helped build markets of readers that the publishers could sell to; many people eventually bought the books they had borrowed because they liked them so much. The library was a kind of try-and-buy location.
As the industry grew, selling direct to an ever more educated public in the latter half of the 20th Century, many book publishers started thinking that libraries reduced their sales, rather than enhancing them. But that was a battle the publishers had lost long ago and couldn’t do much about.
Moreover, the point is largely moot in this century, now that e-books have overtaken traditional print book publishing. Even if that growth trend has slowed a bit recently, the battle between publishers and libraries has been renewed around e-books, not printed books.
The traditional publishers – the Big 5 – have taken an especially restrictive approach to e-books, perhaps in the hope of escaping the historical role that public libraries have played for printed books. Until less than two years ago, some publishers even refused to sell e-books to libraries. They still restrict the number of times an e-book can be lent or charge extraordinary prices for them.
This pattern continues despite some good arguments that publishers could benefit from a more supportive relationship with libraries, as laid out by the marketing expert, David Vinjamuri.
But any significant change, like e-books, can be a two-edged sword. They may be an opportunity for big publishers to change the rules. But they are also an opportunity for libraries.
Unlike with printed books, there is effectively no limit on how many e-books a library can store. And librarians have noticed that many of their patrons are writing e-books. Much of the spectacular growth in e-books has been among self-published authors. (Amazon even makes this easy with its CreateSpace service.)
With this background, a movement has developed among libraries to become the publishing platform for authors or, at least, to partner with self-publishing services.
Although he narrowly lost, one of the candidates in the election a few days ago for president of the American Library Association was Jamie LaRue, who has built his reputation in large part as a leader of the library publishing movement.
There are already several interesting examples across the country. The Los Gatos Public Library has joined with the Smashwords self-publishing company. The Provincetown, Massachusetts library – proudly “Ranked #1 in the US by Library Journal” – has created its own self-publishing agency, Provincetown Press.
The much larger Los Angeles Public Library is using the Self-e platform from Library Journal and BiblioBoard. The February 2015 issue of Library Journal quotes John Szabo, LAPL’s director and one of the most innovative national library leaders:
“We are and will continue to be a place for content creation… It’s a huge role for libraries. … I want to see our authors not just all over California but circulating from Pascagoula, MS, to Keokuk, IA.”
Too often, news of new library services does not get widely publicized and is only seen by those already patronizing libraries. So it was helpful that LAPL’s platform for local authors was reported a couple of weeks ago in a publication that people outside the library world might well read – LA Weekly.
With the Internet enabling easier collaboration and co-creation than ever before, as I’ve noted in this blog, we are also seeing examples of self-publishing that go beyond an individual author.
Topeka Community Novel Project describes its ideal: “A community novel is one that is collaboratively conceptualized, written, illustrated, narrated, edited and published by members of your community.”
Publishing by academic libraries and other non-traditional publishers is an increasing factor in research as well. PLOS (Public Library of Science) is perhaps the best-known adherent of “open access” publishing: its papers are peer-reviewed as in traditional journals, but the articles are available online, free to read, with no restrictions on their use.
Academic journals and books have been very expensive, and not all of that cost can be eliminated by this new approach – the peer review process, for example, still has to be managed. But the cost is much lower: PLOS charges authors a relatively modest fee.
Overall, the initiatives that I’ve highlighted here are part of a digital age trend in which we’ll see more librarians going beyond being mere collectors of big publishing companies’ books to being curators and creators of content.
The public policy battle that defined the 19th century pitted labor against capital. While that battle has not completely ended, the battle that may define this century is about “intellectual property” – who owns it, what others can do with it, and indeed whether ideas and innovations can or should be treated as property in the same way that land or a car is property.
To go back to first principles, copyrights (and patents) were made part of the US Constitution primarily to encourage innovation by granting monopoly control over an idea for a limited time. The point was not to protect the value of that monopoly for its own sake; the monopoly was just the tool the authors of the Constitution used to provide incentives for any inventive genius then living in the hinterlands.
Of course, a lot has changed since then. We realize much better now that it is rare for a lone genius to come up with an idea or even a creative work, without having been influenced, perhaps even collaborating in a way, with many others. So the importance of that monopoly incentive for all the people who play a role in creating something new is not as clear today. Nor are the current owners of copyrights and patents necessarily the original creators.
The current battle was set off more than ten years ago by the growth of the Internet. Then, in 2004, the famed intellectual property law professor Lawrence Lessig wrote “Free Culture”. He explained: “Free Cultures are cultures that leave a great deal open for others to build upon. Ours was a free culture. It is becoming less so.”
He argued that ever since the Constitution set out a limited term for copyrights, Congress has continually extended that term and added other restrictions on the use of copyrighted material, until now there is almost no public domain left. His concern is that few works remain – and no new ones are being added – that are free to be used and to contribute to our common knowledge without restrictions.
As he wrote: “At the start of this book, I distinguished between commercial and noncommercial culture. In the course of this chapter, I have distinguished between copying a work and transforming it. We can now combine these two distinctions and draw a clear map of the changes that copyright law has undergone.”
He then proceeded to show that in 1790 only works that were published commercially were covered by copyrights. In this century “the law now regulates the full range of creativity — commercial or not, transformative or not—with the same rules designed to regulate commercial publishers.” Moreover, these copyrights extend for a much longer period of time.
Lessig worries that this imbalance and overuse of copyrights is diminishing the vibrancy of our culture and ultimately reducing innovation.
By the way, I opened this post with a reference to how your car is considered a very traditional kind of property. But nothing really escapes the digital age, so it’s fascinating to see the lawyers for John Deere and General Motors recently claiming that you don’t completely own their vehicles. Yes, you own the mechanical parts, but they claim to retain ownership of the software that is now a critical component of those vehicles – so, no, you don’t really own that car after all. Yet another example of the extremism that Lessig criticizes.
This past year, the science-fiction author and activist, Cory Doctorow, wrote his update to the debate in “Information Doesn’t Want to Be Free: Laws for the Internet Age”.
But, bringing some wisdom and historical perspective to the debate at hand, he adds a delightful chapter about the principle he calls “Every Pirate Wants to Be an Admiral”:
“It’s not as though this is the first time we’ve had to rethink what copyright is, what it should do, and whom it should serve…
When piano rolls were invented, the composers, whose income came from sheet music, were aghast. They couldn’t believe that player-piano companies had the audacity to record and sell performances of their work. They tried—unsuccessfully—to have such recordings classified as copyright violations.
Then (thanks in part to the institution of a compulsory license) the piano-roll pirates and their compatriots in the wax-cylinder business got legit, and became the record industry.
Then the radio came along, and broadcasters had the audacity to argue that they should be able to play records over the air. The record industry was furious, and tried (unsuccessfully) to block radio broadcasts without explicit permission from recording artists. Their argument was “When we used technology to appropriate and further commercialize the works of composers, that was progress. When these upstart broadcasters do it to our records, that’s piracy.”
A few decades later, with the dust settled around radio transmission, along came cable TV, which appropriated broadcasts sent over the air and retransmitted them over cables. The broadcasters argued (unsuccessfully) that this was a form of piracy, and that the law should put an immediate halt to it. Their argument? The familiar one: “When we did it, it was progress. When they do it to us, that’s piracy.”
Then came the VCR, which instigated a landmark lawsuit by the cable operators and the studios, a legal battle that was waged for eight years, finishing up in the 1984 Supreme Court “Betamax” ruling. You can look up the briefs if you’d like, but fundamentally, they went like this: “When we took the broadcasts without permission, that was progress. Now that someone’s recording our cable signals without permission, that’s piracy.”
Sony won, and fifteen years later it was one of the first companies to get in line to sue Internet companies that were making it easier to copy music and videos online…
The purpose of copyright shouldn’t be to ensure that whoever got lucky with last year’s business model gets to stay on top forever. Live music is great, but what a rotten thing it would have been if the winners of the live-music lottery in 1908 had been allowed to strangle recorded music to protect their turf.”
Doctorow noted that the public has found ways around the ever-increasing copyright restrictions, albeit of questionable legality. Ultimately, the book is intended to help the creative artists and innovators of our day – if not the corporate owners of intellectual property – understand how they might still make a living.
Chris Anderson also offered ideas along these lines in his 2009 book, “Free: The Future of a Radical Price”, though more in response to the fact that the cost of products, including creative products, has dropped dramatically. That, too, is a factor in the intellectual property debate.
A couple of years ago, I participated in the annual innovation summit that the Wharton School of the University of Pennsylvania runs. I remember a generational divide in the room among the people who made a living in biotech and pharmaceuticals. The younger executives told the older ones that they could no longer expect the kind of monopoly returns on their patented drugs that they used to have, because new ideas and inventions are now so much easier to generate.
And, getting back to basics, wasn’t the original point to encourage innovation? As befits a battle of the century, there’s lots more to say about this controversy, but I’ll save that for later. This post is long enough for an opening round.
When people talk about innovative places, they often refer to Silicon Valley or New York or some other urban megalopolis. By contrast, most of us have a sense that rural areas around the world face overwhelming problems. Some of us – hopefully the readers of this blog – also know there’s great future potential in those areas.
And that potential is being realized in a few corners of the world that might surprise you. Consider the countryside in the southern part of the Netherlands – the small city of Eersel and the other towns and farms nearby.
You may even have an image of the place from Vincent Van Gogh’s paintings of potato farmers 130 years ago. (He lived in the nearby town of Nuenen.)
It’s a different place today. Not different in the way much of the world has gone – with modern cities replacing what had been primitive countryside – but rather a modern countryside.
Taking me on a tour of this region two weeks ago was Mr. Kees Rovers, a long-time supporter of the Intelligent Community Forum (ICF), a noted telecommunications entrepreneur, and a speaker on the impact of the Internet. Years ago he was a leader in bringing a high-speed fiber network to Nuenen. Now he’s working on bringing fiber networks to the nearby town of Eersel.
The region’s record of innovation is perhaps partly, but not only, due to the presence of Philips research labs in the city of Eindhoven. As Wikipedia has noted:
“The province of Noord-Brabant [which contains the areas I’m describing] is one of the most innovative regions of the European Union. This is shown by the extensive amount of new research patents by Eurostat.”
The community’s leaders support innovators and take pride in local innovation, which contributes to this local culture of innovation. Eersel Mayor Anja Thijs-Rademakers, along with Mr. Harrie Timmermans (City Manager/Alderman) and Mrs. Liesbeth Sjouw (Alderman), joined Mr. Rovers and me on visits to three good examples of innovation in the countryside.
First, we saw the van der Aa family farm, which has invested in robotics – robots for milking the cows and robots to clear the barn of the manure the cows produce in great quantity. Think of a bigger, smarter, more necessary version of the Roomba, like the one in this picture.
Then we visited Vencomatic, which was created by a local entrepreneur but is now a global business, still based in the countryside. In addition to pioneering animal-friendly technology for the poultry industry, their headquarters won the award as “Europe’s most sustainable commercial building”.
The final stop was at Jacob Van Den Borne’s potato farm in Reusel. He described his use of four drones, numerous sensors deep in the ground, analytics, and scientific experiments to increase quality and production on the land. You can see his two-minute video about precision agriculture (in Dutch, with English captions) at https://www.youtube.com/watch?v=nlS8nVaI698
This is a picture of a potato farmer that Van Gogh could never have imagined.
Of course, what’s missing in this picture of innovation – and ultimately limits the growth of that innovation and its spirit – is broadband beyond the more densely populated villages. That’s why Rovers and the City of Eersel are deploying broadband away from the town center, using the motto “Close The Gap”. (Mr. Rovers is also the Founder/Director of the NGO of the same name.)
It’s also something that Van Den Borne knows, so he has organized a co-operative to build out broadband in the countryside that doesn’t have connectivity yet. Then he can take his innovations to a whole new level.
Whether it’s just an unusually strong regional culture of innovation or the historical necessity of being creative in rural areas where you can’t just pay someone down the block to solve your problems, this region of the world sets a good example for many other rural areas. That, in part, is what motivates us to continue ICF’s efforts to build a new connected countryside everywhere.
With all the work going on in libraries to digitize materials and also to manage materials that are born digital, someone asked what this means for the traditional role of librarians as collectors on behalf of their patrons.
Someone else pointed out that librarians are moving beyond being collectors of materials to being curators. OK, but what does curation mean now and going forward?
In many ways, the deeper question is “what is a book” in the digital age?
To provide some context, imagine each book that has been written as a highway. Reading the traditional printed book of the pre-digital era was like getting on a highway and not getting off until it ended – unless, of course, you simply stopped reading.
But when you’re reading a digital book, you might see something interesting from this highway – at any point – and get off for a look. You might return quickly or keep going further and further away from the highway (i.e., the original book you started reading). You might also want to follow a meandering path that someone else charted or “discovered” before you.
So we are practically past the age when the book as a body of written material was siloed between hard covers and stood in isolation from other books. The book is no longer a discrete and fixed product.
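To make the highway analogy concrete, here is a minimal sketch (the titles and links are invented for illustration): if each book is a node and each digital cross-reference an exit, a reader’s wanderings become a path through a graph.

```python
# Hypothetical web of digitally linked books; each entry lists the
# works a reader can "exit" to from that book.
LINKS = {
    "Moby-Dick": ["Whaling History", "Shakespeare's Plays"],
    "Whaling History": ["Nantucket Travel Guide"],
    "Shakespeare's Plays": [],
    "Nantucket Travel Guide": [],
}

def follow_path(start, choices):
    """Trace a reader's path: at each step, take the chosen exit
    if it is linked from the current book; otherwise stay on the
    current 'highway'."""
    path = [start]
    current = start
    for choice in choices:
        if choice in LINKS.get(current, []):
            current = choice
            path.append(current)
    return path

# A reader starts Moby-Dick, detours into whaling history,
# then wanders further off to a travel guide.
print(follow_path("Moby-Dick", ["Whaling History", "Nantucket Travel Guide"]))
# → ['Moby-Dick', 'Whaling History', 'Nantucket Travel Guide']
```

A path like this, once recorded, is exactly the kind of “meandering path that someone else charted” which another reader could later follow.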
Some developments have already begun to recognize this change. For example, there are the “adaptive textbooks” from McGraw Hill Education and Harvard’s H20 Adaptable Digital Textbooks.
Indeed, over the last few years, the Harvard University Library Innovation Lab has been doing some of the most interesting work along these lines anywhere in the world. Its relevant projects and experiments include:
Consilience: which “provides an interactive user interface to help users discover different ways of grouping sets of documents, to zoom into each cluster within a selected group, and to zoom in further into individual documents”
Atlas Viewer: a way to explore geographic materials spatially, rather than one page at a time
StackLife: which “lets you browse all of the items in Harvard’s 73 libraries and Book Depository as if they were on a single shelf.”
In the new world of digital reading paths, how can the reader avoid getting totally lost and confused – or perhaps get lost just enough to discover new things for themselves? Better yet, how can people be helped to discover new things and new connections that no one had found before, from which we would all benefit?
How many of the people who have traditionally been trying to help readers – librarians, reviewers, editors, writers and others – are prepared to deal with the fact this is even a question they have to answer?
Perhaps librarians and editors should start thinking of themselves as the equivalent of digital DJs who organize and often mash-up music from a variety of artists. These DJs become star entertainers themselves – way more than mere musical curators. (Librarians should also note that the most successful of these DJs earn millions of dollars. See Forbes’ review “The World’s Highest-Paid DJs”.)
With a new mindset about the role of a librarian, and more of the kind of experimentation and technology Harvard has created, the rest of us will be better able to navigate the digital reading highways and byways.
I’ve written, as recently as a couple of weeks ago, about innovation in government. There are many examples, although many more are needed. Despite – or maybe because of – financial constraints and opposing interests ritually stuck in old debates, creativity can flourish in government as it does elsewhere.
But I was reminded by readers that public officials – or executives of corporations, for that matter – don’t always know how to create a culture of innovation. In response, I remembered a book published a bit less than a year ago, titled “Creativity, Inc.” by Ed Catmull, the founder and CEO of the very successful animation film studio, Pixar, and now also the head of Disney Animation.
The book is partly a biography and partly about film making. But it is mostly one of the best books on management in a long time. Many reviewers rightfully cite his wisdom, balance and humility and note that this book goes beyond the usual superficialities of most management books.
Catmull talks about how to run a company in a creative business, but it applies to many other situations. It certainly applies to the software and technology business, in general. It also applies to government.
One of the major themes of the book is that things will always go wrong and perfection is an elusive goal, even in companies that produce outstanding work. Leaders need to set the proper frame for all stakeholders.
To put this in the context of politics, a successful elected official I know has concluded that it’s not a good idea to go around (figuratively) wearing a white robe, touting your perfection. As soon as one small spot appears on that white robe, it will be noticed and condemned by everyone. Instead, it’s best to let the public know that you too are human and will make a few mistakes, but those mistakes are in the interest of making their lives better.
Catmull puts it this way:
“Change and uncertainty are part of life. Our job is not to resist them but to build the capability to recover when unexpected events occur. If you don’t always try to uncover what is unseen and understand its nature, you will be ill prepared to lead.”
“Do not fall for the illusion that by preventing errors, you won’t have errors to fix. The truth is, the cost of preventing errors is often far greater than the cost of fixing them.”
The last point has a larger message: success is less about the right way [the process] to fix a problem than about actually fixing the problem.
“Don’t confuse the process with the goal. Working on our processes to make them better, easier, and more efficient is an indispensable activity and something we should continually work on— but it is not the goal. Making the product great is the goal.”
Government, in general, would do well to convert as many activities as it can from being processes to being projects, whose aim is to achieve clear and discrete results.
Along with many of us who have supported open innovation and citizen engagement, he points out that good ideas can come from anywhere inside or outside the organization:
“Do not discount ideas from unexpected sources. Inspiration can, and does, come from anywhere.”
And he adds that good managers don’t just look to employees for new solutions, but for help in an earlier stage – defining what the real problem is.
In government, you often hear the line that “information is power”, and thus many leaders hoard that information. Catmull, on the contrary, argues for the need for open communication:
“If there is more truth in the hallways than in meetings, you have a problem. Many managers feel that if they are not notified about problems before others are or if they are surprised in a meeting, then that is a sign of disrespect. Get over it.”
Of course, actually having good communication isn’t any easier in government than it is anywhere else. Catmull suggests that it is the top leaders who have to make the major effort for good communication to occur – and that it is in their own interest to do so. How many times have you been blindsided by something that others knew was a problem, but that didn’t reach you until it was a full-fledged crisis?
“There are many valid reasons why people aren’t candid with one another in a work environment. Your job is to search for those reasons and then address them. … As a manager, you must coax ideas out of your staff and constantly push them to contribute.”
This brief review doesn’t do justice to the depth of the book. And I’m sure that many public officials could draw more parallels than I have.
Clearly the government would run better, the public would be better served and public officials would be more successful if creativity ruled in the public sector as well as it has at Pixar.
The National Association of Counties just concluded its annual mid-winter Legislative Conference in Washington, DC. I was there in my role as NACo’s first Senior Fellow.
As usual, its Chief Innovation Officer, Dr. Bert Jarreau, created a three-day extravaganza devoted to technology and innovation in local government.
The first day was a CIO Forum, the second day NACo’s Technology Innovation Summit, and the final day a variety of NACo committees on IT, GIS, and more.
County governments – especially the best ones – get too little recognition for their willingness to innovate, so I hope this post provides some sense of what county technologists and officials are discussing.
One main focus of the meetings was on government’s approach to technology and how it can be improved.
Jen Pahlka, founder and Executive Director of Code For America and former Deputy Chief Technology Officer in the White House, made the keynote presentations at both the CIO Forum on Friday and the Tech Summit on Saturday – and she was a hit in both.
She presented CfA’s seven “Principles for 21st Century Government”. The very first principle is that user experience comes before anything else. The use of technology is not, contrary to some internal views, about “solving” some problem that the government staff perceive.
She pointed out that the traditional lawyer-driven design of government services actually costs more than user-centric design. (I’ll have more on design in government in a future blog post.)
She referred to the approach taken by the United Kingdom’s Government Digital Service (for more about them, see https://gds.blog.gov.uk/about/). When she was in the White House, she took this as a model and helped create the US Digital Service.
She also discussed the importance of agile software development. She suggested that governments break up their big RFPs into several pieces so that smaller, more entrepreneurial and innovative firms can bid. This perhaps requires a bit more work on the part of government agencies, but they would be rewarded with lower costs and quicker results.
More generally she drew a distinction between the traditional approach that assumes all the answers – all the requirements for a computer system – are known ahead of time and an agile approach that encourages learning during the course of developing the software and changing the way an agency operates.
By way of example, she discussed why the Obamacare website initially failed. It used the traditional waterfall method, not an agile, iterative approach. It didn’t involve real users testing and providing feedback on the website. And, despite the common wisdom to the contrary, the development project was too big and over-planned.
It was done in a way that was supposed to reduce risk, but instead was more risky. So she asked the NACo members to redefine risk, noting that yesterday’s risky approach is perhaps today’s prudent approach.
Cloud computing is helping this shift along. Oakland County (Michigan) CIO Phil Bertolini has found that cloud computing reduces government’s past dependence on big capital projects to deploy new technology, thus allowing for more day-to-day agility.
Finally Jen Pahlka suggested that government systems needed to be more open to integration with other systems. In a phrase, “share everything possible to share”. She showed an example where the government let Yelp use government restaurant inspection data and in turn learn about food problems from Yelp users. (And, of course, sharing includes not just data, but also software and analytics.)
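The restaurant-inspection example can be sketched in a few lines of Python. This is only an illustration of the general idea of combining a government’s open data with a review site’s signals – the actual Yelp integration isn’t described in the post, and the data, names, and thresholds below are all invented.

```python
import csv
import io

# Hypothetical sample of open restaurant-inspection data, in the CSV
# form a county might publish (names and scores are invented).
INSPECTIONS_CSV = """restaurant,inspection_score
Joe's Diner,92
Corner Cafe,68
Main St Grill,85
"""

# Hypothetical counts of food-related complaints from a review site.
REVIEW_COMPLAINTS = {"Corner Cafe": 7, "Joe's Diner": 1}

def merge_signals(csv_text, complaints, score_threshold=75, complaint_threshold=3):
    """Combine official inspection scores with crowd-sourced complaint
    counts, flagging restaurants that look risky on either signal."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        name = row["restaurant"]
        score = int(row["inspection_score"])
        n_complaints = complaints.get(name, 0)
        if score < score_threshold or n_complaints >= complaint_threshold:
            flagged.append(name)
    return flagged

print(merge_signals(INSPECTIONS_CSV, REVIEW_COMPLAINTS))
# → ['Corner Cafe']
```

The point of the sketch is the two-way flow Pahlka described: the government’s data improves the review site, and the review site’s complaint counts flow back as an early-warning signal for inspectors.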
In another illustration of open innovation in the public sector, Montgomery County, MD recently created its Thingstitute as an innovation laboratory where public services can be used as a test bed for the Internet of Things. Even more examples were discussed in the IT Committee. Maricopa County, Arizona and Johnson County, Kansas, both now offer shared technology services to cities and nearby smaller counties. Rita Reynolds, CIO of the Pennsylvania County Commissioners Association, discussed the benefits of adopting the NIEM approach to data exchanges between governments.
The second major focus of these three days was cybersecurity.
Dr. Alan Shark, Executive Director of PTI, started off by revealing that the latest surveys show security is, for the first time, the top concern of local government CIOs. Unfortunately, many don’t have the resources to respond to the threat. Actually, it’s more a reality than merely a threat: it was noted that, on average, it takes organizations 229 days to find out they’ve been breached, and that close to 100% have been attacked or hacked in some way. It’s obviously prudent to assume yours has been too.
Jim Routh, Chief Information Security Officer (CISO) of the insurer Aetna, recommended a more innovative approach to responding to cybersecurity threats. He said CIOs should ignore traditional advice to try to reduce risk. Instead, “take risks to manage risk”. (This was an interesting, if unintentional, echo of Jen Pahlka’s comments about software development.)
Along those lines, he said it is better to buy less mature cybersecurity products, in addition to or even instead of the well-known ones. The reason is that the newer products better address new issues in an ever-changing world – and they cost less.
There was a lot more, but these highlights provide plenty of evidence that at least the folks at NACo’s meetings are dealing with serious and important issues in a creative way.
“Both Kansas City, Mo., and Kansas City, Kan., have Google Fiber, a high-speed fiber-optic network, and are having a hard time figuring out what to do with so much power.”
Considering the woe and anxiety of the people the reporter interviewed in these two cities, you might call this the angst of the gig cities. I’m not normally critical in these blog posts, but for those of us without gigabit connections to the world, this angst doesn’t generate much sympathy – and it makes us wonder about the thought process of some folks.
Let’s start with the headline that bemoans the fact that there is no single killer app yet to justify the gigabit bandwidth, but that they are still looking for one. Back in the days when PCs were first introduced, supposedly the spreadsheet was the killer app that sold those computers. And graphics was the “killer app” that sold the Mac originally.
But I’m not sure there is any single killer app for a fundamental technology like communications. Was there one thing that drove increased phone usage 50 years ago? Was there only one “app” that drove people to the web more recently?
The story also had this observation:
“[The] managing director of the KC Digital Drive, a nonprofit that is trying to figure out new ways to use Google Fiber, said people were expecting too much. So instead of something otherworldly, [he] said the more likely outcome would be souped-up versions of things that already existed.”
How sad. To use an analogy, even though they’re driving high-powered new cars, they’re talking and thinking “horseless carriage”, not sports car.
I can’t believe that the communities complaining they don’t know what to do with a gigabit lack imagination, but that’s the way it comes across in this article. Surely there are creative people in Kansas City – not just software developers – and they ought to be challenged to come up with many ways to wow the rest of the residents.
The article goes into a bit of an aside about the various ways cities have deployed broadband – Google Fiber, conventional telecommunications providers, and homegrown networks. I haven’t seen enough research comparing these Google Fiber cities with other cities that accomplished a similar build-out by themselves.
Perhaps, though, the problem of not knowing what to do with gigabit connections is greater in places where the community didn’t have to organize itself as much to get that bandwidth. By contrast, cities like Chattanooga, which had to work harder to build out their own networks, perhaps have deeper cultures of innovation and entrepreneurship – which is why they supported their own gigabit build-outs to begin with.
There’s also a big gap between a gigabit connection and the few megabits that most Americans experience much of the time. I suppose that’s also part of the gig cities’ problem. Perhaps they are feeling lonely – a bit like being the only person in town with a phone in the old days.
Maybe the new gig cities would find more things to do if they’d only begin to connect to other Americans at even a tenth of that speed.
This post is about some of the more interesting and unusual news items that provide continuing evidence of the way that online collaboration is upending old ways of doing things in several domains.
In the past, we’ve depended upon social and behavioral scientists, news media, and other authoritative figures to assess our collective emotional state. Now there’s the We Feel project of Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO). As CSIRO describes it:
We Feel is a project that explores whether social media—specifically Twitter—can provide an accurate, real-time signal of the world’s emotional state.
Hundreds of millions of tweets are posted every day. … We Feel is about tapping that signal to better understand the prevalence and drivers of emotions. We hope it can uncover, for example, where people are most at risk of depression and how the mood and emotions of an area/region fluctuate over time. It could also help understand questions such as how strongly our emotions depend on social, economic and environmental factors such as the weather, time of day, day of the week, news of a major disaster or a downturn in the economy.
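To make the idea concrete, here is a minimal sketch of how tapping that signal might work: tag each tweet with emotion categories by matching its words against a lexicon, then aggregate the counts. The lexicon, the categories, and the sample tweets below are illustrative placeholders of my own, not CSIRO’s actual data or method.

```python
# Sketch: lexicon-based emotion tagging and aggregation over a tweet stream.
# The lexicon and categories here are hypothetical, not the We Feel ones.
from collections import Counter

EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "miserable": "sadness",
    "angry": "anger", "furious": "anger",
    "worried": "fear", "scared": "fear",
}

def tag_emotions(tweet: str) -> set:
    """Return the set of emotion categories whose lexicon words appear in the tweet."""
    words = tweet.lower().split()
    return {EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON}

def aggregate(tweets: list) -> Counter:
    """Count how many tweets express each emotion category."""
    totals = Counter()
    for t in tweets:
        totals.update(tag_emotions(t))
    return totals

tweets = [
    "so happy about the weekend",
    "miserable weather today, feeling sad",
    "furious about the traffic",
]
print(aggregate(tweets))  # one joy, one sadness, one anger tweet
```

A real system would of course need a far richer lexicon, language handling, and geographic and temporal bucketing to track how an area’s mood fluctuates, but the aggregation principle is the same.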
Another domain which has more obviously been dominated by experts is medicine. While many hospitals and physicians are still working out their systems for electronic health records and billing in a changed insurance environment, patients are not waiting. Nor are various businesses – as we are already seeing an onslaught of wearable devices to help people track health from both large established companies and startups.
Going beyond health tracking to health management and finding a way to bring in medical expertise when it’s really needed is the next step, although not a simple matter. But uMotif is tackling the issue. As they say:
Health systems across the world are under increasing pressure. The demands are rising, but resources often can’t keep pace. One way to help relieve the pressure is for people to engage more in their own health. Taking greater control, ownership and responsibility for keeping well.
[uMotif offers] Software for health self-management and shared decision making, supporting patients and clinicians; strengthening relationships; improving healthcare.
And then there’s the Longitude Prize, created in the 18th century by the British government, which went to whoever could devise a workable way to determine a ship’s longitude.
In a sequel to that original prize, there is now a new Longitude Prize 2014 in the UK. But instead of an official body determining the topic, this being the 21st century, the Longitude Committee used crowdsourcing and asked the public to submit ideas.
The public’s choice of a new challenge?
“In order to tackle growing levels of antimicrobial resistance, the challenge set for the Longitude Prize is to create a cost-effective, accurate, rapid and easy-to-use test for bacterial infections that will allow health professionals worldwide to administer the right antibiotics at the right time.”
Reading this, many observers might make the traditional assumption that the challenge aims to encourage heavy thinking by experts in biology, disease, DNA and the like. But the Longitude Committee states right up front on their website:
Now that the antibiotics challenge has been chosen, we want everyone, from amateur scientists to the professional scientific community, to try and solve it.
Nesta [the National Endowment for Science Technology and the Arts in the UK] and the Longitude Committee are finalising the criteria for how to win the £10 million prize, and from the autumn you will be able to submit your entries.
I’ve previously described the success that Zooniverse has had in amateur science, but the Longitude Committee has upped the ante considerably by offering such a large prize. Good luck to all my readers!
For three days, there was a special focus on technology and more interesting presentations than I can summarize here. Sometime next week, you will be able to see video of Saturday’s Innovation and Technology Summit at NACo.org.
Here are some of my observations:
The VP of the Maui Economic Development Board described their strategy. I cheered when she said that, notwithstanding the traditional incentives and approaches of economic development, the most important thing is to “grow your own”. She went on to describe how they are focused on workforce development and all kinds of creative, only-in-Hawaii learning opportunities. But much of that targeted children. In an economy where adults need to keep refreshing their skills and knowledge until well past what used to be retirement age, adults also need access to learning opportunities.
The Directors of the Health and Human Services Departments of Montgomery County, Maryland and San Diego County, California both focused on outcomes. This too is an important step forward beyond the usual output measures that have dominated performance data in government. Montgomery County also puts as much emphasis on social return on investment as on pure financial return on investment.
One other part of the San Diego presentation caught my attention: that counties need to lead the “higher levels” of government. In the face of Federal government dysfunction for the last several years, most local and state governments have taken the approach of go ahead without waiting for the Feds to take action. So we’ve seen much more innovation at the sub-national level than at the national level. Now it seems that some sub-national governments are actively upending the pyramid of power and hoping to guide the Federal government to a more innovative posture.
There was a keynote speech by a White House staffer on open data and much discussion of open data on various panels. Rich Leadbeater of ESRI rightly pointed out that “open data is not an end in itself. It’s what you do with it.” This is a refreshing attitude since too many governments seem to spend a lot of time congratulating themselves for making the data available on the Internet and leaving things at that.
Some governments have encouraged private companies to develop apps with this data. Curiously, those governments have not usually embedded the apps into their own systems, so these companies are left on their own to get citizens to know about them. Worse, too many governments think that asking private companies to create these apps absolves them of their own responsibility. The reality is that not all the applications that are needed, or that can be developed with open data, will generate the revenue a private company seeks – but those apps are still useful for the public to have. The only way they will be created is if the government does the development itself or pays for the app to be developed. Considering that the costs of software development have gone down considerably over the past decade, this is not something that can easily be dismissed as beyond the budget.
In my end-of-day review and commentary on the sessions, I offered my reaction to the data being put out on the web – “TMI, TLK”. Too much information, too little knowledge. Governments should recognize that they and their constituents have to start working together to make sense of all that data and use it to make improvements in policies and programs.
There have been recent articles featuring primarily Sebastian Thrun, an early leader of massive open online courses (MOOCs) and founder of Udacity, a company which specializes in developing and delivering MOOCs.
The first was a piece in Fast Company about how Thrun has been disappointed by the experience of MOOCs. This was followed by a more positive piece in the New York Times about changes in MOOCs that are being considered in order to address their failures. The failures turn out to be the small percentage of people who actually complete the full course and the fact that most of them already have degrees.
However, the discussion might be misleading. The question is not so much whether online courses are good or bad, but how difficult it is to succeed with a new innovation by casting it as a minor modification of something that already exists. In this case, the idea that online learning should be very much like a typical college course, just delivered online, may not have been innovative enough. For example, the Khan Academy, which packages learning into ten-minute videos that anyone can access, is a much greater departure from convention and has also been much more successful.
Indeed, the fact that many MOOC students already have degrees should perhaps make MOOC developers reconsider their target. Perhaps MOOCs will be much more appealing as a cost-effective means of lifelong learning for those who cannot afford the time or additional money to attend college than for those who would be college students.
In a knowledge age, the biggest challenge is how to provide learning opportunities for all adults – all of whom need to continue to learn.
(Disclosure: While this blog has had previous posts on higher education, the topic is now more relevant since I was recently appointed to the board of Westchester Community College. Of course, my views do not represent those of the College now – or, as it may turn out, even in the future. 😉)
The National Association of Counties’ Large Urban County Caucus – LUCC, as it is known – represents the largest counties in the country, where a significant percentage of Americans live. LUCC held its 2013 County Innovation Symposium in New York City last week from Wednesday through Friday.
(I was invited in my new role as the first Senior NACo Fellow.)
Although Thursday’s schedule included sessions on health care, criminal justice and resilience, the meeting on the other two days focused on economic development. Bruce Katz of the Brookings Institution’s Metropolitan Policy Program and co-author of the recent book, “The Metropolitan Revolution: How Cities and Metros Are Fixing Our Broken Politics and Fragile Economy” kicked off Friday morning.
He and other panelists noted the evolving role of counties and NACo itself, as the old suburban vs. urban disputes are overtaken by important socio-economic trends.
First, there is now an increased understanding and recognition among public officials of the metropolitan – really regional – nature of economies. The old game of providing incentives to companies to move within a metropolitan area, resulting in no new jobs for the region, is wearing thin.
Second, the global nature of the economy implies that regions are now competing with each other, not localities. And only a regional scale can generate the funds necessary to compete on a global basis.
Third, the demographic differences that used to separate suburban and urban areas are diminishing. The two are beginning to look a lot alike. Brookings’ research indicates that today there are more poor people in suburbs than in cities.
Along with this discussion of economic strategy, there was a strong interest in encouraging innovation and in learning how to get good innovations to diffuse quickly. This interest is one reason why NACo has appointed Dr. Bert Jarreau as its first Chief Innovation Officer.
With that in mind, the group went to visit Google’s New York labs. (It is interesting to see Google’s entry into the sub-national arena over the last year or so, as more traditional IT companies have withdrawn somewhat from this market.)
A predictable big hit was the demonstration of Google Glass and a discussion of Glass apps, called GlassWare, that might be of value in the public sector.
There were also presentations of two applications that were extensions of Google’s search and other tools. One was for integrated predictive policing, with heavy use of video cams (both public and private) and unstructured, narrative data. Similarly, Macomb County, MI (population 900,000) showed how it uses a search tool, called SuperIndex, for text and images of land records. The latter, by the way, is financially self-supporting.
By the end of the meeting, NACo LUCC decided to make this innovation symposium an annual event. It is often these kinds of unexpected, under-the-radar developments that surprise people later. County governments have not had a reputation for innovation, but keep your eyes open for what develops with this group.
A few weeks ago, the New York Times had a story about HP and its telecommuters – “Back-to-Work Day at H.P.” While not quite calling for an end to telecommuting as Yahoo had done earlier this year, HP said it had added space and “invited” its employees back to the office. Once again it seemed that a big tech company was doing a decidedly untech thing – downplaying the use of technology and pointing out how it can’t really substitute for old-fashioned patterns of interaction.
How do tech companies expect people to believe them, if their words don’t match their actions?
While the current technology for virtual interactions and a virtual workforce can certainly be improved, it’s not the major obstacle anymore. A more important part of the disconnect between words and actions is that these tech companies are engineering leaders, but not leaders in organizational culture – and it is culture that is the real hurdle here.
Tech and non-tech companies that want to ensure success for their virtual workforce need to build an appropriate culture and practices.
For example, everyone involved with telecommuting needs to understand that email, texts, and even phone calls constitute only a small part of the communications that human beings expect, and are insufficient to support a high level of trust. However, video chatting does enable people to get much of what would be communicated in person and has been shown to enhance trust. So video ought to be the rule, not the exception, for virtual interaction.
Another important part of the culture of innovative companies is the encouragement of random interactions and collaboration among people. This is what underlies the Three C’s which Tony Hsieh of Zappos emphasizes: collision, community and co-learning.
He clearly believes that this is only possible in a physical environment. But these three C’s can also be well supported in a virtual environment, if the company sets up that environment for such collisions and makes it a part of its everyday culture. Indeed, the range of people who can interact easily in the virtual workforce is much greater than in a physical office.
The company also needs to ensure that telecommuters don’t feel their chance of career advancement is dramatically diminished unless they show up at the office and hobnob with the right executives. The article “Creating an Organizational Culture that Supports Telework” relates a good example of this situation, along with good general guidance on the positive actions that companies need to take.
In sum, as James Surowiecki wrote earlier this year in the New Yorker:
“At companies with healthier corporate cultures, it [telecommuting] often works well, and [former head of Xerox PARC] Seely Brown has shown how highly motivated networks of far-flung experts — élite surfers, say — use digital technologies to transmit knowledge much as they would in person.”
Building a 21st century culture of successful virtual interaction won’t come easily to companies that developed their more traditional culture in the 20th century. But in an increasingly virtual and mobile world, it will be necessary for the HPs, Yahoos, and others to flourish.