More Than A Smart City?

The huge Smart Cities New York 2018 conference started today. It is billed as:

“North America’s leading global conference to address and highlight critical solution-based issues that cities are facing as we move into the 21st century. … SCNY brings together top thought leaders and senior members of the private and public sector to discuss investments in physical and digital infrastructure, health, education, sustainability, security, mobility, workforce development, to ensure there is an increased quality of life for all citizens as we move into the Fourth Industrial Revolution.”

A few hours ago, I helped run an Intelligent Community Forum Workshop on “Future-Proofing Beyond Tech: Community-Based Solutions”. I also spoke there about “Technology That Matters”, which this post will quickly review.

As with so much of ICF’s work, the key question for this part of the workshop was: Once you’ve laid down the basic technology of broadband and your residents are connected, what are the next steps to make a difference in residents’ lives?

I have previously focused on the need for cities to encourage their residents to take advantage of the global opportunities in business, education, health, and more that become possible when you are connected to the whole world.

Instead, in this session, I discussed six steps that are more local.

1. Apps For Urban Life

This is the simplest first step, and many cities have encouraged local or not-so-local entrepreneurs to create apps for their residents.

But many cities that are not as large as New York are still waiting for those apps. I gave the example of Buenos Aires as a city that didn’t wait and built more than a dozen of its own apps.

I also reminded attendees that there are many potentially useful apps for their residents which cannot generate enough profit to be of interest to the private sector, so governments will have to create these apps on their own.

2. Community Generation Of Urban Data

While some cities have posted their open data, there is much data about urban life that residents themselves can collect. The most popular example is the community generation of environmental data, with products like the Egg, the Smart Citizen Kit for Urban Sensing, the Sensor Umbrella, and even more sophisticated tools like Placemeter.

But the data doesn’t just have to be about the physical environment. The US National Archives has been quite successful in getting citizen volunteers to generate data – and meta-data – about the documents in its custody.

The attitude which urban leaders need is best summarized by Professor Michael Batty of University College London:

“Thinking of cities not as smart but as a key information processor is a good analogy and worth exploiting a lot, thus reflecting the great transition we are living through from a world built around energy to one built around information.”

3. The Community Helps Make Sense Of The Data

Once the data has been collected, someone needs to help make sense of it. This effort too can draw upon the diverse skills in the city. Platforms like Zooniverse, with more than a million volunteers, are good examples of what is called citizen science. For the last few years, there has been Open Data Day around the world, in which cities make available their data for analysis and use by techies. But I would go further and describe this effort as “popular analytics” – the virtual collaboration of both government specialists and residents to better understand the problems and patterns of their city.

4. Co-Creating Policy

Once the problems and opportunities are better understood, it is time to create urban policies in response.  With the foundation of good connectivity, it becomes possible for citizens to conveniently participate in the co-creation of policy. I highlighted examples from the citizen consultations in Lambeth, England to those in Taiwan, as well as the even more ambitious CrowdLaw project that is housed not far from the Smart Cities conference location.

5. Co-Production Of Services

Next is the execution of policy. As I’ve written before, public services do not always have to be delivered by paid civil servants (or by better-paid companies with government contracts). The residents of a city can be co-producers of services, as exemplified in Scotland and New Zealand.

6. Co-Creation Of The City Itself

Obviously, the people who build buildings or even tend to gardens in cities have always had a role in defining the physical nature of a city. What’s different in a city that has good connectivity is the explosion of possible ways that people can modify and enhance that traditional physical environment. Beyond even augmented reality, new spaces that blend the physical and digital can be created anywhere – on sidewalks, walls, even in water spray. And the residents can interact and modify these spaces. In that way, the residents are constantly co-creating and recreating the urban environment.

The hope of ICF is that the attendees at Smart Cities New York start moving beyond the base notion of a smart city to the more impactful idea of an intelligent city that uses all the new technologies to enhance the quality of life and engagement of its residents.

© 2018 Norman Jacknis, All Rights Reserved

When Strategic Thinking Needs A Refresh

This year I created a new, week-long, all-day course at Columbia University on Strategy and Analytics. The course focuses on how to think about strategy both for the organization as a whole and for the analytics team. It also shows the ways that analytics can help determine the best strategy and assess how well that strategy is succeeding.

In designing the course, it was apparent that much of the established literature in strategy is based on ideas developed decades ago. Michael Porter, for example, is still the source of much thinking and teaching about strategy and competition.

Perhaps a dollop of Christensen’s disruptive innovation might be added into the mix, although that idea is no longer new. Worse, the concept has become so diluted in popular usage that too often every change is mistakenly treated as disruptive.

Even the somewhat alternative perspective described in the book “Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant” is now more than ten years old.

Of the well-established business “gurus”, perhaps only Gary Hamel has adjusted his perspective in this century – see, for example, this presentation.

But the world has changed. Certainly, the growth of huge Internet-based companies has highlighted strategies that do not necessarily come out of the older ideas.

So, who are the new strategists worthy of inclusion in a graduate course in 2018?

The students were exposed to the work of fellow faculty at Columbia University, especially Leonard Sherman’s “If You’re in a Dogfight, Become a Cat! – Strategies for Long-Term Growth” and Rita Gunther McGrath’s “The End Of Competitive Advantage: How To Keep Your Strategy Moving As Fast As Your Business”.

But in this post, the emphasis is on strategic lessons drawn from this century’s business experience with the Internet, including multi-sided platforms and digital content traps. For that there is “Matchmakers: The New Economics of Multisided Platforms” by David S. Evans and Richard Schmalensee, as well as Bharat Anand’s “The Content Trap: A Strategist’s Guide to Digital Change”.

For Porter and other earlier thinkers, the focus was mostly on the other players a company was competing against (or decided not to compete against). For Anand, the role of the customer and the network of customers becomes more central in determining strategy. For Evans and Schmalensee, getting a network of customers to succeed is not simple and requires a different kind of strategic framework than industrial competition.

Why emphasize these two books? It might seem that they focus only on digital businesses, not the traditional manufacturers, retailers, and service companies that earlier strategists studied.

But many now argue that all businesses are digital, just to varying degrees. For the last few years we’ve seen the repeated headline that “every business is now a digital business” (or some minor variation) from Forbes, Accenture, and the Wharton School of the University of Pennsylvania, among others. And about a year ago, we read that “Ford abruptly replaces CEO to target digital transformation”.

Consider then the case of GE, one of the USA’s great industrial giants, which offers a good illustration of the situation facing many companies. A couple of years ago, it expressed its desire to “Become a Digital Industrial Company”. Last week, Steve Lohr of the New York Times reported that “G.E. Makes a Sharp ‘Pivot’ on Digital” because of its difficulty making the transition to digital and especially making the transition a marketing success.

At least in part, the company’s lack of success could be blamed on its failure to fully embrace the intellectual shift from older strategic frameworks to the more digital 21st century strategy that thinkers like Anand, Evans and Schmalensee describe.

© 2018 Norman Jacknis, All Rights Reserved

Too Many Unhelpful Search Results

This is a brief follow-up to my last post about how librarians and artificial intelligence experts can get us all beyond mere curation and our frustrations with web search.

In their day-to-day Google searches many people end up frustrated. But they assume that the problem is their own lack of expertise in framing the search request.

In these days of advancing natural language algorithms, that isn’t a very good explanation for users or a good excuse for Google.

We all have our own favorite examples, but here’s mine because it directly speaks to lost opportunities to use the Internet as a tool of economic development.

Imagine an Internet marketing expert who has an appointment with a local chemical engineering firm to make a pitch for her services and help them grow their business. Wanting to be prepared, she goes to Google with a simple search request: “marketing for chemical engineering firms”. Pretty simple, right?

Here’s what she’ll get:

She’s unlikely to live long enough to read all 43,100,000+ hits, never mind reading them before her meeting. And, aside from an ad on the right from a possible competitor, there’s not much in the list of non-advertising links that will help her understand the marketing issues facing a potential client.

This is not how the sum of all human knowledge – i.e., the Internet – is supposed to work. But it’s all too common.

This is the reason why, in a knowledge economy, I place such a great emphasis on deep organization, accessibility and relevance of information.

© 2017 Norman Jacknis, All Rights Reserved

What Comes After Curation?

[Note: I’m President of the board of the Metropolitan New York Library Council, but this post is only my own view.]

A few weeks ago, I wrote about the second chance given to libraries, as Google’s role in the life of web users slowly diminishes. Of course, for at least a few years, one of the responses of librarians to the growth of the digital world has been to re-envision libraries as curators of knowledge, instead of mere collectors of documents. It’s not a bad start in a transition.

Indeed, this idea has been picked up by all kinds of online sites, not just libraries. Everyone, it seems, wants to aggregate just the right mix of articles from other sources that might interest you.

But, from my perspective, curation is an inadequate solution to the bigger problem this digital knowledge century has created – we don’t have time to read everything. Filtering out the many things I might not want to read at all doesn’t help me much. I still end up having too much to read.

And we end up in the situation summed up succinctly by the acronym TL;DR – too long, didn’t read. (Or my version in response to getting millions of Google hits – TMI, TLK: “too much information, too little knowledge”.)

The AQAINT project (2010) of the US government’s National Institute of Standards and Technology (NIST) stated this problem very well:

“How do we find topically relevant, semantically related, timely information in massive amounts of data in diverse languages, formats, and genres? Given the incredible amounts of information available today, merely reducing the size of the haystack is not enough; information professionals … require timely, focused answers to complex questions.”

Like NIST, what I really want – maybe what you want or need too? – is someone to summarize everything out there and create a new body of work that tells me just what I need to know in as few words as possible.

Researchers call this abstractive summarization and this is not an easy problem to solve. But there has been some interesting work going on in various universities and research labs.
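To make the distinction concrete, here is a toy *extractive* summarizer in Python, a sketch of my own for illustration and not any of the systems mentioned in this post. It can only select sentences that already exist in the text (here scored by word frequency); abstractive systems are harder precisely because they must generate new sentences, typically with neural sequence-to-sequence models.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by the average frequency of its words in
    the whole text, then keep the top sentences in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    stopwords = {'the', 'a', 'an', 'of', 'to', 'and', 'is', 'in', 'it', 'that'}
    freq = Counter(w for w in words if w not in stopwords)

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Rank sentence indices by score, keep the best, restore text order.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)[:n_sentences]
    return ' '.join(sentences[i] for i in sorted(ranked))

doc = ("Libraries curate knowledge. Libraries also help readers find knowledge. "
       "Summarization tools could help libraries deliver knowledge faster.")
print(extractive_summary(doc, n_sentences=1))
# → Libraries curate knowledge.
```

Note what this baseline cannot do: it will never paraphrase, merge facts from several sentences, or shorten a sentence. Doing that automatically is the open research problem the groups below are working on.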

At Columbia University, Professor Kathleen McKeown and her research colleagues developed “NewsBlaster” several years ago to organize and summarize the day’s news.

Among other companies, Automated Insights has developed some practical solutions to the overall problem. Their Wordsmith software has been used, for example, by the Associated Press “to transform raw earnings data into thousands of publishable stories, covering hundreds more quarterly earnings stories than previous manual efforts”.

For all their clients, they claim to produce “over 1.5 billion narratives annually”. And these are so well done that the New York Times had an article about it that was titled “If An Algorithm Wrote This, How Would You Even Know?”.

The next step, of course, is to combine many different data sources and generate articles about them for each person interested in that combination of sources.

Just a few months ago, Salesforce’s research team announced a major advance in summarization. Their motivation, by the way, is the same as mine:

“In 2017, the average person is expected to spend 12 hours and 7 minutes every day consuming some form of media and that number will only go up from here… Today the struggle is not getting access to information, it is keeping up with the influx of news, social media, email, and texts. But what if you could get accurate and succinct summaries of the key points…?”

Maluuba, acquired by Microsoft, has been continuing earlier research too. As they describe their research on “information-seeking behaviour”:

“The research at Maluuba is tackling major milestones to create AI agents that can efficiently and autonomously understand the world, look for information, and communicate their findings to humans.”

Librarians have skills that can contribute to the development of this branch of artificial intelligence. While those skills are necessary, they aren’t sufficient and a joint effort between AI researchers and the library world is required.

However, if librarians joined in this adventure, they could also offer the means of delivering this focused knowledge to the public in a more useful way than just dumping it into the Internet.

As I blogged a few months ago:

Librarians have many skills to add to the task of “organizing the world’s information, and making it universally accessible”. But as non-profit organizations interested in the public good, libraries can also ensure that the next generation of knowledge tools – surpassing Google search – is developed for non-commercial purposes.

So, what comes after everyone has tried curation? Abstractive summarization aided by artificial intelligence software, that’s what!

© 2017 Norman Jacknis, All Rights Reserved

The War Is Over! Big Cities Win

In Tuesday’s New York Times, there was an article explaining “Why Big Cities Thrive, and Smaller Ones Are Being Left Behind”, as the headline put it. The article was filled with sad stories about small cities and small metro areas facing “dismal performance”. It said that, in the face of a global technological revolution, these cities “may be too small to [adapt and to] survive.”

The accompanying graphic is a vivid demonstration of the point that the economic war is over and big cities have won – with their huge urban concentrations of people. And so, the author of the article ends it with advice that

“the future for the residents of small-city America looks dim. Perhaps the best policy would be to help them move to a big city nearby.”

There is no doubt that the graphic image is correct and that many small cities, towns and rural areas have suffered economically over the last couple of decades.

How could this have happened when the Internet and technology were supposed, instead, to “kill distance” and diminish the importance of big cities?

I have argued that we are not really in the Internet age, despite – or because – of all the chatting, social media and email. A virtual version of the kind of casual conversations and interactions that happen in cities is still missing. The way Internet technology is used today limits our interactions. But that situation won’t last forever as more people, including those outside of the big metro areas, finally do get and use ubiquitous, easy and transparent videoconferencing.

This reminds me of my experience with the impact of the web on newspapers. When the web was first becoming popular, I was with a company working on software that was intended in part to help newspapers make the transition to a digital world. Although we weren’t successful in getting most newspapers to respond to the challenge (and opportunity), I was witness to the online discussions of newspaper employees as they struggled with the web phenomenon.

Through most of the 1990s, they were mildly concerned about the threat. When the dot-com bubble burst in 2000, these folks reassured each other that this web thing was indeed a passing fad. Shortly after that widespread agreement that the predictions of the impact of technology were mistaken, newspapers started to decline and shed staff.

Bill Gates has provided another way of looking at this:

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

A hint that there is more to the story could be found the same day in another New York Times article by the paper’s long-time technology reporter Steve Lohr. The story, “Start-Up Bets on Tech Talent Pipeline From Africa”, reported on a pool of tech talent in Lagos, Nairobi and Kampala. While Lagos and Nairobi are fairly large cities, none of these three cities is what people normally think about when talking about the metro areas that are the flagships of the global economy. They are not New York or London or San Francisco.

Yesterday (10/11/17) another NY Times story, “As ‘Unicorns’ Emerge, Utah Makes a Case for Tech Entrepreneurs”, appeared about the

“thriving technology hub in the roughly 80-mile swath from Provo to Ogden, with Salt Lake City in between. The region has given rise to at least five companies valued at more than $1 billion.”

Among those featured was Domo, an analytics company based in American Fork, Utah.

At ICF, we’ve seen a number of small cities and other non-metro areas that have flourished by taking advantage of the Internet and using broadband to connect their residents to anyone in the world.

While the wealth and advantages that large metros inherited from the industrial age are still reflected in their role today, as we continue into this century the intelligent use of technology to build thriving communities and quality of life will help cities of any size. So perhaps the obituary of small towns is not just premature, but misleading.

© 2017 Norman Jacknis, All Rights Reserved

Libraries As Platforms For Big Data

The yearlong theme of the New York State Regents Technology Policy and Practice Council (TPPC) is data. Given the Regents’ responsibility for education, the council’s focus is on data in education, but not just data arising from schools. Beyond education, they are thinking about data that is or could be offered through libraries, museums, public broadcasting, and the like.

With this background, Nate Hill, Executive Director of the Metropolitan New York Library Council, and I (in my role as METRO’s board president) have been asked to make a presentation on this subject when the group meets today. That is partly because of METRO’s role as the umbrella organization for all kinds of libraries, museums, archives and, more generally, information professionals in the New York area.

They also want to know about METRO’s leading role in working on data and digital content, even open data. (And Nate Hill’s work on an open data platform at the Chattanooga Public Library, before he came to New York, is also relevant.)

Of course, this is not a new subject to me either, as I wrote more than three years ago in “What Is The Role Of Libraries In Open Government?”

Here in a nutshell are some of the main ideas that we are presenting today:

-> There has already been a start on big data and analytics in K-12 education. Unfortunately, all of the tests that kids take are one manifestation of this application of analytics. But there are other good sources of data for the classroom, like those supplied by NOAA.


-> Data has another use, however. It can motivate students and encourage them to be curious. How? If instead of using the standard, remote examples in texts for most subjects, the examples were drawn from data collected in and about their own community, where they live.


-> Drawing on themes from my Beyond Data talk in Europe, “Is Open Data Good Enough?”, it’s important not to just depend upon the data that some governments publish on their websites. There is a world of data that is of public interest, but is not collected by governments. And data alone isn’t insightful – for that, analytics and human inquiry are necessary, both of which students and older scholars can provide.

-> Libraries have been the curators of digital content and increasingly can be the creators, as well. Whether this is through mashups or linked data or the application of their own analytics skills, libraries will be extending and making more useful the raw data that has already been made public.


-> Libraries have historically been community centers where issues could be discussed in an objective manner. But when so many people are not satisfied with merely being consumers of content and instead act as producer-consumers, libraries can offer the intellectual resources, the tools and the platform for citizens to play a role in investigating data on public issues and in co-creating the solutions.


Our hope is that METRO can help to show the future paths for the open data movement in all of its venues and maybe even provide the platform we envision in our talk today. If you’d like to join in this effort, please contact Nate Hill or me.


© 2017 Norman Jacknis, All Rights Reserved

A Guide Book For Newcomers To The Connected Countryside

In the 19th century, there was a movement to reject the then-developing modern industrial society and live in nature as isolated individuals. Henry David Thoreau’s “Walden; or, Life in the Woods” provides a bit of the flavor of that world view.

That’s clearly not our world. In the past several years, a massive global migration to cities has been part of the conventional wisdom and has been frequently cited in various media.

But less noticed, although as important, is the exodus by young professionals from cities to the countryside. In various workshops, presentations and blog posts, I’ve pointed out that while many rural areas are dying, others are flourishing.

These newcomers to the countryside are not going there to reject our technological world. Instead, the 21st century world and its advanced communications capabilities are what make it possible for them to live in a place where they prefer the quality of life. They are living in the countryside, but very much a part of the world.

A Stateline article, “Returning to the Exurbs: Rural Counties Are Fastest Growing”, highlights the relevant comments of Professor Joel Garreau, comparing this to the earlier exurbs of three decades ago:

“workers no longer need to be confined to one place. Professionals with certain skills could work and live where they wanted—and the world would come to them in the guise of a brown UPS truck … today’s urban exiles aren’t looking for a lengthy commute from the far suburbs to a downtown office. … They’re looking to completely eradicate the notion of commuting to work and toiling from 9 to 5.”

Moreover, this is a phenomenon in many technologically advanced nations, as noted four years ago in William van den Broek’s article, “Toward a digital urban exodus”.

One of the sites on the internet that documents the successes, challenges and hardships of these new residents is the Urban Exodus. It was created by one of them – Alissa Hessler, who moved from the high-tech center of Seattle to rural Maine several years ago. On the website, there are already stories of about 50 couples and individuals who made the move.

Recently, Hessler wrote what amounts to a thoughtful, practical and well-illustrated guide book based on what she has learned from her own and others’ experiences – “Ditch The City and Go Country: How to Master the Art of Rural Life From a Former City Dweller.”

This is required reading if you’re considering such a move full time, or even living in the countryside for two, three, or four days a week.

If you’re a confirmed city dweller, it is still an interesting read. The last two chapters, on “earning a living” and “enjoying the good life” in the countryside, especially define the impact of today’s technology and trends more sharply than do the cities that are still partly living off the inherited capital of the 20th century industrial age.

© 2017 Norman Jacknis, All Rights Reserved

The AI-Enhanced Library

I was asked to speak at the Cambridge (MA) Public Library’s annual staff development day a week ago Friday. Cambridge is one of the great centers of artificial intelligence (AI), machine learning and robotics research. So, naturally, my talk was in part about AI and libraries, going back to another post a year ago, titled “Free The Library”.

First, I put this in context: the increasing percentage of people who are part of the knowledge economy and have a continual need to get information, way beyond their one-time college experience. They are often dependent on googling what they need, with increasingly frustrating results, in part because of Google’s dependence on advertising and on keeping its users’ eyes on the screen rather than giving them the answers they need as quickly as possible.

Second, I reviewed the various ways that AI is growing and is not just robots handling “low level” jobs. Despite the portrayal of AI’s impact as affecting only non-creative work, I gave examples of Ross, “the AI lawyer”, and computer-created painting, music and political speeches. I showed examples of social robots with an understanding of human emotions and the kind of technologies that make this possible, such as RealEyes and Affectiva (another company near Cambridge).

Third, I pointed out how these same capabilities can be useful in enhancing library services.

This isn’t really new if you think about it. Libraries have always invented ways to organize information for people, but the need now goes beyond paper publications. Since their profession began, librarians have been in the business of helping people find exactly what they need, but the need is greater now.

Librarians have many skills to add to the task of “organizing the world’s information, and making it universally accessible”. But as non-profit organizations interested in the public good, libraries can also ensure that the next generation of knowledge tools – surpassing Google search – is developed for non-commercial purposes.

Not surprisingly, of course, the immediate reaction of many librarians is that we don’t have the money or resources for that!!

I reminded them of the important observation by Maureen Sullivan, former President of the American Library Association:

“With a nationally networked platform, library and other leaders will also have more capacity to think about the work they can do at the national level that so many libraries have been so effective at doing at the state and local levels.”

Thus, together, libraries can develop new and appropriate knowledge tools. Moreover, they can – and should – cooperate in a way that is usually off-limits to private companies who want to protect their “intellectual property.”

And the numbers are impressive. If the 70,000+ librarians and 150,000+ other library staff in the USA worked together, they could accomplish an enormous amount. Individual librarians could specialize in particular subjects, but be available to patrons everywhere.

And if they worked with academic specialists in artificial intelligence in their nearby universities, such as MIT and Harvard in the case of the Cambridge Public Library, they can help lead the way to the future in a knowledge century, not merely react to it.

The issue is not “either AI or libraries” but both reinforcing each other in the interest of providing the best service to patrons. Instead of being purely AI, artificial intelligence, this new service would be what is beginning to be a new buzzword – IA, intelligence augmentation for human beings.

Nor am I the only one talking this way. Consider just three examples of articles in March and May of this year that come from different worlds, but make similar points about this proposed marriage.

I ended my presentation with a call for action and a reminder that this is a second – perhaps last – chance for libraries. The first warning and call to action came more than twenty years ago in 1995 at the 61st IFLA General Conference. It came from Chris Batt, then of the Croydon Libraries, Museum and Arts in the UK:

“What are the implications of all this [advancing digital world] for the future of public libraries? … The answer is that while we cannot be certain about the future for our services, we can and should be developing a vision which encompasses and enriches the potential of the Internet. If we do not do that, then others will; and they will do it less well.”

© 2017 Norman Jacknis, All Rights Reserved  @NormanJacknis

Creativity Versus Copyright

In the Industrial Age, the fight between labor and the owners of industry (“capital”) was the overarching political issue. As we move away from an industrial economy to one based on knowledge, that debate is likely to diminish.

Instead, one of the big battles to be fought in this century will be over intellectual property — who controls it, who gets paid for it, how much they get paid, who owns it and whether ideas can properly be considered property in the same way we consider land to be property.

I’ve written about this before, but a recent NY Times story about the settlement of a Star Trek lawsuit brought this to mind, especially as I came across an interesting series of posts that provide some new perspectives.


These were written at the end of last year and the beginning of this year by the former chair of the Australian Film Critics Association, Rich Haridy.

His aim was to “examine how 21st century digital technology has given artists a set of tools that have dismantled traditional definitions of originality and is challenging the notions of copyright that came to dominate much of the 20th century.”


Here’s a quick, broad-brush summary of his argument for a more modern and fairer copyright system:

  • Not just in today’s digital world of remixes, but going back to Shakespeare and Bach and even before that, creative works have always been derivative from previous works. They clearly have originality, but no work is even close to being 100% original.
  • The tightening of copyright laws has undermined the original goal of copyrights — to encourage creativity and the spread of knowledge.
  • This reflects the failure of policy makers and the courts to understand the nature of creativity. This is getting worse in our digital world.
  • While the creators and distributors deserve compensation for their works, this shouldn’t be used as a reason to punish other artists who build and transform those works.
  • The enforcement is unequal. While bloggers and artists with limited financial means are easy targets for IP lawyers, the current system “while [theoretically] allowing for fair use, still privileges the rich and powerful, be they distributors or artists.”

It’s worth reading the series to understand his argument, which makes a lot of sense.

Haridy is not proposing the destruction of copyrights. But if arguments like his are not heeded, don’t be surprised if more radical stances are taken by others — just as happened in the past in the conflict between labor and capital.


© 2017 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/157274980178/creativity-versus-copyright]

Blockchain And The Arts

Huh? Blockchain and the Arts?

If you’ve heard about blockchain at all, it is most likely because of Bitcoin, the alternative non-state sanctioned currency. But the uses of blockchain go beyond Bitcoin.

If you don’t know about blockchains, there are many sources of information about them, including Wikipedia. For a little, but not too much, detail, I like this explanation:

“The Blockchain is a … database technology, a distributed ledger that maintains an ever growing list of data records, which are decentralised and impossible to tamper with. The data records, which can be a Bitcoin transaction or a smart contract or anything else for that matter, are combined in so-called blocks. In order to add these blocks to the distributed ledger, the data needs to be validated by 51% of all the computers within the network that have access to the Blockchain.

“The validation is done via cryptography, which means that a mathematical equation has to be solved … Once the validation is done, the Block will receive a timestamp and a so-called hash. This hash is then used to create the next block in the chain. If even one bit in the block changes, the hash will change completely and as a result, all subsequent blocks in the chain will change. Such a change has to be validated again by 51% of all the nodes in the network, which will not happen because they don’t have an incentive to work on ‘old’ blocks in the chain. Not only that, the blockchain keeps on growing, so you would require a tremendous amount of computing power to achieve that, which is extremely expensive. So it is simply not worth it to change any data. As a result, it is nearly impossible to change data that has been recorded on the Blockchain.”
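The hash-chaining that the quote describes can be sketched in a few lines. This is only a toy illustration – it leaves out the network, the 51% validation and the proof-of-work incentives – but it shows why changing one early record breaks every later link in the chain:

```python
import hashlib
import json


def make_block(data, prev_hash):
    """Bundle a data record with the hash of the previous block."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return {
        "data": data,
        "prev_hash": prev_hash,
        # The block's identity depends on its contents AND on prev_hash,
        # so it indirectly depends on every earlier block in the chain.
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }


# Build a three-block chain.
genesis = make_block("first record", prev_hash="0" * 64)
second = make_block("second record", genesis["hash"])
third = make_block("third record", second["hash"])

# Tamper with the first record and recompute its hash: it no longer
# matches the prev_hash stored in the next block, so the change is
# detectable everywhere downstream.
tampered = make_block("altered record", genesis["prev_hash"])
print(tampered["hash"] == second["prev_hash"])  # False
```

Because each block’s hash folds in the previous block’s hash, rewriting history means recomputing every subsequent block – which is exactly the “tremendous amount of computing power” the quote refers to.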


The protection of digital material from tampering, the strong validation and the decentralization of blockchains are especially attractive.

The potential uses of blockchain are a hot area for venture investment. And there’s a cottage industry in consultants providing advice on the subject. One of the most well-respected gurus of the business world, Don Tapscott, just co-authored a book on the subject with his son Alex, who is CEO of a venture capital firm that specializes in blockchain companies. It’s called “Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, And The World”.

More than a year ago, R3, a consortium of banks and related companies from around the world – now numbering about four dozen – started to develop their own blockchain.  

Sure, it’s understandable that bankers are interested in this technology. But artists?

I suppose some artist will, at some point, figure out how to use blockchain as a new art form – dropping little pieces of an artistic puzzle in a chain. But that’s not what the usual interest is about.

Instead, ever since the digital age started – quickly followed by widespread digital piracy and then reduced incomes for many artists – people have wondered how artists will be able to continue their artistic work, the long history of “starving artists” aside.

Blockchains have been gaining adherents as a way to help establish ownership and subsequent payment for use.

In their book, the Tapscotts describe a virtual nirvana for artists, built atop blockchains, which would enable artists to register their works, enter into “smart contracts” and generally be at the center of the creative ecosystem, rather than the lowest person on the totem pole.
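To make that idea concrete, here is a hypothetical sketch of what registering a work and encoding a smart-contract-style royalty split might look like. The registry, function names and percentage splits are all invented for illustration; real platforms work differently and store the record on an actual blockchain rather than in a local table:

```python
import hashlib

# Hypothetical registry of works. On a real blockchain platform this
# record would be validated and stored by the network, not a local dict.
registry = {}


def register_work(content: bytes, splits: dict) -> str:
    """Register a work's fingerprint along with who owns what share."""
    assert abs(sum(splits.values()) - 1.0) < 1e-9, "shares must total 100%"
    work_id = hashlib.sha256(content).hexdigest()  # content fingerprint
    registry[work_id] = splits
    return work_id


def pay_royalty(work_id: str, amount: float) -> dict:
    """A smart-contract-like rule: split a payment per the registered shares."""
    return {party: round(amount * share, 2)
            for party, share in registry[work_id].items()}


song = register_work(b"...master recording bytes...",
                     {"songwriter": 0.5, "performer": 0.3, "producer": 0.2})
print(pay_royalty(song, 100.0))
# {'songwriter': 50.0, 'performer': 30.0, 'producer': 20.0}
```

The appeal is that the split is attached to the work itself and executes automatically, instead of depending on a chain of intermediaries.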

Last week, Don Tapscott reiterated the point in “Blockchain Could Be Music’s Next Big Disruptor – Artists can finally get what they deserve”.

Daniel Cawrey made a similar argument in his article, “How Bitcoin’s Technology Could Revolutionize Intellectual Property Rights.”

There are already some startups providing blockchain services for intellectual property, such as Blockai and Ascribe.


And blockchain technology, in theory, could be beneficial for artists. But there are practical obstacles in making this happen and, in the long run, a fundamental flaw in the plan.

Let’s start with the practical question of how this gets set up. Here are just a few questions, off the top of my head.

Who does it? Without getting into the technical weeds here, there is also a question of what characteristics a particular blockchain service would have – yes, there are options. How can the “platform” provider be reimbursed? Do you start from scratch or try to negotiate with agencies already serving related functions, like ASCAP? Who polices all this to make sure that the record established in the blockchain is actually being used to compensate artists?


The bigger issue is treating ideas or creativity as “intellectual property” – in economic terms, as a private good instead of a public good. As we have learned, most inventions and creations are not the result of a solo hermit genius, but of direct or indirect collaboration. So this concept of the idea as the private property of the first person (or corporation) to claim it is debatable. New ideas and creative works may be more public than private goods.

As Thomas Jefferson, amateur scientist and political philosopher, said some time ago:

“He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”

I’m looking at this issue in a very pragmatic, non-ideological way. Simply, although technology may make it feasible to track ideas and government laws may try to put a private label on them, this often doesn’t work because it goes against the nature of the good. Some things might start out as private enterprises, but if they are in essence public goods, the private enterprise will fail.

Consider the development of mass transit in many cities, most of which franchised it to private companies until those companies realized they couldn’t really make a profit in the business.

I’m not sure what the answer is to prevent artists from starving because they can’t live on the money they receive while being artists. Blockchain may have a role. But the solution will take more than just continuing to think about the problem in a fundamentally flawed way as the protection of private property.

If you’re not an artist, you need to understand that this also affects you. How people get enough income to live comfortably is a growing problem in an age when an ever increasing number of people have to be creative – not just making music and art, but producing all kinds of ideas and works.

[Note: For a related blog post, see The Internet & The Battle Over Innovation.]

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/151670251222/blockchain-and-the-arts]

Free The Library

In our post-industrial, Internet world, an ever increasing percentage of the population has an ever increasing need for knowledge to make a living. This is why people have used the Internet’s search engines so much, despite being frequently frustrated by the volume and irrelevance of search results. They may also be suspicious of the bias and commercialism built into the results. Most of all, people intuitively grasp that search results are not the same thing as the knowledge they really want.

Thus, if I had to point to a single service that would dramatically raise the economic importance of libraries in this century, it would be satisfying this need in a substantive and objective way.

Yet, if you go to most dictionaries, you’ll find a definition of a library like this one from the Oxford Dictionary:

“A building or room containing collections of books, periodicals, and sometimes films and recorded music for people to read, borrow, or refer to”.

While few people would say that libraries shouldn’t provide books, as long as people want them, most librarians would point to the many services they have provided beyond collecting printed material.

Nevertheless, the traditional definition continues to limit the way too many librarians think. Even among those who object to the narrow definition in the dictionary, these two traditional assumptions about libraries are usually unquestioned:

  1. Library services are mostly delivered in a library building.
  2. Library services are mostly delivered by human beings.

My argument here is simple: if libraries are to meet the public needs of a 21st century knowledge economy, librarians must lift these self-imposed constraints. It is time to free the library and library services!

This isn’t as radical as it sounds. If we look deeper, more conceptually, at what has gone on in libraries, library services are about the community’s reserve of knowledge and sharing of information — and helping members of the community find what they need quickly, accurately and without bias. I’m proposing nothing different, except expanding the ways that libraries do this job.

The first of these two assumptions is the simplest one to abandon. Although the library building remains the focus for many in the profession, in various ways, virtual services are available through the web, chat, email or even Skype. (I’ve written before about the ways that library reference services could become available anywhere and be much improved through a national collaboration.)


The second assumption – the necessity for a human librarian at almost all points of service – will be a tougher one to discard.

Consider, though, one of the most important of the emerging, disruptive technologies – artificial intelligence and machine learning – which can supplement and enhance the ability of librarians to deliver information services well and at a scale appropriate for the large demand.

My hope is that, working with software and artificial intelligence experts, librarians will start creating machine learning and artificial intelligence services that will make in-depth, unbiased knowledge guidance and information reference universally available.

Doing that successfully as a national project will enable the library as an institution, if not a building, to reclaim its role as information central for people of all ages.

By the way, the use of artificial intelligence in libraries is not a new idea. In 1991, Charles W. Bailey wrote an article titled “Intelligent Library Systems: Artificial Intelligence Technology and Library Automation Systems”.

During the last several years, there have been a few experiments in using artificial intelligence to supplement reference services provided by human librarians. In the UK, the University of Wolverhampton offers its “Learning & Information Services Chatbot”.


A few weeks ago, the Knight News Challenge selected the Charlotte Mecklenburg Public Library’s DALE project with IBM Watson and described it as “the first AI enabled search portal within a public library setting.”

In a note that is very much in accord with my argument, they wrote:

“Libraries are the unsung heroes of the Information Age. In a world where everyone Googles for the right answer, many are unaware of the wealth of information that libraries have within their physical and digital collections.… DALE would be able to analyze the structured and unstructured data hidden within the public library’s vast collections, helping both staff and customers locate the information needed within one search setting.”

Despite the needs of library patrons, so far these examples are still rare for a couple of reasons.

Some people argue that libraries shouldn’t and maybe can’t compete with the big corporations, like Apple and Google, in helping people find the knowledge they need. As I’ve already noted above, many users experience these commercial services as a poor substitute for what they want.

In any case, abdicating its own responsibility is a disservice to library patrons and the public who have looked to libraries for objective, non-commercial information services for a very long time.

There is also a fear that wider use of artificial intelligence to help provide library services might put human librarians out of work. While that is not a concern that librarians generally discuss publicly, Steven Bell, Associate University Librarian of Temple University, wrote last month in Library Journal about this very subject – the potential for artificial intelligence to diminish the need for librarians. He called it the “Promise and Peril of AI for Academic Librarians”, although the article seemed to focus more on the peril.

This is the fear of every worker faced with the onslaught of technology and the resulting prospect of delivering more output in fewer hours. With artificial intelligence and related robotics, workers in industries where demand is not accelerating – like cars – may very well have something to worry about.

But the reality for librarians is different. The demand for information services is accelerating, so even in the face of greater productivity per person, employment prospects shouldn’t diminish.

Indeed, if these library services become real and gain traction, demand for them – and for the librarians who make them possible – will increase, because knowledge creates a demand for new knowledge. To use an ungainly and somewhat distasteful analogy, it is like an arms race.

My concern is neither about corporate competition nor unemployment. Rather, my fear is that the library profession will not easily abandon its self-imposed limitations and will not expand its presence and champion new technology for its services. If those limitations remain, the public – having been forced to go elsewhere to meet their needs – will in the end devalue and reduce their support for libraries.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/145256275028/free-the-library]

The Coding Craze

A computer coding craze has taken over the country. Everywhere you turn, public officials from President Obama on down in the US and around the world seem to be talking about the need to train folks in computer coding.

Governors and Mayors are asking their school systems to teach students how to code instead of other subjects. Many people who had little previous interest in computers or software – except as blissfully ignorant users – have signed up for often expensive courses on programming.


It’s not just in California or Seattle or Austin, but back in traditionally less high-tech places on the East Coast as well. Recently there were stories about coding classes in the Borough of the Bronx in New York City and as far south as Miami.

There may be good reasons to take these courses. A bit like the courses that schools used to teach about how the car combustion engine worked, learning to program may help people better understand how computers operate.

As with any creative activity, at the start, you can get a sense of accomplishment when witnessing your software creation come to life — once most of the bugs are eliminated 🙂 You can even reprise this feeling under special circumstances later on in your career. But much of the work of coders can, in the long run, become mind numbing.

The opportunity to design and create a great new app is like being invited to paint the Sistine Chapel. But the more frequent opportunities are like being invited to paint someone’s apartment.

Don’t get me wrong. As a long time software developer myself, I can say there are many satisfactions for developers who have both the knack and passion for software. But people who don’t have those attributes and just do it like any other job will be too easily frustrated.

And, honestly, even the positive side of life as a developer is not what is primarily driving this coding craze.

Much of the interest by public officials (like the governor of Arkansas) as well as the people enrolled in coding classes is based on their belief that these courses make possible employment opportunities that will endure for decades in a world in which traditional jobs have been automated or shipped overseas. Will they?


Surely, some people are going into coding just for an immediate bump in short-term income. Studies on the relatively new phenomenon of coding bootcamps seem to support this notion – that is, for the 65% of students who graduated and are working in a programming job. Even in those cases, the best results were for students who graduated from the more expensive and selective programs.

Yet, on balance, count me as a skeptic. I think this craze is, well, crazy. In the long run, I don’t think coding courses for the millions will lead to the affluent future and lifelong careers that many proponents envision.

First, as I’ve alluded to, these are not jobs that everyone who is learning to code will find satisfying. We may be too early into this craze to know how many people go into the field and last for more than a short while, but I’d expect the dropout rate to be high.

Second, there is the low level nature of what is being taught – how to write instructions in a currently fashionable language. While most coding courses focus on popular languages, like Ruby and Javascript, many of their students do not understand how the popularity of languages can come and go quickly.

Some languages last longer than others do, of course. Through sheer inertia and unwillingness to invest, there are still some existing programs written in old computer languages, like FORTRAN and COBOL. But there aren’t that many job openings for people coding in those old languages.

Wikipedia lists a variety of languages that have been created since 1980, approximately one a year:

1980 C++
1983 Ada
1984 Common Lisp
1984 MATLAB
1985 Eiffel
1986 Erlang
1987 Perl
1988 Tcl
1988 Mathematica
1990 Haskell
1991 Python
1991 Visual Basic
1993 Ruby
1993 Lua
1994 CLOS (part of Common Lisp)
1995 Ada 95
1995 Java
1995 Delphi (Object Pascal)
1995 JavaScript
1995 PHP
1996 WebDNA
1997 Rebol
1999 D
2000 ActionScript
2001 C#
2001 Visual Basic .NET
2003 Groovy
2003 Scala
2005 F#
2009 Go
2011 Dart
2014 Swift
2015 Rust
2016 ???

So if all they learn today is the syntax of one language, and they lack a deeper education, they may find that one skill falling out of favor.

Indeed, many of those students aren’t even being taught about the different kinds of programming languages – even classes of languages vary in popularity over time.

Instead, they are usually learning imperative languages, especially with a focus on low level procedures.

It is also not clear that the popular languages are the best ones to even teach basic coding, never mind understand software more generally.

Even the idea that any language is good enough to educate students about how computers work is misleading. Different classes of languages lead to different ways of thinking about how we can represent the world and instruct computers.
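To illustrate the difference, here is the same small task – summing the squares of the even numbers in a list – written first in the imperative style most classes teach, then in a functional/declarative style (both in Python, which supports each):

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: spell out the machine's steps, mutating state as you go.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional/declarative style: describe the result you want and let the
# language handle the stepping.
total_fp = sum(n * n for n in numbers if n % 2 == 0)

print(total, total_fp)  # 56 56
```

The answers are the same, but the habits of mind are different: one teaches you to simulate the machine, the other to express relationships between data.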

And finally, the trend in software, in fits and starts, has been to reduce the need for low-level programming. Originally, it was a move away from “machine instructions” to higher level languages. Then there were various tools for rapid application development. Today, there is the Low-Code or even No-Code movement, especially for apps.

You’ve heard of the App Economy, another part of the promised job future? Putting aside the debate as to whether the app phenomenon has already peaked, with these low-code tools, fewer coders will be needed to churn out the same number of apps as in the past.

And then over the horizon, computer scientists have been busy “Pushing the Limits of Self-Programming Artificial Intelligence” as one article states in its title.

Beyond all this, pure coding itself, even in past years, was only a small part of what made software successful. And a successful long term career in software requires an understanding of what goes beyond coding.

But this is enough in one post to get many people irked, so I’ll save that for a future post.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/141088761737/the-coding-craze]

New Funding For Libraries

The Metropolitan New York Library Council was invited to take part today in a working conference of the New York State Assembly Standing Committee on Libraries and Information Technology on the digital divide, broadband and especially library funding.  We — Nate Hill, Executive Director, and myself as Board president — took the opportunity to address the large and developing problem of how to fund libraries in this century.

We noted that these subjects are all part of a larger problem.  Libraries are delivering more and more digital content and services to larger numbers of people, especially those who are on the wrong side of the digital divide or who still need help navigating the digital economy.  These increased services require much higher bandwidth than most libraries can now offer, which puts an unfair and arbitrary cap on how well people can be served.


While the need for broadband in libraries and its value to the community is clear, what has been unclear and, at best, sporadic is the financing to make the broadband-based services possible.  When legislators thought about libraries as just another one of the state’s cultural resources, library funding was limited to a piece of cultural funding.

Now that libraries offer a broader array of services and can offer even more in a digital broadband era, the funding should also be more diversified. 

  • To the extent libraries support entrepreneurs and small business as both location for innovation and “corporate reference librarian”, a piece of the economic development budget should support libraries.
  • To the extent libraries support students, especially with homework help and after school resources, a piece of the very large education budget should support libraries.
  • To the extent libraries support workforce development and are the most cost-effective, often the only, way that adult learners can keep up their skills to be employable, a piece of the workforce development and public assistance budgets should support libraries.
  • To the extent libraries support public health education, a piece of the health budget should support libraries.

There are other examples, but the strategy is clear.  Library funding needs to come from a diverse set of sources, just as a good investor has a balanced portfolio and doesn’t have all the money in one stock.

Of course, in the longer run, public officials will recognize the role of the library as the central non-commercial institution of the knowledge age that we are entering.  As such, perhaps the permanent funding of libraries should be a very light tax on the commerce going through the Internet to support the digital public services that libraries provide.

To some degree, the principle of basing support for library broadband on telecommunications revenues has been established with the Federal E-Rate program.  But the amounts are relatively small and the telecommunications base is traditional phone service, which is diminishing, not the Internet, which is growing.

Whatever the source of funding may turn out to be, libraries need a consistent source of funding that grows with the demand for their services in this century.


© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/140390575384/new-funding-for-libraries]

Broadband And An Open Internet

Six US Senators, mostly from small rural states, wrote recently to the FCC about the inconsistencies they found between its recent report on broadband progress and its Open Internet order that was issued last March.

The FCC stated:

“We find that advanced telecommunications capability is not being deployed to all Americans in a reasonable and timely fashion… … many Americans still lack access to advanced telecommunications capability, especially in rural areas… the disparity between rural and urban Americans persists”.

The Senators:

  • objected to the FCC’s view that broadband is not being deployed fast enough
  • expressed their “concern” that the FCC’s broadband benchmark (25 Mbps download and 3 Mbps upload speeds) “discourages broadband providers from offering speeds at or above [that] benchmark.”
  • pointed out differences in broadband definitions between the Open Internet proposal and the broadband report
  • questioned why the Connect America Fund only subsidizes rural broadband at speeds of 10 Mbps download and 1 Mbps upload.

This post is not primarily about the issue of net neutrality, as important as that is. Instead, I hope to give an objective, third-party view of this debate about broadband from the perspective of the Intelligent Community Forum’s more than fifteen years of working with communities around the world and seeing what level and kind of broadband they need.

As might be expected, both sides of this dispute are somewhat off the mark.

Despite the progress that is being made in some parts of the USA by private companies or municipal agencies, the FCC’s statement that broadband is not being deployed in a timely fashion is essentially correct.


The Senators’ assertion that maintaining a 25/3 broadband benchmark discourages telecommunications companies and other Internet Service Providers from delivering more than this minimum does not make a lot of sense and is not supported by any evidence. Our observation is that, in areas where these companies feel under competitive threat, they manage to find the money to invest in upgrading speeds on their networks.

It’s also worth pointing out that the speeds that are promised by ISPs are seldom delivered, as anyone who has used Speedtest or similar services can attest. This reality seems unrecognized by both the Senators and the FCC.

The focus of the FCC and the Senators on download speeds ignores the need for upload speeds, especially for those who want to use broadband for business, health care and education. In some respects, it is best to look at the combination of upload and download speeds. The FCC’s discussion about fairness to big content providers might have misled them into thinking mostly about delivery of content from a central source and not to consider the world we have, where people are both consumers and producers of content.

The Senators’ statement that they are unaware of any application needing 25 Mbps ignores the demands of even the near term future. Broadband projects, according to the telecommunications companies, are major investments — presumably made to meet the needs of more than the next six months.

There was a time, perhaps a decade ago, when people couldn’t figure out why they needed more than dial-up speeds. Now they know and demand broadband. The FCC, the Senators and telecommunications companies all need to realize that even speeds above today’s benchmark will seem way too slow for the applications that are coming in a few years.

The Senators are correct that there is no good public policy reason to accept different broadband speeds for urban versus rural areas. Our work with rural areas, if anything, would lead us to believe that the reverse is true. Those in urban areas can still seek out a large number of customers and business partners the old fashioned way, in person. To succeed in the global economy today, those living in rural areas need higher speeds to connect with people far away.

Although the Senators brought together the FCC’s Open Internet policy and broadband assessment to criticize the FCC, there is an interplay between net neutrality (the FCC’s Open Internet) and broadband which goes beyond the FCC’s contradictory statements. Simply, if the bandwidth is sufficient, then there would be less reason to throttle any consumer or content provider — and thus less reason for concern about how Internet service providers could be hurt by Open Internet requirements.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/139911411379/broadband-and-an-open-internet]

A National Library Service

Readers of this blog are aware that libraries have continued to adapt to our evolving digital era. Most provide electronic materials, teach people about digital literacy, help overcome the digital divide by making Wi-Fi and even devices freely available, etc.  Some even support platforms for self-publishing.

But libraries have yet to collectively take advantage of the Internet as a national (even global) network that connects them all. This is especially true for reference librarians.

This is not really a new idea. There are a few examples of collaboration within state borders, such as Florida’s Ask A Librarian.


More than ten years ago, going beyond one state to the whole nation, the Library of Congress helped create the QuestionPoint service to provide

“libraries with access to a growing collaborative network of reference librarians in the United States and around the world. Library patrons can submit questions at any time of the day or night through their library’s Web site. The questions will be answered online by qualified library staff from the patron’s own library or may be forwarded to a participating library around the world.”

Although it’s now part of OCLC, QuestionPoint certainly hasn’t grown in use as the Internet has. Indeed, the movement for collaboration among libraries seems to have peaked perhaps ten years ago. This is despite the fact that the demand for these services has increased, while the tools to meet that demand have become less expensive. The tools for collaboration – from social media to videoconferencing – have vastly improved and become more common recently.

In a world where everyone is drowning in a sea of information, reference librarians have a unique and valuable role as guides – the captains of the pilot boats on that sea. However, without collaboration, every library would somehow have to have reference librarians on staff who can quickly be expert on all matters. That’s clearly impossible and no library can do the job adequately all alone.

But, in its last report on employment, the American Library Association reported that the USA has 70,000+ paid librarians and 150,000+ other library staff. Imagine the impact if they worked together and collaborated, each person specializing in some – but not all – subjects. Each of these specialist reference librarians, networked together, would be available to patrons everywhere in the country.

In this way, collaboration through the Internet would enable each library to:

  • Promote economies of scale, both becoming more cost-effective and more valuable
  • Broaden the library’s resource reach to better serve its local residents

Maureen
Sullivan, Past President of the American Library Association and my
colleague in the Aspen Institute’s working group on libraries, has
stated the situation clearly:

“With a nationally networked
platform, library and other leaders will also have more capacity to
think about the work they can do at the national level that so many
libraries have been so effective at doing at the state and local levels”

Jim
Neal, former head of the Columbia University Libraries, current member
of the board of the Metropolitan New York Library Council and hopefully
the next President of the American Library Association, wrote an article
four years ago whose message was clear: “Advancing From Kumbaya to Radical Collaboration: Redefining the Future Research Library”. While his focus was on research libraries, his call for radical collaboration should be heard by all libraries.

In a similar vein, Ronna C. Memer of the San Jose (California) Public
Library, reflecting on her 25 years of librarianship, wrote in a 2011
issue of the Collaborative Librarianship journal:

“Although some
collaborative efforts have recently been curtailed due to rising costs,
it seems that more rather than less collaboration would be most
cost-effective to library systems in coming years. As distinctions
between types of library services (e.g. online vs. face-to-face)
diminish, so too do some distinctions between types of library systems
(e.g. academic vs. public) as well as between library systems and other
institutions such as museums. Libraries, their staff and their patrons
all benefit from creative sharing of library resources and services.”

Other
fields of endeavor have created national networks. For example, the
National Agricultural and Rural Development Policy (NARDeP) Center is a
“flexible national network of scientists and analysts ready to quickly
meet the needs of local, state, and federal policy makers.”

Certainly, all libraries – networked together – can do the same thing for the residents of their communities.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/139429516062/a-national-library-service]

The New Urban Exodus

During the last half of the last century, there was much concern
about people leaving cities. Several books explored the phenomenon,
including one titled “Urban Exodus”, a phrase that became almost a rallying cry for urbanists and urban planners.

Many cities – even the largest ones – lost population as people moved out to the suburbs and even further to exurbs.

Then
over the last decade or two, with a general decrease in crime and
the arrival of both immigrants and young people who grew up elsewhere,
many cities – although certainly not all – were re-populated and turned
around. We read many stories about how these new arrivals are bringing a
new vibrancy to urban areas. And we frequently hear about the various
predictions of the continuing urbanization of the world’s population.

Of course, rural life continues to have its attractive qualities for some people. So, about two years ago, I asked “Will The Best & Brightest Return To The Countryside?”
That blog post even had a reference to a New York Times story about
older folks returning to rural life after business careers elsewhere – “A Second Career, Happily in the Weeds”.

A necessary youth movement among farmers has also developed. In western Canada, there is the Young Agrarians
whose motto is “growing the next generation of farmers and food lovers
in Canada”. On the other side of the continent, the Virtual Grange has
run young farmers conferences.

Even more recently, with the
diffusion of information and communications technology, a new urban
exodus has begun, with a more significant twist. This exodus
is not the movement to the suburbs of middle-class families whose
breadwinners work for large corporations. Rather, it is creative
folks, artists, and others from cities going past the suburbs to live
in rural areas where they can practice their craft and/or become
farmers.

The magazine, Modern Farmer, was established to serve
this group of people – in its own special hipster way that would
otherwise be associated with parts of San Francisco or Brooklyn. In a
recent issue, the magazine contained an article “At Home with Jacob and Alissa Hessler of Urban Exodus”.

image

The appropriately named Urban Exodus website describes its founders and cohorts this way:

“The
new age of back to landers. Urban Exodus gives an intimate glimpse into
the spaces and lives of creative urbanites who chose to leave the
concrete jungle for greener pastures. In addition to the idyllic imagery
of rustic farmhouses, working studios and cabins nestled in the woods,
are interviews detailing their journey. These interviews highlight the
triumphs and the struggles they have experienced and the inspirations
they have found since choosing to live a life away from the urban
existences they once knew.”

It’s filled with stories about people like those pictured here:

image

Nor is this just a North American phenomenon. William van den Broek wrote about the same situation in France:

“Many
cities of the world are facing an unexpected phenomenon: urban exodus.
No longer constrained by a localized workspace, an increasing number of
freelancers are enjoying mobility, and ultimately leaving stressful and
polluted cities. After the rural exodus, following the industrial
revolution, are we now facing a digital urban exodus? Perhaps this
movement is now following the digital revolution.

Any social
trend is complex, especially in the world today. So, for some people,
it’s not a matter of taking leave from the city, but living in both the
city and the country – a combination that technology also makes
possible.  I mentioned Brooklyn before as one of the centers of the
young creatives on the East Coast. Among those splitting their time in
city and country are young Brooklyn families who maintain residences
both in Brooklyn’s urban core and more than a hundred miles away in the
rural parts of the Hudson Valley and the Catskill Mountains.

Unfortunately,
rural areas on the wrong side of the digital divide will not be very
inviting to this potential influx of sophisticated folks because those
areas lack the required connectivity to the rest of the world.
Nevertheless, with the creative and idealistic people behind it, this
new urban exodus is very much worth watching.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/139052799904/the-new-urban-exodus]

Getting Us Closer?

When we look at the adoption of new technologies, there often seem to
be two simultaneous divergent trends. The innovators and early adopters
push the technology forward, making significant progress every year.
The laggards still find many reasons not to use the technology.

The current state of videoconferencing provides a very strong example of this divergence.

While
videoconferencing has been steadily increasing in the corporate world,
it hasn’t really taken off. Each year, we see new predictions that the
coming year is the one in which videoconferencing will finally become unavoidable.

The obstacles to widespread adoption of videoconferencing in the past included:

  • Cost – which has decreased dramatically over the last few years
  • Quality – the need for high bandwidth and low latency on both sides of the conversation, which improves as bandwidth generally increases
  • Sunk costs that make people wary of investing more money – one estimate is that more than half of businesses have outdated hardware
  • And, as always, human resistance or impediments to change of any kind.

In
recent years, consumers have tended to adopt new technologies faster
than big corporations do. But reliable data about usage of consumer
video, like Skype Video or Apple FaceTime, is not readily available.

Nevertheless, the technology is moving forward with some interesting results.

Two weeks ago, Skype celebrated ten years of video calls by offering group mobile video conferencing.

Using a through-the-screen camera and a holographic illusion, DVETelepresence
has worked to make videoconferences appear more natural to
participants. This picture is one of my favorites. You’ll notice that,
to enhance the illusion, they even place the office plant on both sides
of the screen, as if it really sits beside the people who are remote.

image

Last week, 4Dpresence, a spinoff of DVETelepresence, announced
the availability of their “holographic town hall” for political
candidates and issues. Taking a page from India’s Prime Minister, who
used videoconferences to appear all over that country during its last
election, this company is offering to host candidates who can appear as
if they are live holograms and interact with audiences. The company
claims:

“In live venues, the patented holographic augmented
reality podium is so bright the candidates appear more compelling than
actually being there in person. The candidates and citizens engage each
other naturally as if they are together in person.”

You can see a video on their website.

Personify
offers what they call “Video Conversation, With a Hint of
Teleportation”. The idea is to eliminate the background that an Intel
RealSense 3D camera or a PrimeSense Carmine 3D camera is
picking up, so that you and the people you’re talking to all seem to
share the same virtual space.

Another version of teleportation for videoconferencing was featured a few months ago in a Wall St. Journal article titled “The Future of Remote Work Feels Like Teleportation: Virtual-reality headsets, 3-D cameras help make videoconferencing immersive”. As its author wrote:

“I
have experienced the future of remote work, and it feels a lot like
teleportation. Whether I was in a conference room studded with monitors,
on a video-chat system that leverages 3-D cameras, or strapped into a
virtual-reality headset inhabiting the body of a robot, I kept having
the same feeling over and over again: I was there — where collaboration
needed to happen.”  

The article focused especially on the use of virtual reality gear to achieve this effect. There is DORA from the University of Pennsylvania, in which a person uses the VR headset to see through the eyes of a mobile robot.

This month’s MIT Technology Review also highlighted the use of Microsoft’s Room Alive in an article
titled “Can Augmented Reality Make Remote Communication Feel More
Intimate? A Microsoft Research study uses augmented reality to project a
life-size person into a room with you, perching them in an empty seat.”

image

Eventually, as the technology gets ever more interesting and
intimate, some fraction of the laggards may finally adopt the new
technology. Although as Max Planck noted about scientific progress, the
adoption pattern may just be generational: “A new scientific truth does
not triumph by convincing its opponents and making them see the light,
but rather because its opponents eventually die, and a new generation
grows up that is familiar with it.”

So it’s interesting that “47% of US teens use video chat including Skype, Oovoo, Facetime and Omegle.”

In
the meantime, the early adopters are getting all the economic and
intellectual benefits that can only occur with the full communication
that videoconferencing provides and texting/emails don’t. These people
are literally seeing the real potential of global Internet
communications and will likely reap the economic gains from realizing
that potential.

image

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/138040698302/getting-us-closer]

Talk To Anyone In Any Language?

It’s been clear for some time that the Internet can connect everyone
around the globe – in theory. This opens up tremendous potential for
collaboration, mutual economic growth, education and a variety of other
benefits. We’ve seen many of those benefits, but we have still barely
scratched the surface.

One reason the true potential of a
globally connected world hasn’t yet been realized is that many people
still can’t understand each other when they connect – they don’t speak
the same language.

So it has been interesting to me to see the recent
improvements in real-time translation on the Internet. I’m not talking
about the translation of text, which has been around for a couple of years
through, for example, Google Translate for websites or even the very
useful app Word Lens, which I have used in my travels when I had to read
foreign signs.

No, the new improvements are in speech – taking
speech in one language and quickly, correctly converting it
into another. Although text translation is not easy, speech
introduces much greater challenges.
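These services differ in their details, but real-time speech translation is generally built as a chain of three stages – speech recognition, text translation, and speech synthesis – and an error early in the chain compounds downstream, which is one reason speech is so much harder than text. Here is a toy sketch of that pipeline; the stage functions are hypothetical stand-ins, not any vendor’s actual API:

```python
# A toy sketch of the classic three-stage speech-translation pipeline:
# speech recognition -> text translation -> speech synthesis.

def recognize_speech(audio, lang):
    # Stand-in ASR: for illustration, our "audio" is already a transcript.
    return audio

def translate_text(text, src, dst):
    # Stand-in MT: a tiny phrase table instead of a real translation model.
    phrases = {("en", "es"): {"hello": "hola", "thank you": "gracias"}}
    return phrases[(src, dst)].get(text.lower(), text)

def synthesize_speech(text, lang):
    # Stand-in TTS: tag the text instead of producing actual audio.
    return f"[{lang} audio] {text}"

def speech_to_speech(audio, src="en", dst="es"):
    text = recognize_speech(audio, src)          # stage 1: speech -> text
    translated = translate_text(text, src, dst)  # stage 2: text -> text
    return synthesize_speech(translated, dst)    # stage 3: text -> speech

result = speech_to_speech("hello")
```

In a real system each stage is a hard problem in itself – a misheard word in stage 1 gets mistranslated in stage 2 and spoken with confidence in stage 3 – which is why usable end-to-end quality is such a milestone.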

These new real-time voice
translation services and devices aren’t perfect, but they’ve improved
enough that they are usable. And that usability will begin to make all
the difference.

Last year, Google took its Translate app into speech. You can see a quick video example here. Google claims it can handle 90 of the world’s languages.

Then,
more recently, Skype made its Translator generally available, although
it’s clearly still in a sort of test mode. For English, Spanish, French,
German, Italian and Mandarin, Skype describes its capabilities quite simply:

“You
can call almost anyone who has Skype. It will translate your
conversation into another language in near real-time. What someone else
says is translated back in your language. An on-screen transcript of
your call is displayed.”

image

They have a charming video of school children in the US and Mexico talking to each other somewhat awkwardly.

There’s another video, titled “Speak Chinese Like A Local” with an American photojournalist in China arranging a tour for himself.

This translation work hasn’t only been done in the US. The Japanese have also been busy at this task, in their own way.

While not using the Internet, a Panasonic translator – in the form of a smart megaphone – will be tested at Narita Airport to translate between Japanese, Chinese, Korean and English.

Then there’s the “ili”,
a portable device (also not connected to the Internet) which translates
between Japanese, Chinese and English. The company describes it as “the
world’s first wearable translator for travelers”. They’ve posted a
video at https://www.youtube.com/watch?v=B6ngM0LHxuU. The video is a strange combination of cute and creepy, but it gets the point across.

These developments have led some stories
to proclaim the arrival of the universal translator of Star Trek. But
as Trekkie experts say, unlike the one in Star Trek, this doesn’t read brains, which may have been a necessity to communicate with non-human species.  

On
the other hand, if you only want to talk with other people, the new
language translators are pretty good substitutes 😉 With more use, they
can only get better, faster, all the while helping to improve
understanding between people around the world.

image

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/137156470627/talk-to-anyone-in-any-language]

Urban-Rural Interdependency

Much of the discussion about economic growth and the availability of
broadband assumes there is a vast gulf between rural and urban areas.
I’ve written before about how, in some ways, trends in this century seem to be leading to something of a convergence of rural and urban areas.

So
I thought it especially interesting that the NTCA–The Rural Broadband
Association yesterday hosted a policy meeting in the US Capitol that was
titled: “Beyond Rural Walls: Identifying Impacts and Interdependencies
Among Rural and Urban Spaces”.

image

I was there for the panel
discussion, along with Professor Sharon Strover of the College of
Communication at University of Texas in Austin and Professor Charles
Fluharty of the Department of Health Management and Policy at the
University of Iowa (who is also the CEO of the Rural Policy Research
Institute).

We covered the changing demographics and ambiguities
in the boundaries between urban and rural, broadband deployment and
adoption, and how to measure both the interdependencies between these
areas as well as the impact of broadband communications. Perhaps there
were too many knotty issues for one morning!

Since the NTCA will be making available further information about this, I’m now just going to highlight my own observations.

There
are many examples of rural communities using broadband in innovative
and intelligent ways. One example is the work of the counties in
Appalachian Kentucky, one of the poorest parts of the US.

But most
of these communities don’t know about each other, which means that each
has to re-invent the wheel instead of learning from others’ experience
and experiments. That’s one reason ICF is planning a global virtual
summit for these communities.

The limited distribution of this
news also encourages major national/global philanthropic foundations to
give up hope for rural areas in the US. Dr. Fluharty noted that less
than five percent of philanthropy goes to American rural areas, although
twenty percent of the population lives there.

He also emphasized
that doing something about rural broadband and development is a national
issue, not something to be merely dealt with locally. He even
classified it as a national security issue because the countryside holds
so much of the country’s critical resources – our food, not the least.

The
problem is that for many national leaders, especially members of
Congress, the mental image of the countryside is of past decline and
abandonment. The national media reinforce that image. So they may feel
it’s a hopeless problem and/or have no idea what might be happening that
ought to be encouraged.

Many of our current national leaders also
have forgotten the common understanding of the founders of the USA that
a large country would only succeed if it was brought together. That’s
why building postal roads is one of the few specific responsibilities
given to Congress in the Constitution. It’s why the Erie Canal was
built and the Land Grant colleges were established. We seem to have forgotten what led
to our success. In this century, physical roads aren’t enough. Digital
communications are just as important.

Of course, not all public
officials are oblivious. There was a keynote by Lisa Mensah, Under
Secretary for Rural Development of the US Department of Agriculture.

Representative
Bill Johnson (Republican of Ohio’s 6th District) opened the conference
with a statement about the importance of rural broadband for urban
economies. Senator Al Franken of Minnesota closed the conference by
saying he viewed rural broadband in the same way people viewed rural
electrification decades ago – a basic necessity and common right of the
American people. Or, as he said “A no-brainer”.

Along with these
misperceptions on the part of media, national officials and foundations
is the failure to recognize the increasing integration of rural and
urban areas. The boundaries are getting fuzzy.

Even residence is
no longer clear. There are an increasing number of people – especially
knowledge workers and creative folks – who may spend 3-4 days a week in a
city and 3-4 days a week in the countryside. They may contact you, via
broadband Internet, and you won’t know which location they’re in. Are
they rural residents or urban residents or is that an increasingly
meaningless question?

Finally, in the question-and-answer part of
the conference, one of the many operators of rural communications
companies there pointed out that they know how to deploy broadband and
run it, but that their communities need help figuring out what to do
with it. Of course, that provided me an opportunity to discuss ICF’s
accelerator program and workshops that help community leaders do exactly
that.

image

© 2015 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/133528658560/urban-rural-interdependency]

The Virtual City-State Of All Ohio?

Last week, I attended the conference that launched the new Global Institute for the Study of the Intelligent Community, based in Dublin, Ohio.

image

In the annual evaluation by the Intelligent Community Forum (ICF),
Dublin has been among the most intelligent communities in the world for
the last few years. Nearby Columbus, Ohio was designated the most
intelligent community this past June.

The institute will share
innovations and best practices to help make communities more prosperous,
livable, resilient and intelligent. Although using broadband and
technology is a part of the story, the institute is part of the ICF
movement, which has distinguished itself by its emphasis on all
the other factors that also make a community intelligent. As such, the effort
to become an intelligent community involves all elements of a
community, not just technologists. Much of the discussion encouraged
leaders from Dublin, Columbus and other places in Ohio to think about
what a successful intelligent community means and how to measure it.

Dana
McDaniel, who had been in charge of Dublin’s economic development
strategy and is now its city manager, organized and led the conference.
In the moments when people had a chance to outline their longer-term
vision, he offered an intriguing thought: he wants to unify Ohio and
treat it as the first intelligent community that encompasses a whole state.

This
reminded me of my work on technology-based economic development in
Massachusetts a few years ago. Massachusetts’ problem was that the
Boston/Cambridge area of the state was its primary economic engine, but
that the rest of the state, especially the central region, had suffered
economically.

Several states have a similar situation with only
one truly prosperous region. New York, Illinois, Colorado and Washington
are reasonably good examples of the problem.

So we came up with a
plan that would use broadband connectivity to link the rest of
Massachusetts to the Boston area. We knew this might be fraught with
political objections from other parts of the state not wanting to lose
their identity by being considered virtual suburbs of Boston.

Instead,
we were trying to find a way to link together the whole state. This
would not only provide resources and potential financing from Boston for
those elsewhere, but just as important it could provide people in
Boston with new entrepreneurial ideas that could only flourish in areas
with a different business atmosphere.

While it certainly has
pockets of relative affluence and poverty, Ohio is actually not one of
those states with a single economic engine. Despite that – or maybe
because of that – the idea of weaving together all the communities in a
state is germinating there.

By contrast, some states that do have
the problem of a concentration of prosperity seem to be going in the
opposite direction – splitting themselves into non-cooperating regions,
thus diminishing the state’s overall impact and putting every region in a
weaker competitive position.

I’ve noted before that
communications technology today makes possible a virtual metropolis
created through the linked combination of rural areas.

Ohio’s
variation on the theme is also an interesting development to watch. It
may well position Ohio as the forerunner for economic growth for the
rest of the USA – a 21st century virtual version of the economically
dominant city-states of the European Renaissance.

image

© 2015 Norman Jacknis, All Rights Reserved
[http://njacknis.tumblr.com/post/132017256455/the-virtual-city-state-of-all-ohio]