Leveling The Playing Field?

This past week, the COVID-postponed Intelligent Community Forum Annual Summit got underway – now virtual and spread over two weeks.  As usual, in my role as Senior Fellow at ICF, I made a presentation yesterday and led a workshop on “Bringing Broadband To Your Community”.

I have previously reported on what is happening in cities this year. In the face of COVID-inspired video conferencing and the departure from offices, and from some previously popular cities, the question arises again – can we level the playing field between the biggest metropolises and the many parts of the US that have not had broadband?

Many communities now recognize that they will be completely left out of a post-COVID economy.  They are hoping that some outside organization – a benevolent telecommunications company or some government agency – will come in and make the necessary investment so that their community has the broadband it needs.

Considering how many politicians have included broadband as a basic part of our infrastructure, it is possible that the government will at least provide a lot of funding next year.  But it is worth noting that talk about government investment in broadband is not new, and not all that much has happened in the past.

So in my presentation at the ICF summit, I drew attention to some examples of communities that just went ahead and built this for themselves.  You may have already heard of Chattanooga, Tennessee and Lafayette, Louisiana, both of which deployed broadband through their electric utilities that are owned by the city government.

But here I want to give some credit to two examples that are not so well known.  The first is in a poorly served urban community in San Francisco.  The second is in a rural area that had expected to be the last to get broadband in England.

Although San Francisco bills itself as the high-tech capital of the world, the reality is that 100,000 of its residents (1 in 8) do not have a high-speed Internet connection at home.  This situation, by the way, is not unique to San Francisco.  Many otherwise well-connected cities have vast areas without affordable broadband – not quite Internet deserts, but with the Internet effectively out of reach to low-income residents for technical or financial reasons.

So in conjunction with an urban wireless Internet provider, Monkeybrains (great name!), the city government rolled out its Fiber to Housing initiative last year.  According to a November 2019 report, “Can San Francisco Finally Close its Digital Divide?”, they had already brought free, high-speed internet to more than 1,500 low-income families in 13 housing communities – public housing.  By this past summer, the number had increased to 3,500 families.  While there is still a long way to go, the competition has already forced traditional Internet service providers to step up their game as well.

In a very different community in rural England, there is a related story, except this region, unlike San Francisco, is the last place you would expect to find broadband.  In the northwest corner of England, surrounding the not-so-big city of Lancaster (population around 50,000), a non-profit community benefit society was created to provide broadband for the rural north.  It is called B4RN.

As they proclaim on their website, they offer “The World’s Fastest Rural Broadband [with] Gigabit full fibre broadband costing households just £30/month”.  As of the middle of last year, they had more than 6,000 fully connected rural households.

In speaking with Barry Forde, CEO of B4RN, I learned a part of the story that should resonate with many others.  The community leaders who wanted to bring broadband to their area tried to explain to local farmers the process of building out a fiber network.  They noted that the technology costs of these networks are often dwarfed by the construction costs of digging in the ground to lay the fiber. The farmers then responded that digging holes was something they could do easily – they already had the equipment to dig holes for their farming!  With that repurposing of equipment, the project could move much more quickly and less expensively.

I can’t go into the whole story here, but this video gives a good summary of the vision and practical leadership that has made B4RN a success.

Frankly, if B4RN can do it, any community can do it.  Whether it’s in one of the most costly cities or in the remote countryside, a little creativity and community cooperation can make broadband possible.

And the choice need not be between gigabit everywhere from the start and nothing at all.  Build what you can, get people to use it, and the demand will grow to support upgrades.  An intelligent community grows step by step this way.

These were the important lessons of the ICF Summit yesterday.

© 2020 Norman Jacknis, All Rights Reserved

Going Full Uber

Today, something a little different, but not too different — it’s about one of the public policy implications of an important change in the economy that technology has enabled.

As we all know, the freelance and gig economy has been growing. According to a report this year from Upwork and the Freelancers Union, more than a third of the workforce is freelancing. Many of us make at least part of our living in the gig economy and most of the rest of us depend at least part of the time on people who are gig workers.

In California, there has been a movement to apply to gig workers some of the protections that were put in place for the fast-growing number of American industrial workers 80 to 100 years ago — minimum wage, a fixed work week, unemployment insurance, assistance due to workplace accidents and the like.

In response to California’s law requiring Uber and Lyft to reclassify their contractors as employees who are provided with employee benefits, Uber proposed its own reform plan for the gig economy. Dara Khosrowshahi, Uber’s CEO, wrote an op-ed in the New York Times on August 10, 2020, titled “I Am the C.E.O. of Uber. Gig Workers Deserve Better. Gig workers want both flexibility and benefits — we support laws that could make that possible.”

In it, he proposed:

“that gig economy companies be required to establish benefits funds which give workers cash that they can use for the benefits they want, like health insurance or paid time off. Independent workers in any state that passes this law could take money out for every hour of work they put in. All gig companies would be required to participate, so that workers can build up benefits even if they switch between apps.”

The New York Times columnist Shira Ovide followed up with a story titled “Uber’s Next Idea: A New Labor Law … Uber’s ‘third way’ would offer its drivers flexibility plus some benefits. It’s not totally crazy.” Hmm, not totally crazy? That doesn’t sound like an endorsement, but it’s also not dismissive. Something has to be done to equalize protections for gig workers with those of employees, while giving them the flexibility that Uber advocates.

In line with their approach, Uber and similar companies are supporting California’s Proposition 22 on the ballot this November to get them out from under the State government’s push to treat their drivers as employees. Not surprisingly, many progressive and labor groups oppose Prop 22. This picture illustrates the concerns of the opponents:

But there is a larger question here beyond benefits and rights for gig workers because the change in the nature of employee-employer relationships has been as significant as the growth of the gig economy. With increasing automation and more coming with AI, de-unionization and frequent layoffs among other trends, frankly, a job is not what it used to be. Moreover, the situation is not likely to improve since the long-term loyalty between employer and employee that was common decades ago is generally rare now.

It’s time to realize that the economy – not just for freelancers and gig workers – has changed a lot since the Progressive and New Deal reaction to the excesses of corporations a hundred years ago. The gig rights debate seems to be too limited and too much based on last century thinking which is increasingly inappropriate for our technology-based economy. 

Putting aside the limitations of Proposition 22, why not take the general proposal for gig contractors that Khosrowshahi described in his NY Times piece and expand it?

Why not go full Uber! (Something Uber itself may not like, after all.)

What does that mean? Gig workers need a better contract and so do “employees”.

Any individual — whatever the label — who is providing a service to a company would have a contract with that company which clearly states adherence to government laws and regulations on:

  • minimum payment per hour;
  • extra payment for more than a certain number of hours of work per week;
  • expenses incurred performing duties on behalf of the company;
  • safety and discrimination;
  • normal workers’ compensation for accidents that occur while working on behalf of the company;
  • and the right to form any association (union) they wish.

Khosrowshahi emphasizes the freedom and control over their lives that gig workers have. OK, maybe it is time to give employees that same freedom.

That brings up the other current disparities between gig workers and employees, especially health insurance, sick/family/vacation leave and unemployment insurance which are tied to employment status. Gig/freelance workers need this as well, but it is also time to disassociate these benefits from the companies where people work — all in the cause of the freedom that Khosrowshahi promotes.

For example, the money companies used to spend on health insurance premiums and the like would now be paid directly to the employees. The employees would get their own health insurance and not be limited to the insurance plans their company has pre-selected. Government options could also be offered for health insurance. (Similarly, gig or freelance workers could have those premiums built into their contracts, at a minimum in proportion to the percentage of a full work week that they devote to the company.)
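The prorating idea above is just arithmetic, so here is a minimal Python sketch of it. The 40-hour week and the dollar figure are hypothetical numbers chosen purely for illustration, not anything proposed by Uber or in the op-ed:

```python
# Hypothetical illustration: pro-rate a full-time weekly benefits
# contribution by the share of a full work week a gig worker devotes
# to one company. The 40-hour week and $200 premium are made-up numbers.

FULL_WEEK_HOURS = 40.0

def prorated_benefit(full_time_weekly_cost: float, hours_worked: float) -> float:
    """Scale a full-time weekly benefits cost by hours actually worked."""
    share = min(hours_worked / FULL_WEEK_HOURS, 1.0)  # cap at a full week
    return full_time_weekly_cost * share

# A driver who puts in 10 hours accrues a quarter of the full-time amount.
print(prorated_benefit(200.0, 10.0))  # 50.0
```

This is the same spirit as Khosrowshahi’s proposal, under which workers could take money out of a benefits fund “for every hour of work they put in”, across all the apps they work for.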

In this way, there would be no windfall for corporations once they are relieved of paying benefits to employees. The shift can be done in a revenue- and cost-neutral way, leaving employees, companies and governments financially where they were before the shift.

Providing protections for everyone who works for someone else, whether on a gig/freelance basis or “permanently”, will help everyone get more freedom from the fear of economic dislocation. They will also finally have the freedom to pursue their entrepreneurial dreams, which could help grow the economy more than locking them into jobs that don’t fulfill their potential.

Finally, governments will, in the process, have to adjust their understanding of the nature of work in this century, which is no longer what it was when most current laws and policies were put in place.

© 2020 Norman Jacknis, All Rights Reserved

Are You Looking At The Wrong Part Of The Problem?

In business, we are frequently told that to build a successful company we have to find an answer to the customer’s problem. In government, the equivalent guidance to public officials is to solve the problems faced by constituents. This is good guidance, as far as it goes, except that we need to know what the problem really is before we can solve it.

Before those of us who are results-oriented problem solvers jump into action, we need to make sure that we are looking at the right part of the problem. And that’s what Dan Heath’s new book, “Upstream: The Quest To Solve Problems Before They Happen”, is all about.

Heath, along with his brother Chip, has brought us such useful books as “Made To Stick: Why Some Ideas Survive and Others Die” and “Switch: How to Change Things When Change Is Hard”.

As usual for a Heath book, it is well written and down to earth, but contains important concepts and research underneath the accessible writing.

He starts with a horrendous, if memorable, story about kids:

You and a friend are having a picnic by the side of a river. Suddenly you hear a shout from the direction of the water — a child is drowning. Without thinking, you both dive in, grab the child, and swim to shore. Before you can recover, you hear another child cry for help. You and your friend jump back in the river to rescue her as well. Then another struggling child drifts into sight…and another…and another. The two of you can barely keep up. Suddenly, you see your friend wading out of the water, seeming to leave you alone. “Where are you going?” you demand. Your friend answers, “I’m going upstream to tackle the guy who’s throwing all these kids in the water.”

Going upstream is necessary to solve the problem at its origin — hence the name of the book. The examples in the book range from important public, governmental problems to the problems of mid-sized businesses. While the most dramatic examples are about saving lives, the book is also useful for the less dramatic situations in business.

Heath’s theme is strongly, but politely, stated:

“So often we find ourselves reacting to problems, putting out fires, dealing with emergencies. We should shift our attention to preventing them.”

This reminds me of a less delicate reaction to this advice: “When you’re up to your waist in alligators, it’s hard to find time to drain the swamp.” And I often told my staff that unless you took some time to start draining the swamp, you were always going to be up to your waist in alligators.

He elaborates and then asks a big question:

We put out fires. We deal with emergencies. We stay downstream, handling one problem after another, but we never make our way upstream to fix the systems that caused the problems. Firefighters extinguish flames in burning buildings, doctors treat patients with chronic illnesses, and call-center reps address customer complaints. But many fires, chronic illnesses, and customer complaints are preventable. So why do our efforts skew so heavily toward reaction rather than prevention?

His answer is that, in part, organizations have been designed to react — what I called some time ago the “inbox-outbox” view of a job. Get a problem, solve it, and then move to the next problem in the inbox.

Heath identifies three causes that lead people to focus downstream, not upstream where the real problem is.

  • Problem Blindness — “I don’t see the problem.”
  • A Lack of Ownership — “The problem isn’t mine to fix.”
  • Tunneling — “I can’t deal with the problem right now.”

In turn, these three primary causes lead to and are reinforced by a fatalistic attitude that bad things will happen and there is nothing you can do about that.

Ironically, success in fixing a problem downstream is often a mark of heroic achievement. Perhaps for that reason, people will jump in to own the emergency downstream, but there are fewer owners of the problem upstream.

…reactive efforts succeed when problems happen and they’re fixed. Preventive efforts succeed when nothing happens. Those who prevent problems get less recognition than those who “save the day” when the problem explodes in everyone’s faces.

Consider the all too common current retrospective on the Y2K problem. Since the problem didn’t turn out to be the disaster it could have been at the turn of the year 2000, some people have decided it wasn’t real after all. It was, but the issue was dealt with upstream by massive correction and replacement of out-of-date software.

Heath realizes that it is not simple for a leader with an upstream orientation to solve the problem there, rather than wait for the disaster downstream.

He asks leaders to first think about seven questions, which he explores through many cases:

  • How will you get early warning of the problem?
  • How will you unite the right people to assess and solve the problem?
  • Where can you find a point of leverage?
  • Who will pay for what does not happen?
  • How will you change the system?
  • How will you know you’re succeeding?
  • How will you avoid doing harm?

Some of these questions, and an understanding of what the upstream problem really is, can start to be answered by the intelligent use of analytics. That, too, only complicates the issue for leaders, since an instinctive heroic reaction is much sexier than contemplating machine learning models, and sexy usually beats out wisdom 🙂

Eventually Heath makes the argument that not only do we often focus on the wrong end of the problem, but that we think about the problem too simplistically. At that point in his argument, he introduces the necessity of systems thinking because, especially upstream, you may find a set of interrelated factors and not a simple one-way stream.

[To be continued in the next post.]

© 2020 Norman Jacknis, All Rights Reserved

Technology and Trust

A couple of weeks ago, along with the Intelligent Community Forum (ICF) co-founder, Robert Bell, I had the opportunity to be in a two-day discussion with the leaders of Tallinn, Estonia — via Zoom, of course. As part of ICF’s annual selection process for the most intelligent community of the year, the focus was on how and why they became an intelligent community.

They are doing many interesting things with technology both for e-government as well as more generally for the quality of life of their residents. One of their accomplishments, in particular, has laid the foundation for a few others — the strong digital identities (and associated digital signatures) that the Estonian government provides to their citizens. Among other things, this enables paperless city government transactions and interactions, online elections, COVID contact warnings along with protection/tracking of the use of personal data.

Most of the rest of the world, including the US, does not have strong, government-issued digital identities. The substitutes for that don’t come close — showing a driver’s license at a store in the US or using some third party logon.

Digital identities have also enabled an E-Residency program for non-Estonians, now used by more than 70,000 people around the world.

As they describe it, in this “new digital nation … E-Residency enables digital entrepreneurs to start and manage an EU-based company online … [with] a government-issued digital identity and status that provides access to Estonia’s transparent digital business environment”

This has also encouraged local economic growth because, as they say, “E-Residency allows digital entrepreneurs to manage business from anywhere, entirely online … to choose from a variety of trusted service providers that offer easy solutions for remote business administration.” The Tallinn city leaders also attribute the strength of a local innovation and startup ecosystem to this gathering of talent from around the world.

All this would be a great story, unusual in practice although not unheard of in discussions among technologists — including this one. Yet as impressive as that is, it was not what stood out most strongly in the discussion. What stood out was Tallinn’s unconventional perspective on the important issue of trust.

Trust among people is a well-known foundation for society and government in general. It is also essential for those who wish to lead change, especially the kind of changes that result from the innovations we are creating in this century.

I often hear various solutions to the problem of establishing trust through the use of better technology — in other words, the belief that technology can build trust.

In Tallinn’s successful experience with technology, cause and effect run in the opposite direction. In Tallinn, successful technology is built on a trust among people that existed beforehand and is continually maintained regardless of technology.

While well-thought-out technology can also enhance trust to an extent, in Tallinn, trust comes first.

This is an important lesson to keep in mind for technologists who are going about changing the world and for government leaders who look on technology as some kind of magic wand.

More than once in our discussions, Tallinn’s leaders restated an old idea that preceded the birth of computers: few things are harder to earn and easier to lose than trust.

© 2020 Norman Jacknis, All Rights Reserved

Bitcoin & The New Freedom Of Monetary Policy

Every developing technology has the potential for unintended consequences.  Blockchain technology is an example.  Although there are many possible uses of blockchain as a generally trusted and useful distributed approach to storing data, its most visible application has been virtual or crypto-currencies, such as Bitcoin, Ethereum and Litecoin. These once-obscure crypto-currencies are on a collision course with another trend that in its own way is based on technology — mostly digital government-issued money.

In particular, another once-obscure idea about government money is also moving into the mainstream — modern monetary theory (MMT), which I mentioned a few weeks ago in my reference to Stephanie Kelton’s new book, “The Deficit Myth”. In doing a bit of follow-up on the subject, I came across many articles that were critical of MMT. Some were from mainstream economists. Many more were from advocates of crypto-currencies, especially Bitcoiners.

Although I doubt that Professor Kelton would agree, many Bitcoiners feel that governments have been using MMT since the 1970s — merely printing money. They forget about the tax and policy stances that Kelton advocates.

Moreover, there is a significant difference in the attitude of public leaders when they think they are printing money versus borrowing it from large, powerful financial interests. James Carville, chief political strategist and guru for President Clinton famously said, “I used to think that if there was reincarnation, I wanted to come back as the president or the pope or as a .400 baseball hitter. But now I would like to come back as the bond market. You can intimidate everybody.”

For Bitcoiners, the battle is drawn and they do not like MMT. Here is just a sample of the headlines from the last year or so:

It is worth noting that MMT raises very challenging issues of governance. Who decides how much currency to issue? Who decides when there is too much currency? Who decides what government-issued money is spent on and to whom it goes? This is especially relevant in the US, where the central bank, the Federal Reserve, is at least in theory independent from elected leaders.

However, it also gives the government what may be a necessary tool to keep the economy moving during recessions, especially major downturns. Would a future dominated by cryptocurrencies, like Bitcoin, essentially tie the hands of the government in the face of an economic crisis? — just as the gold standard did during the Panic of 1893 and the Great Depression (until President Roosevelt suspended the convertibility of dollars into gold)?

This picture shows MMT as a faucet controlling the flow of money as the needs of the economy change. If this were a picture of Bitcoin’s role, the faucet would be almost frozen, dripping a relatively fixed amount that is determined by Bitcoin mining.

Less often discussed is that cryptocurrencies, as a practical matter, also end up needing some governance. I am not going to get into the weeds on this, but you can start with “In Defense of Szabo’s Law, For a (Mostly) Non-Legal Crypto System”. The implication is that cryptocurrencies need some kind of rules and laws enforced by some people. Sounds like at least a little bit of government to me.

Putting that aside, if Bitcoin and/or other cryptocurrencies succeed in getting widespread adoption, then it would seem that they would limit the ability of governments to encourage or discourage economic growth through the issuance of money.

Of course, some officials do not seem to worry too much. This attitude is summed up in a European Parliament report, published in 2018.

Decentralised ledger technology has enabled cryptocurrencies to become a new form of money that is privately-issued, digital and that permits peer-to-peer transactions. However, the current volume of transactions in such cryptocurrencies is still too small to make them serious contenders to replace official currencies. 

Underlying this are two factors. First, cryptocurrencies do not perform the role of money well, because their value is very volatile and they are thus not very good stores of value. Second, cryptocurrencies are managed in ways that are very primitive compared to what modern currencies require.

These shortcomings might be corrected in the future to increase the popularity and reach of cryptocurrencies. However, those that manage currencies, in other words monetary policymakers, cannot be outside any societal system of checks and balances.

For cryptocurrencies to replace official money, they would have to conform to the institutional set up that monitors and evaluates those who have the power to manage money.

They do not seem to be too worried, do they? Nevertheless, cryptocurrency might eventually derail the newfound freedom that government economic policymakers have discovered through MMT.

As we have seen in the past, new technologies can suddenly grow very fast and blindside public officials. As Roy Amara, past president of The Institute for the Future, said, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”.

© 2020 Norman Jacknis, All Rights Reserved

The Second Wave Of Capital

I have been doing research about the future impact of artificial intelligence on the economy and the rest of our lives. With that in mind, I have been reading a variety of books by economists, technologists, and others. That is why I recently read “Capital and Ideology” by Thomas Piketty, the well-known French economist and author of the best-selling (if not well-read) “Capital in the Twenty-First Century”. It contains a multi-national history of inequality, why it happened and why it has continued, mostly uninterrupted.

At more than 1100 pages, it is a tour de force of economics, history, politics and sociology. In considerable detail, for every proposition, he provides reasonable data analyses, which is why the book is so long. While there is a lot of additional detail in the book, many of the themes are not new, in part because of Piketty’s previous work.  As with his last book, much of the commentary on the new book is about income and wealth inequality.  This is obviously an important problem, although not one that I will discuss directly here.

Instead, although much of the focus of the book is on capital in the traditional sense of money and ownership of things, it was his two main observations about education – what economists call human capital – that stood out for me. The impact of a second wave and a second kind of capital is two-fold.

  1. Education And The US Economy

From the mid-nineteenth century until about a hundred years later, the American population had twice the educational level of people in Europe. And this was exactly the same period that the American economy surpassed the economies of the leading European countries. During the last several decades, the American population has fallen behind in education and this is the same time that their incomes have stagnated.  It is obviously difficult to tease out the effect of one factor like education, but clearly there is a big hint in these trends.

As Piketty writes in Chapter 11:

The key point here is that America’s educational lead would continue through much of the twentieth century. In 1900–1910, when Europeans were just reaching the point of universal primary schooling, the United States was already well on the way to generalized secondary education. In fact, rates of secondary schooling, defined as the percentage of children ages 12–17 (boys and girls) attending secondary schools, reached 30 percent in 1920, 40–50 percent in the 1930s, and nearly 80 percent in the late 1950s and early 1960s. In other words, by the end of World War II, the United States had come close to universal secondary education.

At the same time, the secondary schooling rate was just 20–30 percent in the United Kingdom and France and 40 percent in Germany. In all three countries, it is not until the 1980s that one finds secondary schooling rates of 80 percent, which the United States had achieved in the early 1960s. In Japan, by contrast, the catch-up was more rapid: the secondary schooling rate attained 60 percent in the 1950s and climbed above 80 percent in the late 1960s and early 1970s.

In the second Industrial Revolution it became essential for growing numbers of workers to be able to read and write and participate in production processes that required basic scientific knowledge, the ability to understand technical manuals, and so on.

That is how, in the period 1880–1960—first the United States and then Germany and Japan, newcomers to the international scene—gradually took the lead over the United Kingdom and France in the new industrial sectors. In the late nineteenth and early twentieth centuries, the United Kingdom and France were too confident of their lead and their superior power to take the full measure of the new educational challenge.

How did the United States, which pioneered universal access to primary and secondary education and which, until the turn of the twentieth century, was significantly more egalitarian than Europe in terms of income and wealth distribution, become the most inegalitarian country in the developed world after 1980—to the point where the very foundations of its previous success are now in danger? We will discover that the country’s educational trajectory—most notably the fact that its entry into the era of higher education was accompanied by a particularly extreme form of educational stratification—played a central role in this change.

In any case, as recently as the 1950s inequality in the United States was close to or below what one found in a country like France, while its productivity (and therefore standard of living) was twice as high. By contrast, in the 2010s, the United States has become much more inegalitarian while its lead in productivity has totally disappeared.

  2. The Political Competition Between Two Elites

By now, most Americans who follow politics understand that the Democratic Party has become the favorite of the educated elite, in addition to the votes from minority groups. This coalition completely reverses what had been true of educated voters in most of the last century, who were reliable Republican voters. In the process, the Democratic Party has lost much of its working-class base.

The Republicans have been the party of the economic elite, although since the 1970s some of the working-class have joined in, especially those reacting to increased immigration and civil rights movements.

What Piketty points out is that, in this transition, working-class and lower income people have decreased their political participation, especially voting. He thinks that is because these voters felt that the Democratic Party has been taken over by the educational elite and no longer speaks for them.

What many Americans may not have realized is that this same phenomenon has happened in other economically advanced democracies, such as the UK and France. Over the longer run, Piketty wonders whether such an electoral competition between parties both dominated by elites can be sustained – or whether the voiceless will seek violence or other undemocratic outlets for their political frustrations.

In Chapter 14, he notes that, at the same time that the USA has lost the edge arising from a better-educated population, it and the other advanced economies that have now matched or surpassed the American educational level have elevated education to a position of political power.

We come now to what is surely the most striking evolution in the long run; namely, the transformation of the party of workers into the party of the educated.

Before turning to explanations, it is important to emphasize that the reversal of the educational cleavage is a very general phenomenon. What is more, it is a complete reversal, visible at all levels of the educational hierarchy. We find exactly the same profile—the higher the level of education, the less likely the left-wing vote—in all elections in this period, in survey after survey, without exception, and regardless of the ambient political climate. Specifically, the 1956 profile is repeated in 1958, 1962, 1965, and 1967.

Not until the 1970s and 1980s does the shape of the profile begin to flatten and then gradually reverse. The new norm emerges with greater and greater clarity as we move into the 2000s and 2010s. With the end of Soviet communism and bipolar confrontations over private property, the expansion of educational opportunity, and the rise of the “Brahmin left,” the political-ideological landscape was totally transformed.

Within a few years the platforms of left-wing parties that had advocated nationalization (especially in the United Kingdom and France), much to the dismay of the self-employed, had disappeared without being replaced by any clear alternative.

A dual-elite system emerged, with, on one side, a “Brahmin left,” which attracted the votes of the highly educated, and, on the other side, a “merchant right,” which continued to win more support from both highly paid and wealthier voters.

This clearly provides some context for what we have been seeing in recent elections.  And although he is not the first to highlight this trend, the evidence that he marshals is impressive.

Considering how much there is in the book, it is not likely that anyone, including me, would agree with all of the analysis. In addition to the analysis, Piketty goes on to propose various changes in taxation and laws, which I will discuss in the context of other writers in a later blog. For now, I would only add that other economists have arrived at some of the same suggestions as Piketty, although by a very different journey from his.

For example, Daniel Susskind in “A World Without Work” is concerned that a large number of people will not be able to make a living through paid work because of artificial intelligence. The few who do get paid and those who own the robots and AI systems will become even richer as most everyone else becomes poorer. This blends with Piketty’s views and they end up in the same place – a basic citizen’s income and even a basic capital allotment to each citizen, taxation of wealth, estate taxes, and the like.

We will have much to explore about these and other policy issues arising from the byproducts of our technology revolution in this century.

© 2020 Norman Jacknis, All Rights Reserved

A Budget That Copes With Reality

Five years ago, I wrote about the possibility of dynamic budgeting.  I was reminded of this again recently after reading Stephanie Kelton’s eye-opening new book, “The Deficit Myth”.

Her argument is that, since the U.S. dropped the gold standard and fixed exchange rates, it can create as much money as it wants.  The limit is not an illusory national debt number, but inflation.  And in an economy with less than full employment, inflation is not now an issue.  Her explanation of the capacity of the Federal government to spend leads to her suggestions for a more flexible approach to dealing with major economic and social issues.

Although Dr. Kelton was formerly the staff director for the Democrats on the Senate Budget Committee, she doesn’t devote many words to the tools used in budgeting.  However, the argument she makes reminds me again that the traditional budget itself has to change, especially by shifting to a dynamic budget.

While states and localities are not in the same position as the Federal government, they also face unpredictable conditions and could benefit from a more flexible, dynamic budget.  Of course, in the face of COVID and the economic contraction, the necessity of re-allocating funds has become more obvious.

In an earlier blog, I wrote about a simple tax app that is now feasible and that also eliminates the bumps in incentives caused by our current, old-fashioned tax bracket scheme.  This would not require some untested, cutting-edge technology.  Instead, the solution could use phones, tablets and laptops doing simple calculations that these devices have done for decades.
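As a sketch of the idea (the rate curve and every number below are invented for illustration, not taken from that earlier post), a tax schedule can be written as one smooth formula, so the effective rate rises continuously with income and there is never a cliff where earning one more dollar leaves you worse off:

```python
def smooth_tax(income: float) -> float:
    """Illustrative continuous tax schedule (all numbers invented).

    The effective rate climbs smoothly from 10% toward 35% as income
    grows, so there are no bracket cliffs: the rates at $50,000 and
    $50,001 differ only infinitesimally.
    """
    low_rate, high_rate, midpoint = 0.10, 0.35, 100_000.0
    effective_rate = low_rate + (high_rate - low_rate) * income / (income + midpoint)
    return income * effective_rate
```

Any phone evaluates this instantly; the point is only that a formula, unlike a lookup table of brackets, behaves smoothly everywhere.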

Similarly, what is now well-established technology could be used to overcome the problems with traditional fixed budgeting.  (By the way, the same applies to the budgets that corporations devise.)

So, what are the problems that everyone knows exist with budgets?

  1. They’re wrong the day they are approved, since they try to predict precisely a future that cannot be known ahead of time. This error is made worse by the early deadlines in the typical budget process.  If you run a department, you are likely to be asked by the budget office to prepare estimates for what you’ll need in a period that extends as far as 18 or even 24 months into the future.
  2. It’s not clear how the estimates are derived. Typically, there are no underlying rules or models, just the addition of personnel and other basic costs that are adjusted from the last year.  This is despite the fact that some things are fairly well known.  For example, it is fairly straightforward to estimate the cost of paying unemployment to an average individual.  What is harder is to figure out how many unemployed people there will be – and, of course, you need to know the total number of unemployed and the average cost in order to compute the total amount of money needed.
  3. Given these problems, in practice during any given budget year, all kinds of exceptions and deviations occur in the face of reality. But the rest of the budget is not readjusted, although the budget staff will often hold back money that was approved as it robs Peter to pay Paul.  The process often seems, and is, very arbitrary.

Operating in the real world, of course, requires continual adjustments.  Such adjustments can best be accommodated if the traditional fixed budget were replaced by a dynamic budget at the start of the budget process.

One way of doing this is familiar to almost every reader of this blog – the spreadsheet.  The cells in spreadsheets don’t always have hard fixed numbers, like fixed budgets.  Instead many of those spreadsheets have formulas.
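To make that concrete with the unemployment example (all numbers are invented for illustration), a budget line written as a formula rather than a fixed figure might look like this.  Re-running the formula with a fresh forecast re-derives the line item, instead of freezing a guess made 18 or 24 months earlier:

```python
def unemployment_line_item(expected_unemployed: int,
                           avg_monthly_benefit: float,
                           months: int = 12) -> float:
    """A budget 'cell' as a formula: people x average benefit x months."""
    return expected_unemployed * avg_monthly_benefit * months

# Original budget assumption vs. a mid-year re-forecast (invented numbers)
initial = unemployment_line_item(200_000, 1_500.0)            # full year
revised = unemployment_line_item(350_000, 1_500.0, months=6)  # remaining 6 months
```

The formula itself is trivial; what changes is the process, since the inputs are re-estimated as conditions change rather than locked in at approval time.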

And Congress could also set not so much the individual amounts for each agency or program as their relative priorities under different scenarios.  Thus, in a recession there would be a need for more unemployment insurance funding, but that need would recede in the face of other priorities if the economy were booming.

To go back to the unemployment example, the amount needed in the budget will change as we get closer to the month being estimated and the estimates of the number of people who will be unemployed become more accurate.

Of course, the reader who knows my background won’t be surprised that I think the formulas in these cells could be derived by the use of some smart analytics and machine learning.  Ultimately, these methods could be enhanced with simulations – after all, what is a budget but an attempt to simulate a future period of financial needs?

More on that in another post sometime in the future.

© 2020 Norman Jacknis, All Rights Reserved

Words Matter In Building Intelligent Communities

The Intelligent Community Forum (ICF) is an international group of city, town and regional leaders as well as scholars and other experts who are focused on quality of life for residents and intelligently responding to the challenges and opportunities provided by a world and an economy that is increasingly based on broadband and technology.

To quote from their website: “The Intelligent Community Forum is a global network of cities and regions with a think tank at its center.  Its mission is to help communities in the digital age find a new path to economic development and community growth – one that creates inclusive prosperity, tackles social challenges, and enriches quality of life.”

Since 1999, ICF has held an annual contest and announced an award to intelligent communities that go through an extensive investigation and comparison to see how well they are achieving these goals.  Of hundreds of applications, some are selected for an initial, more in-depth assessment and become semi-finalists in a group called the Smart21.

Then the Smart21 are culled to a smaller list of the Top7 most intelligent communities in the world each year.  There are rigorous quantitative evaluations conducted by an outside consultancy, field trips, a review by an independent panel of leading experts/academic researchers and a vote by a larger group of experts.

An especially important part of the selection of the Top7 from the Smart21 is an independent panel’s assessment of the projects and initiatives that justify a community’s claim to being intelligent.

It may not always be clear to communities what separates these seven most intelligent communities from the rest.  After all, these descriptions are just words.  We understand that words matter in political campaigns.  But words matter outside of politics in initiatives, big and small, that are part of governing.

Could the words that leaders use be part of what separates successful intelligent initiatives from those of others who are less successful in building intelligent communities?

In an attempt to answer that question, I obtained and analyzed the applications submitted over the last ten years.  Then, using the methods of analytics and machine learning that I teach at Columbia University, I sought to determine if there was a difference in how the leaders of the Top7 described what they were doing in comparison with those who did not make the cut.

Although at a superficial level, the descriptions seem somewhat similar, it turns out that the leaders of more successful intelligent community initiatives did, indeed, describe those initiatives differently from the leaders of less successful initiatives.

The first significant difference was that the descriptions of the Top7 had more to say about their initiatives, since apparently they had more accomplishments to discuss.  Their descriptions had less talk about future plans and more about past successes.

In describing the results of their initiatives so far, they used numbers more often, providing greater evidence of those results.  Even though they were discussing technology-based or otherwise sometimes complex projects, they used more informal, less dense and less bureaucratic language.

Among the topics they emphasized, engagement, leadership and the technology infrastructure stood out.  Less important, but still a differentiator, the more successful leaders emphasized the smart city, innovation and economic growth benefits.

For those leaders who wish to know what will gain them recognition for real successes in transforming their jurisdictions into intelligent communities, the results would indicate these simple rules:

  • Have and highlight a solid technology infrastructure.
  • Pursue extensive civic engagement, and frequently mention that engagement and the role of civic leadership in moving the community forward – that is where true success comes from.
  • Use less bureaucratic formality and put more stress on results (quantitative measures of outcomes) in public statements, which is also associated with greater success in these initiatives.

On the other hand, a laundry list of projects that are not tied to civic engagement and necessary technology, particularly if those projects have no real track record, is not the path to outstanding success – even if they check off the six wide-ranging factors that the ICF expects of intelligent communities.

While words do matter, it is also true that other factors can impact the success or failure of major public initiatives.  However, these too can be added into the models of success or failure, along with the results of the textual analytics.

Overall, the results of this analysis can help public officials understand a little better how they need to think about what they are doing and then properly describe it to their citizens and others outside of their community.  This will help them to be more successful, most importantly for their communities and, if they wish, as well in the ICF awards process.

© 2020 Norman Jacknis, All Rights Reserved

Working From Home Will Change Cities

Just three years ago, the New York Times had this headline “Why Big Cities Thrive, and Smaller Ones Are Being Left Behind” – trumpeting the victory of big cities over their smaller competitors, not to mention the suburbs and rural areas.  At the top of that heap, of course, was New York City.

Now the headlines are different:

A week ago, the always perceptive Claire Cain Miller added another perspective in an Upshot article that was headlined with the question “Is the Five-Day Office Week Over?”  Her answer, in the sub-title, was that the “pandemic has shown employees and employers alike that there’s value in working from home — at least, some of the time.”

This chart summarizes a part of what she wrote about.  As Miller’s story makes quite clear, it is important to realize that some of what has happened during the COVID pandemic will continue after we have finally overcome it and people are free to resume activities anywhere.  Some of the current refugees from cities will likely move back to the cities and many city residents remained there, of course.  But the point is that many of these old, returning and new urban residents will have different patterns of work and that will require cities to change.

While the focus of this discussion has mostly been on remote office work, some observers note that cities still have lots of workers who do not work in offices.  While there are clearly numerous jobs that require laying hands on something or someone, there are also blue-collar jobs that do not strictly require a physical presence.

I have seen factories that could be remotely controlled, even before the pandemic.  Now this option is getting even more attention.  One of the technology trade magazines recently (7/3/2020) had a story with this headline – “Remote factories: The next frontier of remote work.”  In another example, GE has been offering technology solutions to enable the employees of utility companies to remotely control facilities – see “Remote Control: Utilities and Manufacturers Turn to Automation Software To Operate From Home During Outbreak”.

So perhaps the first blush of victory of big cities, like the British occupation of New York City during the American Revolution or the invasion of France in World War II, did not indicate how the war would end.  Perhaps the war has not ended because, in an internet age where many people can work from home, home does not have to be in big cities, after all, or if it is in a big city it does not have to be in a gleaming office tower.

These trends and the potential of the internet and technology to disrupt traditional urban patterns, of course, have been clear for more than ten years.  But few mayors and other urban leaders paid attention.  After all, they were in a recent period in which they could just ride the wave of what seemed to be ever-increasing density and growth in cities – especially propelled by young people seeking office jobs in their cities.  This was a wonderful dream, combining the urban heft of the industrial age with cleaner occupations.

Now the possibility of a different world is hitting them in the face.  It is not merely a switch from factory to office employment, but a change from industrial era work patterns too.  Among other things that change means that people do not all have to show up in the same place at the same time.  This change requires city leaders to start thinking about all the various ways that they need to adjust their traditional thinking.

Here are just three of the ways that cities will be impacted by an increasing percentage of work being done at home:

  • Property taxes in most cities usually have higher rates on commercial property than on residential property. Indeed, commercial real estate has been the goose that has laid the golden eggs for those cities which have had flourishing downtowns.  But if the amount of square footage in commercial property decreases, the value of those properties and hence the taxes will go down.  On the other hand, most elected officials are loath to raise taxes on residential real estate, even if those residences are now generating income through commercial activities – a job at home most of the week.
  • Traffic and transit patterns used to be quite predictable. There was rush hour in the morning and afternoon, when everyone was trying to get to the same densely packed core.  With fewer people coming to the office every day, that will change.  Even those who meet downtown may not be going there for the 9:00 AM start of the work day, but for a lunch meeting.  Then there is the matter of more frequent, relatively small deliveries to homes, rather than large deliveries to stores in the central business district.  This too turns the traditional patterns upside down.
  • Excitement and enticement have, of course, been traditional advantages of cities. Downtown is where the action is.  Even that is changing.  Although it is still fun to go to Broadway, for example, I suspect that most people had a better view of the actors in the Disney Plus presentation of Hamilton than did those who paid a lot more money to sit many rows back, even in the orchestra section of the theater.  At some point, people will balance this out.  So cities are going to have to be a lot more creative and find new ways, new magic, to bring people to their core.

Cities have evolved before.  In the 18th century, American cities thrived on the traffic going through their ports.  While the ports still played a role, in later centuries, cities grew dramatically and thrived on their factories and industrial might.  Then they replaced factories with offices.

A transition to an as yet unclear future version of cities can be done and will be done successfully by those city leaders who don’t deny what is happening, but instead respond with a new vision – or at least new experimentation that they can learn from.

© 2020 Norman Jacknis, All Rights Reserved

Is It 1832 Or 2020? Virtual Convention Or Something New?

In these blogs, I’ve often noted how people seem wedded to old ways of thinking, even when those old ways are dressed up in new clothes.

Despite all the technology around us, it’s amazing how little some things have changed.  Too often, today seems like it was 120 years ago when people talked and thought about “horseless carriages” rather than the new thing that was possible – the car with all the possibilities it opened.

So it was with interest that I read this recent story – “Democrats confirm plans for nearly all-virtual convention”:

“Democrats will hold an almost entirely virtual presidential nominating convention Aug. 17-20 in Milwaukee using live broadcasts and online streaming, party officials said Wednesday.”

Party conventions have been around since 1832.  They were changed a little bit when they went on radio and then later on television.  But mostly they have always been filled with lots of people hearing speeches, usually from the podium.

Following in this tradition going back to 1832, the Democratic Party is going to have a convention, but it can’t have lots of people gathered together with COVID-19.  This one will be “a virtual convention in Milwaukee”, which seems like a contradiction – something that is both virtual and happening in a physical place?  I guess it only means that Joe Biden will be in Milwaukee along with the convention officials to handle procedures.

Indeed, it’s not entirely clear what this convention will look like.  In addition to the main proceedings in Milwaukee, the article indicates that “Democrats plan other events in satellite locations around the country to broadcast as part of the convention”.  I assume those will be similar.

The article also notes the experience of the convention’s producer: “Kirshner knows how it’s done: He has produced every Democratic national convention since 1992.”

Hopefully this will be different from every convention since 1832 – or even 1992!

Instead of the standard speeches on the screen or even other activities that are just video of something that could occur on-stage, do something that is more up-to-date.  This will show that Biden will not only be a different kind of President than Trump, but that he also will know how to lead us into the future.

Why not do something that takes advantage of not having to be in a convention hall?

For example, how about a walk (or drive, if necessary) through the speaker’s neighborhood (masks on) explaining what the problems are and what Biden wants to do about those problems?

My suggestions are limited since creative arts are not my specialty, but I do see an opportunity to do something different.  It is a good guess that Hollywood is also eager to help defeat Trump and would offer all kinds of innovative assistance.  Make it an illustration of American collaboration at its best.

This should not be an unusual idea for the Biden organization.  Among his top advisors are Zeppa Kreager, his Chief of Staff, formerly the Director of the Creative Alliance (part of Civic Nation), and Kate Bedingfield, Deputy Campaign Manager and Communications Director, formerly Vice President at Monumental Sports and Entertainment.

Of course, the Trump campaign could take the same approach, but they do not seem interested and Trump obviously adores a large in-person audience.  So there is a real opportunity for Biden to differentiate himself.

Beyond the short-term electoral considerations, this would also make political history by setting a new pattern for political conventions.

© 2020 Norman Jacknis, All Rights Reserved

Trump And Cuomo COVID-19 Press Conferences

Like many other people who have been watching the COVID-19 press conferences held by Trump and Cuomo, I came away with a very different feeling from each.  Beyond the obvious policy and partisan differences, I felt there is something more going on.

Coincidentally, I’ve been doing some research on text analytics/natural language processing on a different topic.  So, I decided to use these same research tools on the transcripts of their press conferences from April 9 through April 16, 2020.  (Thank you to the folks at Rev.com for making available these transcripts.)

One of the best approaches is known by its initials, LIWC (Linguistic Inquiry and Word Count), and was created some time ago by Pennebaker and colleagues to assess the psycho-social dimensions of texts.   It’s worth noting that this assessment is based purely on the text – their words – and doesn’t include non-verbal communications, like body language.

While there were some unsurprising results to people familiar with both Trump and Cuomo, there are also some interesting nuances in the words they used.

Here are the most significant contrasts:

  • The most dramatic distinction between the two had to do with emotional tone. Trump’s words had almost twice the emotional content of Cuomo’s, including words like “nice”, although maybe the use of that word should not be taken at face value.
  • Trump also spoke of rewards/benefits and money about 50% more often than Cuomo.
  • Trump emphasized allies and friends about twenty percent more often than Cuomo.
  • Cuomo used words that evoked health, anxiety/pain, home and family two to three times more often than Trump.
  • Cuomo asked more than twice as many questions, although some of these could be sort of rhetorical – like “what do you think?”
  • However, Trump was 50% more tentative in his declarations than Cuomo, whereas Cuomo had greater expressions of certainty than Trump.
  • While both men spoke in the present tense much more than the future, Cuomo’s use of the present was greater than Trump’s. On the other hand, Trump’s use of the future tense and the past tense was greater than Cuomo’s.
  • Trump used “we” a little more often than Cuomo and much more than he used “you”. Cuomo used “you” between two and three times more often than Trump.  Trump’s use of “they” even surpassed his use of “you”.
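LIWC itself is a proprietary tool, but the dictionary-counting idea behind comparisons like these can be sketched in a few lines of Python.  The word lists below are tiny, invented stand-ins for the real LIWC lexicons, so this only illustrates the mechanics, not the actual measurement:

```python
from collections import Counter
import re

# Tiny invented stand-ins; the real LIWC category dictionaries are far larger.
CATEGORIES = {
    "positive_emotion": {"nice", "great", "good", "tremendous"},
    "health": {"virus", "hospital", "test", "mask"},
    "you": {"you", "your", "yours"},
}

def category_rates(text: str) -> dict:
    """Share of all words falling in each category, the way LIWC
    reports a percentage of total words per dimension."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)  # avoid dividing by zero on empty text
    counts = Counter()
    for word in words:
        for category, lexicon in CATEGORIES.items():
            if word in lexicon:
                counts[category] += 1
    return {c: counts[c] / total for c in CATEGORIES}
```

Comparing two speakers is then just a matter of running both transcripts through the same function and contrasting the resulting rates, which is essentially what the bullet points above summarize.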

Distinctions of this kind are never crystal clear, even with sophisticated text analytics and machine learning algorithms.  The ambiguity of human speech is not just a problem for machines, but also for people communicating with each other.

But these comparisons from text analytics do provide some semantic evidence for the comments by non-partisan observers that Cuomo seems more in command.  This may be because the features of his talks would seem to better fit the movie portrayal and the average American’s idea of leadership in a crisis – calm, compassionate, focused on the task at hand.

© 2020 Norman Jacknis, All Rights Reserved

More Than A Smart City?

The huge Smart Cities New York 2018 conference started today. It is billed as:

“North America’s leading global conference to address and highlight critical solution-based issues that cities are facing as we move into the 21st century. … SCNY brings together top thought leaders and senior members of the private and public sector to discuss investments in physical and digital infrastructure, health, education, sustainability, security, mobility, workforce development, to ensure there is an increased quality of life for all citizens as we move into the Fourth Industrial Revolution.”

A few hours ago, I helped run an Intelligent Community Forum Workshop on “Future-Proofing Beyond Tech: Community-Based Solutions”. I also spoke there about “Technology That Matters”, which this post will quickly review.

As with so much of ICF’s work, the key question for this part of the workshop was: Once you’ve laid down the basic technology of broadband and your residents are connected, what are the next steps to make a difference in residents’ lives?

I have previously focused on the need for cities to encourage their residents to take advantage of the global opportunities in business, education, health, etc. that become possible when you are connected to the whole world.

Instead, in this session, I discussed six steps that are more local.

1. Apps For Urban Life

This is the simplest first step and many cities have encouraged local or not-so-local entrepreneurs to create apps for their residents.

But many cities that are not as large as New York are still waiting for those apps. I gave the example of Buenos Aires as a city that didn’t wait and built more than a dozen of its own apps.

I also reminded attendees that there are many potentially useful apps for their residents which cannot generate enough profit to be of interest to the private sector, so the government will have to create these apps on its own.

2. Community Generation Of Urban Data

While some cities have posted their open data, there is much data about urban life that residents themselves can collect. The most popular example is the community generation of environmental data, with such products as the Egg, the Smart Citizen Kit for Urban Sensing, the Sensor Umbrella and even more sophisticated tools like Placemeter.

But the data doesn’t just have to be about the physical environment. The US National Archives has been quite successful in getting citizen volunteers to generate data – and meta-data – about the documents in its custody.

The attitude which urban leaders need is best summarized by Professor Michael Batty of the University College London:

“Thinking of cities not as smart but as a key information processor is a good analogy and worth exploiting a lot, thus reflecting the great transition we are living through from a world built around energy to one built around information.”

3. The Community Helps Make Sense Of The Data

Once the data has been collected, someone needs to help make sense of it. This effort too can draw upon the diverse skills in the city. Platforms like Zooniverse, with more than a million volunteers, are good examples of what is called citizen science. For the last few years, there has been OpenData Day around the world, in which cities make available their data for analysis and use by techies. But I would go further and describe this effort as “popular analytics” – the virtual collaboration of both government specialists and residents to better understand the problems and patterns of their city.

4. Co-Creating Policy

Once the problems and opportunities are better understood, it is time to create urban policies in response.  With the foundation of good connectivity, it becomes possible for citizens to conveniently participate in the co-creation of policy. I highlighted examples from the citizen consultations in Lambeth, England to those in Taiwan, as well as the even more ambitious CrowdLaw project that is housed not far from the Smart Cities conference location.

5. Co-Production Of Services

Next is the execution of policy. As I’ve written before, public services do not always have to be delivered by paid civil servants (or even better-paid companies with government contracts). The residents of a city can be co-producers of services, as exemplified in Scotland and New Zealand.

6. Co-Creation Of The City Itself

Obviously, the people who build buildings or even tend to gardens in cities have always had a role in defining the physical nature of a city. What’s different in a city that has good connectivity is the explosion of possible ways that people can modify and enhance that traditional physical environment. Beyond even augmented reality, new spaces that blend the physical and digital can be created anywhere – on sidewalks, walls, even in water spray. And the residents can interact and modify these spaces. In that way, the residents are constantly co-creating and recreating the urban environment.

The hope of ICF is that the attendees at Smart Cities New York start moving beyond the base notion of a smart city to the more impactful idea of an intelligent city that uses all the new technologies to enhance the quality of life and engagement of its residents.

© 2018 Norman Jacknis, All Rights Reserved

Are Any Small Towns Flourishing?

We hear and read how the very largest cities are growing, attractive places for millennials and just about anyone who is not of retirement age. The story is that the big cities have had almost all the economic gains of the last decade or so, while the economic life has been sucked out of small towns and rural areas.

The images above are what seem to be in many minds today — the vibrant big city versus the dying countryside.

Yet, we are in a digital age when everyone is connected to everyone else on the globe, thanks to the Internet. Why hasn’t this theory of economic potential from the Internet been true for the countryside?

Well, it turns out that it is true. Those rural areas that do in fact have widespread access to the Internet are flourishing. These towns with broadband are exemplary but, unfortunately, not in the majority.

Professor Roberto Gallardo of the Purdue Center for Regional Development has dug deep into the data about broadband and growth. The results have recently been published in an article that Robert Bell and I helped write. You can see it below.

So, the implication of the image above is half right — this is a life-or-death issue for many small towns. The hopeful note is that those with broadband and the wisdom to use it for quality of life will not die in this century.

© 2018 Norman Jacknis, All Rights Reserved


[This article is republished from the Daily Yonder, a non-profit media organization that specializes in rural trends, filling the vacuum of news coverage about the countryside.]

When It Comes to Broadband, Millennials Vote with Their Feet

By Roberto Gallardo, Robert Bell, and Norman Jacknis

April 11, 2018

When they live in remote rural areas, millennials are more likely to reside in a county that has better digital access. The findings could indicate that the digital economy is helping to decentralize economic activity, rather than just clustering it in the cities that are already the largest.

Sources: USDA; Pew Research; US Census Bureau; Purdue Center for Regional Development. This graph shows that the number of Millennials and Gen Xers living in the nation’s most rural counties is increasing in counties with a low “Digital Divide Index.” The graph splits the population in “noncore” (rural) counties into three generations. Then, within each generation, it looks at population change based on the Digital Divide Index. The index measures the digital divide using two sets of criteria: one that looks at the availability and adoption of broadband, and another that looks at socio-economic factors, such as income and education levels, that affect broadband use. Counties are split into five groups, or quintiles, based on the index, with group 1 (orange) having the most access and group 5 (green) having the least.

Cities are the future and the countryside is doomed, as far as population growth, jobs, culture and lifestyle are concerned. Right?

Certainly, that is the mainstream view expressed by analysts at organizations such as Brookings. This analysis holds that the “clustering” of business that occurred during the industrial age will only accelerate as the digital economy takes hold, deepening the competitive advantage that cities have always had in modern times.

But other pundits and researchers argue that the digital age will result in “decentralization” and a more level playing field between urban and rural. Digital technologies are insensitive to location and distance and potentially offer workers a much greater range of opportunities than ever before.

The real question is whether rural decline is inevitable or whether the digital economy has characteristics that are already starting to write a different story for rural America. We have recently completed research that suggests the latter.

Millennial Trends

While metro areas still capture the majority of new jobs and population gains, there is some anecdotal evidence pointing in a different direction. Consider a CBS article that notes how, due to high housing costs, horrible traffic, and terrible work-life balances, Bend, Oregon, is seeing an influx of teleworkers from Silicon Valley. The New York Times has reported on the sudden influx of escapees from the Valley that is transforming Reno, Nevada — for good or ill, it is not yet clear.

Likewise, a Fortune article argued that “millennials are about to leave cities in droves” and the Telegraph mentioned “there is a great exodus going on from cities” in addition to Time magazine reporting that the millennial population of certain U.S. cities has peaked.

Why millennials? Well, they have been dubbed the first digital-native generation, so their migration patterns could indicate the beginning of a digital-age decentralization.

An Age-Based Look at Population Patterns

In search of insight, we looked at population change among the three generations that make up the country’s entire workforce: millennials, Generation X, and baby boomers.

First, we defined each generation. Table 1 shows the age ranges of each generation according to the Pew Research Center, both in 2010 and 2016, as well as the age categories used to measure each generation. While not an exact match, categories are consistent across years and geographies.

In addition to looking at generations, we used the Office of Management and Budget’s core-based typology to control for county type (metropolitan, small city [micropolitan], and rural [noncore]). To factor in how digital access affects local economies, we used the Digital Divide Index (DDI). The DDI, developed by the Purdue Center for Regional Development, ranges from zero to 100; the higher the score, the higher the digital divide. It has two components: 1) broadband infrastructure/adoption and 2) socioeconomic characteristics known to affect technology adoption.

Looking at overall trends, it does look like the digital age is not having a decentralizing effect. Indeed, according to data from the economic modeling service Emsi, the U.S. added 19.4 million jobs between 2010 and 2016. Of these, 94.6 percent were located in metropolitan counties, compared to only 1.6 percent in rural counties.

Population growth tells a similar story. Virtually all of the 14.4 million growth in U.S. population between 2010 and 2016 occurred in metropolitan counties, according to the Census Bureau. The graph below (Figure 1) shows the total population change overall and by generation and county type. As expected, the number of baby boomers (far right side of the graph) is falling across all county types, while millennials and Generation X (the middle two sets of bars) are growing only in metro counties.

But there is a different story. Looking only at rural counties (what the OMB classification system calls “noncore”), divided into five equal groups or quintiles based on their digital divide (1 = lowest divide, 5 = highest), the figure at the very top of this article shows that rural counties experienced an increase in millennials where the digital divide was lowest. (The millennial population grew by 2.3 percent in rural counties where the digital divide was the lowest.) It is important to note that this same pattern occurs in metropolitan and small-city counties as well.
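As a concrete illustration of the quintile analysis described above, here is a minimal sketch in plain Python. The county records and figures are hypothetical — this is not the actual Purdue dataset or code — and it only shows the mechanics of splitting counties by DDI and comparing population change across groups.

```python
# Sketch of the quintile analysis: split counties into five groups by
# Digital Divide Index (DDI), then compare millennial population change
# across groups. All data below is hypothetical, for illustration only.

def ddi_quintiles(counties, k=5):
    """Assign each county a quintile (1 = lowest DDI, 5 = highest)."""
    ranked = sorted(counties, key=lambda c: c["ddi"])
    n = len(ranked)
    for i, county in enumerate(ranked):
        county["quintile"] = min(k, i * k // n + 1)
    return ranked

def pct_change_by_quintile(counties):
    """Percent change in millennial population, aggregated per quintile."""
    totals = {}
    for c in counties:
        q = c["quintile"]
        before, after = totals.get(q, (0, 0))
        totals[q] = (before + c["mill_2010"], after + c["mill_2016"])
    return {q: round(100 * (after - before) / before, 1)
            for q, (before, after) in sorted(totals.items())}

counties = [
    {"ddi": 12, "mill_2010": 1000, "mill_2016": 1030},
    {"ddi": 25, "mill_2010": 800,  "mill_2016": 815},
    {"ddi": 48, "mill_2010": 900,  "mill_2016": 890},
    {"ddi": 71, "mill_2010": 700,  "mill_2016": 680},
    {"ddi": 93, "mill_2010": 600,  "mill_2016": 560},
]
ddi_quintiles(counties)
print(pct_change_by_quintile(counties))
# → {1: 3.0, 2: 1.9, 3: -1.1, 4: -2.9, 5: -6.7}
```

In this toy data, as in the Purdue findings, population grows only in the lowest-divide quintiles; the real analysis, of course, uses thousands of counties and Census figures.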

Impact on the “Really Rural” County

“Urban” and “rural” can be tricky terms when it comes to demographics. The Census Bureau reports that 80% of the population lives in urban areas. Seventy-five percent of those “urban” areas, however, are actually small towns with populations of under 20,000. They are often geographically large, with a population density that falls off rapidly once you leave the center of town.

On the other hand, some rural counties are adjacent to metro areas and may benefit disproportionately from their location or even be considered metropolitan due to their commuting patterns. Because of this, we turned to another typology developed by the U.S. Department of Agriculture Economic Research Service that groups counties into nine types ranging from large metro areas to medium size counties adjacent to metro areas to small counties not adjacent to metro areas.

Figure 3 (below) shows counties considered completely rural or with an urban population of less than 2,500, not adjacent to a metro area. Among these counties, about 420 in total, those with the lowest digital divide experienced a 13.5 percent increase in millennials between 2010 and 2016. In other words, in the nation’s “most rural” counties, the millennial population increased significantly when those counties had better broadband access.

Sources: USDA; Pew Research; US Census Bureau; Purdue Center for Regional Development. This graph shows population change by generation and “DDI” quintile in the nation’s most rural counties (rural counties that are farthest from metropolitan areas). In rural counties with the best digital access (a low digital divide index), the number of Millennials and Gen Xers increased.

The New Connected Countryside: A Work in Progress

To conclude: if you just look at overall numbers, our population seems to be behaving just as it did in the industrial age, moving to cities where jobs and people are concentrated. Rural areas that lag in broadband connectivity and digital literacy will continue to suffer from these old trends.

However, the digital age is young, and its full effects are still to be felt. Remember that it took several decades for electricity and the automobile to revolutionize society. Moreover, many areas outside metro regions still lag in broadband connectivity and digital literacy, which limits their ability to leverage the technology to improve their quality of life and potentially reverse migration trends.

Whether or not decentralization will take place remains to be seen. What is clear, though, is that (while other factors have an impact as well) any community attempting to retain or attract millennials needs to address its digital divide, both in terms of broadband access and adoption/use.

In other words, our data analysis suggests that if a rural area has widely available and adopted broadband, it can start to successfully attract or retain millennials.

Roberto Gallardo is assistant director of the Purdue Center for Regional Development and a senior fellow at the Center for Rural Strategies, which publishes the Daily Yonder. Robert Bell is co-founder of the Intelligent Community Forum. Norman Jacknis is a senior fellow at the Intelligent Community Forum and on the faculty of Columbia University.

Broadband Networks & NYC Subways

[This was originally posted on a blog for government leaders on October 12, 2009, and republished on June 20, 2011.]

Many governments around the world are struggling to find the best way to get broadband networks built in their areas.  (The USA may be struggling most of all.)

I thought about some historical precedents for major local infrastructure projects.  While the US Interstate Highway system is often cited as such a precedent, it falls short of representing the current debate because no one proposed in the 1950s that we should “let the private sector do it.”

But the huge New York City rail transit system is perhaps a better historical analogy.  It is important to note that the way the current system operates – as a single government owned and operated system – is not how it started or operated for many of its early years.

It seems that New York City government used every possible method including:

  • Let private companies own, build and run mass transit lines.  (Then take them over when they fail – due to the underlying economic properties of such infrastructure, which make it more like a public good than a private good that can sustain a profit.)
  • Own the rights to the transit line yourself, but let a private company build and operate it.
  • Build the transit line yourself, but let a private company operate it.
  • Build the transit line and also run it.
  • Fake it – act as if a new transit line is going to be built and run by a private company, but do it yourself when no private company steps up.

One other aspect of this history is of interest: the use of the “dual contracts.”  These allowed more than one rail operator to use the same tracks, which is analogous to the open-network approach in today’s broadband world: whether the fiber backbone of broadband networks should be open to all users.

This opportunistic strategy perhaps made it easier and quicker for New York City to bring its great transit system to life.  Of course, eventually, this same lack of coherence created future problems and inefficiencies.  And by the time the great expansion of transit lines was finished, the government ended up owning and operating the whole system and sporadically filling some of the remaining unserved areas.

Was the trade-off of a fast growth opportunistic strategy against longer term problems worth it?  Given the success and the role that the subways have played in New York City’s development, the answer is likely yes.

I’ve combined excerpts from a couple different sources (especially the now ubiquitous Wikipedia) to highlight some aspects of that system’s history. …

———————–

History of the New York City Subway

The beginnings of the Subway came from various excursion railroads to Coney Island and elevated railroads in Manhattan and Brooklyn. At that time, New York County (Manhattan Island and part of the Bronx), Kings County (including the Cities of Brooklyn and Williamsburg) and Queens County were separate political entities.

In New York, competing steam-powered elevated railroads were built over major avenues. The first elevated line was constructed in 1867-70 by Charles Harvey and his West Side and Yonkers Patent Railway company along Greenwich Street and Ninth Avenue (although cable cars were the initial mode of transportation on that railway). Later more lines were built on Second, Third and Sixth Avenues. None of these structures remain today, but these lines later shared trackage with subway trains as part of the IRT system.

In Kings County [Brooklyn], elevated railroads were also built by several companies. These also later shared trackage with subway trains, and even operated into the subway, as part of the BRT and BMT. These lines were linked to Manhattan by various ferries and later the tracks along the Brooklyn Bridge (which originally had their own line, and were later integrated into the BRT/BMT).  Also in Kings County, six steam excursion railroads were built to various beaches in the southern part of the county; all but one eventually fell under BMT control.

In 1898, New York, Kings and Richmond Counties, and parts of Queens and Westchester Counties and their constituent cities, towns, villages and hamlets were consolidated into the City of Greater New York. During this era the expanded City of New York resolved that it wanted the core of future rapid transit to be underground subways, but realized that no private company was willing to put up the enormous capital required to build beneath the streets.

The City decided to issue rapid transit bonds outside of its regular bonded debt limit and build the subways itself, and contracted with the IRT (which by that time ran the elevated lines in Manhattan) to equip and operate the subways, sharing the profits with the City and guaranteeing a fixed five-cent fare.

The Interborough Rapid Transit (IRT) subway opened in 1904. The city contracted construction of the line to the IRT Company, but ownership was always held by the city. The IRT built, equipped, and operated the line under a lease from the city. The IRT also leased the Manhattan Railway elevated lines in Manhattan and the Bronx for 999 years!

In Brooklyn, the various elevated railroads and many of the surface steam railroads, as well as most of the trolley lines, were consolidated under the BRT. Some improvements were made to these lines at company expense during this era.  Then the Brooklyn-Manhattan Transit (BMT, formerly the Brooklyn Rapid Transit, BRT) was the rapid transit company which built, bought, or assumed control of the Brooklyn elevated lines.

The BRT, which just barely entered Manhattan via the Brooklyn Bridge, wanted the opportunity to compete with the IRT, and the IRT wanted to extend its Brooklyn line to compete with the BRT. This led to the City’s agreeing to contract for future subways with both the BRT and IRT.  The expansion of rapid transit was greatly facilitated by the signing of the Dual Contracts in 1913. Finished mostly by 1920, some of the new lines had trains operated by both companies.

The majority of the present-day subway system was either built or improved under [four sequential] contracts to the IRT and BRT.

The City, bolstered by political claims that the private companies were reaping profits at taxpayer expense, determined that it would build, equip and operate a new system itself, with private investment and without sharing the profits with private entities. This led to the building of the Independent City-Owned Subway (ICOS), sometimes called the Independent Subway System (IND), which was not connected to the IRT or BMT lines. This system consisted almost entirely of subway construction, with only one elevated portion.

As the first line neared completion, New York City offered it for private operation as a formality, knowing that no operator would meet its terms. Thus the city declared that it would operate it itself, formalizing a foregone conclusion. The first line opened without a formal ceremony.

Only two new lines were opened [later], the IRT Dyre Avenue Line (1941) and the IND Rockaway Line (1956). Both of these lines were rehabilitations of existing railroad rights-of-way rather than new construction.

In June 1940, the transportation assets of the former BMT and IRT systems were taken over by the City of New York for operation by the City’s Board of Transportation, which already operated the IND system.  After city takeover of the bankrupt BMT and IRT companies, many of the elevated lines were closed, and a slow “unification” took place, marked notably by establishment of several free transfer points between divisions in 1948 and a few points of through running between IND and BMT lines beginning in 1954.

A combination of factors had this takeover coincide with the end of the major rapid transit building eras in New York City. The City immediately began to eliminate what it considered redundancy in the system, closing several elevated lines.

[But] Because the early subway systems competed with each other, they tended to cover the same areas of the city, leading to much overlapping service. The amount of service has actually decreased since the 1940s as many elevated railways were torn down, and finding funding for underground replacements has proven difficult.

Despite the unification, a distinction between the three systems survives in the service labels: IRT lines (now referred to as A Division) have numbers and BMT/IND (now collectively B Division) lines use letters. There is also a more physical but less obvious difference: Division A cars are narrower than those of Division B by 18 inches (~45cm) and shorter by 9 to 24 feet (~2.7 to 7.3m).  A BMT/IND-style train cannot fit into an IRT tunnel (the numbered lines and the 42nd Street Shuttle). An IRT train CAN fit into a BMT/IND tunnel, but since it is narrower, the distance from car to platform is unsafe. Cars from the IRT division are moved using BMT/IND tracks to Coney Island Overhaul Shops for major maintenance on a regular basis.  Division B equipment could operate on much of Division A if station platforms were trimmed and trackside furniture moved. Being able to do so would increase the capacity of Division A. However, there is virtually no chance of this happening, because the portions of Division A that could not accommodate Division B equipment without major physical reconstruction are situated in such a way that it would be impossible to put together coherent through services.

© 2011 Norman Jacknis

Gold Mining

[Published 6/18/2011 and originally posted for government leaders, July 6, 2009]

My last posting was about the “goldmine” that exists in the information your government collects every day. It’s a goldmine because this data can be analyzed to determine how to save money by learning what policies and programs work best. Some governments have the internal skills to do this kind of sophisticated analysis or they can contract for those skills. But no government – not even the US Federal government – has the resources to analyze all the data they have.

What can you do about that? Maybe there’s an answer in a story about real gold mining from the authors of the book “Wikinomics”[1]:

A few years back, Toronto-based gold mining company Goldcorp was in trouble. Besieged by strikes, lingering debts, and an exceedingly high cost of production, the company had terminated mining operations…. [M]ost analysts assumed that the company’s fifty-year old mine in Red Lake, Ontario, was dying. Without evidence of substantial new gold deposits, Goldcorp was likely to fold. Chief Executive Officer Rob McEwen needed a miracle.

Frustrated that his in-house geologists couldn’t reliably estimate the value and location of the gold on his property … [he] published his geological data on the Web for all to see and challenged the world to do the prospecting. The “Goldcorp Challenge” made a total of $575,000 in prize money available to participants who submitted the best methods and estimates. Every scrap of information (some 400 megabytes worth) about the 55,000 acre property was revealed on Goldcorp’s Web site.

News of the contest spread quickly around the Internet and more than 1,000 virtual prospectors from 50 countries got busy crunching the data. Within weeks, submissions from around the world were flooding into Goldcorp headquarters. There were entries from graduate students, management consultants, mathematicians, military officers, and a virtual army of geologists. “We had applied math, advanced physics, intelligent systems, computer graphics, and organic solutions to inorganic problems. There were capabilities I had never seen before in the industry,” says McEwen. “When I saw the computer graphics, I almost fell out of my chair.”

The contestants identified 110 targets on the Red Lake property, more than 80% of which yielded substantial quantities of gold. In fact, since the challenge was initiated, an astounding 8 million ounces of gold have been found – worth well over $3 billion. Not a bad return on a half million dollar investment.

You probably won’t be able to offer a prize to analysts, although you might offer to share some of the savings that result from doing things better. But, since the public has an interest in seeing its government work better, unlike a private corporation, maybe you don’t have to offer a prize. And there are many examples on the Internet where people are willing to help out without any obvious monetary reward.

Certainly not everyone, but enough people might be interested in the data to take a shot at making sense of it – students or even college professors looking for research projects, retired statisticians, the kinds of folks who live to analyze baseball statistics, and anyone who might find this a challenge.

The Obama administration and its new IT leaders have made a big deal about putting federal data on the Web. There are dozens of data sets on the Federal site data.gov[2], which obviously takes care to deal with issues of individual privacy and national security. Although the primary interest is in government transparency, now that the data is there, we’ll start to see what people learn from all that information. Alabama[3] and the District of Columbia, among others, have started to do the same thing.

You can benefit a lot more if you, too, make your government’s data available on the web for analysis. Then your data, perhaps combined with the Federal data and other sources on the web, can give you an even better picture of how to improve your government than your own data alone.
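Mechanically, combining your data with federal open data usually comes down to joining records on a shared key, such as a county FIPS code. Here is a minimal sketch; the field names and values are hypothetical, not drawn from any actual data.gov dataset.

```python
# Join hypothetical local and federal records on a shared county FIPS code,
# so local outcomes can be analyzed alongside federal indicators.
local = [
    {"fips": "36119", "program_cost": 1.2},
    {"fips": "36087", "program_cost": 0.8},
]
federal = [
    {"fips": "36119", "median_income": 89000},
    {"fips": "36087", "median_income": 82000},
]

def join_on_fips(left, right):
    """Inner-join two record lists on their 'fips' field."""
    by_fips = {row["fips"]: row for row in right}
    return [{**row, **by_fips[row["fips"]]}
            for row in left if row["fips"] in by_fips]

merged = join_on_fips(local, federal)
print(merged)
```

Real datasets would first need cleaning and normalization of the key field (leading zeros in FIPS codes are a classic pitfall), but the join itself is this simple.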

  1. “Innovation in the Age of Mass Collaboration”, Business Week, Feb. 1, 2007 http://www.businessweek.com/innovate/content/feb2007/id20070201_774736.htm
  2. “Data.gov open for business”, Government Computer News, May 21, 2009, http://gcn.com/articles/2009/05/21/federal-data-website-goes-live.aspx
  3. “Alabama at your fingertips”, Government Computer News, April 20, 2009, http://gcn.com/articles/2009/04/20/arms-provides-data-maps-to-agencies.aspx

© 2011 Norman Jacknis

Beyond The Inbox And Outbox

[Re-published 5/18/2011.  This was originally posted on the web on June 15, 2009 for elected executives of governments.]

Every day, the employees of your government follow the same routine.

They have a stack of problems, applications, forms and the like in their inbox.  It may be a real, old-fashioned inbox with lots of paper or its computer-based equivalent. Doing the best they can, they work through the pile and, we hope, process the incoming tasks with wisdom and efficiency before moving them to the outbox. As far as many employees are concerned, their work is done when the item lands in the outbox.

However, for the people who run the government, this represents more than a ledger of what came in and what went out.  It is a gold mine of information.  Especially because of all the automation that has been put in place in government agencies, it is also an easily accessible gold mine.

Unfortunately, this gold mine is often ignored.  But if that data is analyzed, you will discover the patterns that can help you improve government programs and policies. Consider two examples, from very different areas, of what statistical analysis of that data can tell you:

What kinds of programs have worked best for which kinds of prisoners?  (This knowledge can be used to come up with better treatment and assignment of prisoners at intake.)

Who has used the public golf courses at what times of the week and day?  (This can identify where you might want to offer new programs targeted at particular groups of residents to even out usage during the day and get more golf fees.)
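The golf-course example above boils down to a simple aggregation of transaction records. A hedged sketch in Python, using hypothetical tee-time records rather than any real system’s data:

```python
# Tally rounds played by (weekday, time block) to find under-used slots.
# The tee-time records below are hypothetical, for illustration only.
from collections import Counter
from datetime import datetime

def usage_by_slot(tee_times):
    """Count rounds per (weekday, morning/afternoon) slot."""
    slots = Counter()
    for ts in tee_times:
        t = datetime.fromisoformat(ts)
        block = "morning" if t.hour < 12 else "afternoon"
        slots[(t.strftime("%A"), block)] += 1
    return slots

tee_times = [
    "2009-06-06T08:30", "2009-06-06T09:00", "2009-06-06T14:00",  # Saturday
    "2009-06-09T07:45",                                          # Tuesday
]
counts = usage_by_slot(tee_times)
least_used = min(counts, key=counts.get)
print(counts, least_used)
```

A parks department could run the same tally over a season of records to spot the under-used time slots worth targeting with new programs and discounted fees.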

In 2007, Professor Ian Ayres wrote a book, “SuperCrunchers: Why Thinking-By-Numbers Is The New Way To Be Smart”, in which he described how various organizations are using statistical analysis to dramatically improve their performance.

One of its chapters, “Government By Chance”, provides public sector examples and offers an interesting idea.

“Imagine a world where people looked to the IRS as a source for useful information. The IRS could tell a small business that it might be spending too much on advertising or tell an individual that the average taxpayer in her income bracket gave more to charity or made a larger IRA contribution. Heck, the IRS could probably produce fairly accurate estimates about the probability that small businesses (or even marriages) would fail. In fact, I’m told that Visa already does predict the probability of divorce based on credit card purchases (so that it can make better predictions of default risk). Of course, this is all a bit Orwellian. I might not particularly want to get a note from the IRS saying my marriage is at risk. But I might at least want the option of having the government make predictions about various aspects of my life. Instead of thinking of the IRS as solely a taker, we might also think of it as an information provider. We could even change its name to the Information & Revenue Service”.

This is yet another example, though, of moving the public sector from a transactional view of citizens to something more helpful.  While even the author admits the IRS example is a bit scary, there are other possibilities that are not scary and that your residents would like.

The use of the data the government collects for better policy and better service to citizens is what I call “learning how to drive the government” because it is different from the usual fad and fashion approach to policy.

Too often policy debates are like a driver in a car who cannot see outside the windows.  So the driver keeps going until the car hits a wall, at which point the usual reaction is to go in the opposite direction until the same thing happens again.  This accounts for the feeling of a pendulum swinging in public policy debates, rather than real learning occurring.

When everyday data is analyzed, it is like being able to look out the windows and figure out what direction to drive.

© 2011 Norman Jacknis