Technology and Trust

A couple of weeks ago, along with the Intelligent Community Forum (ICF) co-founder, Robert Bell, I had the opportunity to be in a two-day discussion with the leaders of Tallinn, Estonia — via Zoom, of course. As part of ICF’s annual selection process for the most intelligent community of the year, the focus was on how and why they became an intelligent community.

They are doing many interesting things with technology, both for e-government and, more generally, for the quality of life of their residents. One of their accomplishments, in particular, has laid the foundation for several others — the strong digital identities (and associated digital signatures) that the Estonian government provides to its citizens. Among other things, this enables paperless city government transactions and interactions, online elections, COVID contact warnings, and the protection and tracking of the use of personal data.

Most of the rest of the world, including the US, does not have strong, government-issued digital identities. The substitutes for that don’t come close — showing a driver’s license at a store in the US or using some third-party logon.

Digital identities have also enabled an E-Residency program for non-Estonians, now used by more than 70,000 people around the world.

As they describe it, in this “new digital nation … E-Residency enables digital entrepreneurs to start and manage an EU-based company online … [with] a government-issued digital identity and status that provides access to Estonia’s transparent digital business environment”

This has also encouraged local economic growth because, as they say, “E-Residency allows digital entrepreneurs to manage business from anywhere, entirely online … to choose from a variety of trusted service providers that offer easy solutions for remote business administration.” The Tallinn city leaders also attribute the strength of a local innovation and startup ecosystem to this gathering of talent from around the world.

All this would make a great story on its own, unusual in practice although not unheard of in discussions among technologists, including this one. Yet as impressive as that is, it was not what stood out most strongly in the discussion. What stood out was Tallinn’s unconventional perspective on the important issue of trust.

Trust among people is a well-known foundation for society and government in general. It is also essential for those who wish to lead change, especially the kind of changes that result from the innovations we are creating in this century.

I often hear various solutions to the problem of establishing trust through the use of better technology — in other words, the belief that technology can build trust.

In Tallinn’s successful experience with technology, cause and effect run mostly in the opposite direction. In Tallinn, successful technology is built on trust among people, trust that existed before the technology and is continually maintained regardless of it.

While well-thought-out technology can also enhance trust to an extent, in Tallinn, trust comes first.

This is an important lesson to keep in mind for technologists who are going about changing the world and for government leaders who look on technology as some kind of magic wand.

More than once in our discussions, Tallinn’s leaders restated an old idea that preceded the birth of computers: few things are harder to earn and easier to lose than trust.

© 2020 Norman Jacknis, All Rights Reserved

Bitcoin & The New Freedom Of Monetary Policy

Every developing technology has the potential for unintended consequences.  Blockchain technology is an example.  Although there are many possible uses of blockchain as a generally trusted and useful distributed approach to storing data, its most visible application has been virtual or crypto-currencies, such as Bitcoin, Ethereum and Litecoin. These once-obscure crypto-currencies are on a collision course with another trend that in its own way is based on technology — mostly digital government-issued money.

In particular, another once-obscure idea about government money is also moving more into the mainstream — modern monetary theory (MMT), which I mentioned a few weeks ago in my reference to Stephanie Kelton’s new book, “The Deficit Myth”. In doing a bit of follow-up on the subject, I came across many articles that were critical of MMT. Some were from mainstream economists. Many more were from advocates of crypto-currencies, especially Bitcoiners.

Although I doubt that Professor Kelton would agree, many Bitcoiners feel that governments have been using MMT since the 1970s — merely printing money. They forget about the tax and policy stances that Kelton advocates.

Moreover, there is a significant difference in the attitude of public leaders when they think they are printing money versus borrowing it from large, powerful financial interests. James Carville, chief political strategist and guru for President Clinton, famously said, “I used to think that if there was reincarnation, I wanted to come back as the president or the pope or as a .400 baseball hitter. But now I would like to come back as the bond market. You can intimidate everybody.”

For Bitcoiners, the battle lines are drawn and they do not like MMT, as even a quick sample of the headlines from the last year or so makes clear.

It is worth noting that MMT raises very challenging issues of governance. Who decides how much currency to issue? Who decides when there is too much currency? Who decides what government-issued money is spent on and to whom it goes? This is especially relevant in the US, where the central bank, the Federal Reserve, is at least in theory independent from elected leaders.

However, it also gives the government what may be a necessary tool to keep the economy moving during recessions, especially major downturns. Would a future dominated by cryptocurrencies, like Bitcoin, essentially tie the hands of the government in the face of an economic crisis, just as the gold standard did during the Panic of 1893 and the Great Depression (until President Roosevelt suspended the convertibility of dollars into gold)?

Think of MMT as a faucet controlling the flow of money as the needs of the economy change. If this were a picture of Bitcoin’s role, the faucet would be almost frozen, dripping a relatively fixed amount that is determined by Bitcoin mining.
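To make the faucet metaphor concrete, here is a small sketch. Bitcoin’s subsidy schedule (50 BTC per block, halving every 210,000 blocks) is part of its protocol; the policy rule below is purely hypothetical, invented for illustration.

```python
# The Bitcoin "faucet": issuance is fixed by protocol and only drips less
# over time: a 50 BTC block subsidy at launch, halving every 210,000 blocks.
def bitcoin_block_subsidy(block_height):
    halvings = block_height // 210_000
    return 50 / (2 ** halvings)

# A policy "faucet" (a purely hypothetical rule): open wider as
# unemployment rises above 4 percent.
def policy_issuance(base, unemployment_pct):
    return base + base * max(0, unemployment_pct - 4) // 10

print(bitcoin_block_subsidy(0))        # 50.0
print(bitcoin_block_subsidy(630_000))  # 6.25, after three halvings
print(policy_issuance(100, 4))         # 100, normal times
print(policy_issuance(100, 10))        # 160, deep recession
```

The point of the contrast: one faucet can respond to conditions, the other cannot.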

Less often discussed is that cryptocurrencies, as a practical matter, also end up needing some governance. I am not going to get into the weeds on this, but you can start with “In Defense of Szabo’s Law, For a (Mostly) Non-Legal Crypto System”. The implication is that cryptocurrencies need some kind of rules and laws enforced by some people. Sounds like at least a little bit of government to me.

Putting that aside, if Bitcoin and/or other cryptocurrencies succeed in getting widespread adoption, then it would seem that they would limit the ability of governments to encourage or discourage economic growth through the issuance of money.

Of course, some officials do not seem to worry too much. This attitude is summed up in a European Parliament report, published in 2018.

Decentralised ledger technology has enabled cryptocurrencies to become a new form of money that is privately-issued, digital and that permits peer-to-peer transactions. However, the current volume of transactions in such cryptocurrencies is still too small to make them serious contenders to replace official currencies. 

Underlying this are two factors. First, cryptocurrencies do not perform the role of money well, because their value is very volatile and they are thus not very good stores of value. Second, cryptocurrencies are managed in ways that are very primitive compared to what modern currencies require.

These shortcomings might be corrected in the future to increase the popularity and reach of cryptocurrencies. However, those that manage currencies, in other words monetary policymakers, cannot be outside any societal system of checks and balances.

For cryptocurrencies to replace official money, they would have to conform to the institutional set up that monitors and evaluates those who have the power to manage money.

They do not seem to be too worried, do they? However, cryptocurrency might eventually derail the newfound freedom that government economic policy makers have realized they have through MMT.

As we have seen in the past, new technologies can suddenly grow very fast and blindside public officials. As Roy Amara, past president of The Institute for the Future, said, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”.

© 2020 Norman Jacknis, All Rights Reserved

Are Computers Learning On Their Own?

To many people, the current applications of artificial intelligence, like your car being able to detect where a lane ends, seem magical. Although most of the significant advances in AI have been in supervised learning, it is the idea that the computer is making sense of the world on its own — unsupervised — which intrigues people more.

If you’ve read some about artificial intelligence, you may often see a distinction between supervised and unsupervised learning by machine. (There are other categories too, but these are the big two.)

In supervised learning, the machine is taught by humans what is right or wrong — for example, who did or did not default on a loan — and it eventually figures out what characteristics would best predict a default.

Another example is asking the computer to identify whether a picture shows a dog or a cat. In supervised learning, a person labels each picture and then the computer figures out the best way to distinguish between them — perhaps by whether the animal has floppy ears 😉
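As a toy sketch of that supervised setup, suppose each picture has been reduced to a single hypothetical “ear floppiness” score (the feature and all its values are invented here). The machine’s whole job is to learn a cutoff from the labeled examples:

```python
# Toy supervised learning: labeled examples in, decision rule out.
# Feature: a hypothetical "ear floppiness" score; label: "dog" or "cat".
examples = [
    (0.9, "dog"), (0.8, "dog"), (0.7, "dog"),
    (0.2, "cat"), (0.1, "cat"), (0.3, "cat"),
]

def train_threshold(data):
    """Pick the floppiness cutoff that best separates the labels."""
    best_cut, best_correct = None, -1
    for cut in sorted(score for score, _ in data):
        correct = sum(
            (label == "dog") == (score >= cut) for score, label in data
        )
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

cutoff = train_threshold(examples)
predict = lambda score: "dog" if score >= cutoff else "cat"
print(predict(0.85))  # → dog
```

Real systems learn far more complex rules over far more features, but the shape of the task is the same: humans supply the right answers, the machine finds the rule.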

Even though the machine gets quite good at this, the underlying model of the things that predict these results is often opaque. Indeed, one of the hot issues in analytics and machine learning these days is how humans can uncover and almost “reverse engineer” the model the machine is using.

In unsupervised learning, the computer has to figure out for itself how to divide a group of pictures or events or whatever into various categories. Then the next step is for the human to figure out what those categories mean. Since it is subject to interpretation, there is no truly accurate and useful way to describe the categories, although people try. That’s how we get psychographic categories in marketing or equivalent labels, like “soccer moms”.

Sometimes the results are easy for humans to figure out, but not exactly earth shattering, like in this cartoon.

https://twitter.com/athena_schools/status/1063013435779223553

In the case of the computer that is given a set of pictures of cats and dogs to determine what might be the distinguishing characteristics, we (people) would hope that computer would figure out that there are dogs and cats. But it might instead classify them based on size — small animals and big animals — or based on the colors of the animals.

This all sounds like it is unsupervised. Anything useful that the computer determines is thus part of the magic.

How Unsupervised Is Unsupervised Machine Learning?

Except, in some of the techniques of unsupervised learning, especially in cluster analysis, a person is asked to determine how many clusters or groups there might be. This too limits and supervises the learning by the machine. (Think about how much easier it is to be right in Who Wants To Be A Millionaire if the contestant can narrow down the choices to two.)
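A minimal sketch of that division of labor: the human supplies the number of clusters, and the machine groups the data on its own. This toy one-dimensional k-means uses invented “animal size” data:

```python
# A minimal 1-D k-means sketch: the human picks k (assumed >= 2 here);
# the machine finds the groups. The "animal size" data is invented.
def kmeans_1d(points, k, iterations=20):
    # Deterministic start: spread the initial centers across the data range.
    lo, hi = min(points), max(points)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

sizes = [4, 5, 6, 5, 40, 45, 50, 42]
print(kmeans_1d(sizes, k=2))  # → [5.0, 44.25]
```

Notice that the algorithm happily splits the data into the two groups it was told to find; deciding that those groups mean “small animals” and “large animals” is still a human job.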

Even more important, the computer can only learn from the data that it is given. It would have problems if pictures of a bunch of elephants or firetrucks were later thrown into the mix. Thus, the human being is at least partially supervising the learning and certainly limiting it.  The machine’s model is subject to the limitations and biases of the data that it learned on.

Truly unsupervised learning would occur the way that it does for children. They are let out to observe the world and learn patterns, often without any direct assistance from anyone else. Even with over-scheduling by helicopter parents, children can often freely roam the earth and discover new data and experiences.

Similarly, to have true unsupervised learning of machines, they would have to be able to travel and process the data they see.

At the beginning of his book Life 3.0, Max Tegmark weaves a sci-fi tale about a team that built an AI called Prometheus. While it wasn’t directly focused on unsupervised classification, Prometheus was unsupervised and learned on its own. It eventually learned enough to dominate all mankind. But even in this fantasy world, its unsupervised escape only enabled the AI machine to roam the internet, which is not quite the same thing as real life after all.

It is likely, for a while longer, that a significant portion of human behavior will occur outside of the internet 🙂

(And, as we saw with Microsoft’s chatbot Tay, an AI can also learn some unfortunate and incorrect things on the open internet.)

While not quite letting robots roam free in the real world, researchers at Stanford University’s Vision and Learning Lab “have developed iGibson, a realistic, large-scale, and interactive virtual environment within which a robot model can explore, navigate, and perform tasks.” (More about this at A Simulated Playground for Robots)

https://time.com/3983475/hitchbot-assault-video-footage/

A few years ago, there was HitchBOT, which hitchhiked across Canada and then set out to cross the US, although I don’t think that it added to its knowledge along the way, and it eventually met up with some nasty humans in Philadelphia.

Perhaps self-driving cars or walking robots will eventually be able to see the world freely as we do. Ford Motor Company’s proposed delivery robot roams around, but it is not really equipped for learning. The traveling, learning machine will likely require a lot more computing power and time than we currently use in machine learning.

Of course, there is also work on the computing part of the problem, as this July 21st headline shows, “Machines Can Learn Unsupervised ‘At Speed Of Light’ After AI Breakthrough, Scientists Say.” But that’s only the computing part of the problem and not the roaming around the world part.

These more recent projects are evidence that the AI researchers realize their models are not being built in a truly unsupervised way. Despite the hoped-for progress of these projects, for now, that is why data scientists need to be careful how they train and supervise a machine even in unsupervised learning mode.

© 2020 Norman Jacknis, All Rights Reserved

Why Being Virtually There Is Virtually There

If you work in a factory or somewhere else that requires you to touch things or people, the COVID shutdowns and social distancing have clearly been a difficult situation to overcome.

But it seems that the past few months have also been very trying for many people who worked in office settings before COVID set in.  The Brady Bunch meme captured this well.  However, to me, that’s less a reflection of reality than of a lack of imagination and experience.

I’m in the minority of folks who have worked remotely for more than ten years.  By now, I’ve forgotten some of the initial hiccups in doing that.  Also, the software, hardware and bandwidth have gotten so much better that the experience is dramatically better than when I started.

So, I’m a little flummoxed by some of what I hear from remote working newbies.  First off, of course, is the complaint that people can’t touch and hug their co-workers anymore.  Haven’t they been to training about inappropriate touching and how some of these physical interactions can come off as harassment?  Even if these folks were in the office, I doubt they would really be going around making physical contact with co-workers.

Then there is the complaint about how much can be missed in communication when conversations are limited to text messages and emails.  That complaint is correct.  But why is there an assumption that communication is limited to text?  If you had a meeting in a conference room or went to someone’s office for a talk, why can’t you do the same thing via videoconference?

(My own experience is that remote work requires video to be successful because of the importance of non-text elements of human communication.  That’s why I’m assuming that the virtual communication is often via video.)

In the office, you could drop by someone’s desk.  Users of Zoom and similar programs are often expected to schedule meetings, but that’s not a requirement.  You can turn on Zoom and, just like in an office, others can connect to you when you want.  They’ll see if you’re busy.  And, if you’re a really important person, you can set up a waiting room and let them in when you’re ready.

There is even a 21st century version of the 19th century partner desks, although it’s not new.  An example is the always-on Kubi, which has been around for a few years.

Perch, another startup, summarized the idea in this video a few years back.  Foursquare started using a video portal connecting their engineering teams on the two coasts eight years ago.  (A few months ago before COVID, a deal was reached to merge Foursquare with Factual.)

By the way, the physical office was no utopia of employee interaction.  A variety of studies, most famously the Allen Curve, showed a very large reduction in interaction when employees were even relatively short physical distances from each other.  With video, all your co-workers are just a click away.  While your interactions with the colleague at the next desk may be fewer (if you want), your interactions with lots of other colleagues on other floors can happen a lot more easily.

And then, despite evidence of increased productivity and employee happiness with remote work, there is the statement that it decreases innovation and collaboration.

Influential articles, like Workspaces That Move People in the October 2014 issue of the Harvard Business Review, declared that “chance encounters and interactions between knowledge workers improve performance.”

In the physical world, many companies interpreted this as a mandate for open office plans that removed doors and closed offices.  So how did that work out?

A later article – The Truth About Open Offices – in the November–December 2019 issue of the Harvard Business Review reported that, “when the firms switched to open offices, face-to-face interactions fell by 70%”.  (More detail can be found in the July 2018 Royal Society journal article on “The impact of the ‘open’ workspace on human collaboration”.)

The late Steve Jobs forcefully pushed the idea of serendipity through casual, random encounters of employees.  That idea was one of the design principles of the new Apple headquarters.  Now with COVID-driven remote work, some writers, like Tiernan Ray in ZDNET on June 24, 2020, are asking “Steve Jobs said Silicon Valley needs serendipity, but is it even possible in a Zoom world?”.

There is nothing inherent in video conferencing that diminishes serendipitous meetings.  Indeed, in the non-business world, there are websites that exist solely to connect strangers together completely at random, like Chatroulette and Omegle.

Without going into the problems those sites have had with inappropriate behavior, the same idea could be used in a different way to periodically connect via video conferencing two employees who otherwise haven’t met recently or at all.  Nor does that have to be completely random.  A company doing this could also use some analytics to determine which employees might be interested in talking with other employees that they haven’t connected with recently.  That would ensure serendipity globally, not just limited to the people who work in the same building.
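A sketch of how such serendipity pairing might work; the names and the log of recent meetings are invented for illustration:

```python
import random

# Serendipity pairing sketch: match employees who have not met recently.
# Names and the meeting log are hypothetical.
def serendipity_pairs(employees, recent_meetings, seed=None):
    rng = random.Random(seed)
    pool = employees[:]
    rng.shuffle(pool)
    pairs, unmatched = [], []
    while pool:
        person = pool.pop()
        # Find a partner this person has not met recently.
        partner = next(
            (p for p in pool if frozenset((person, p)) not in recent_meetings),
            None,
        )
        if partner:
            pool.remove(partner)
            pairs.append((person, partner))
        else:
            unmatched.append(person)
    return pairs

employees = ["Ana", "Ben", "Chen", "Dee"]
recent = {frozenset(("Ana", "Ben")), frozenset(("Chen", "Dee"))}
print(serendipity_pairs(employees, recent, seed=1))
```

A real system could weight the matching by interests or org-chart distance rather than pure chance, which is the “analytics” variation suggested above.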

It’s not that video conferencing is perfect, but there is still an underappreciation of how many virtual equivalents there are of typical office activities – and even less appreciation for some of the benefits of virtual connections compared to physical offices.

To me, the issue is one of a lag that I’ve seen before with technology.  I’ve called this horseless carriage thinking.  Sociologists call it a cultural lag.  As Ashley Crossman has written, this is

“what happens in a social system when the ideals that regulate life do not keep pace with other changes which are often — but not always — technological.”

Some people don’t yet realize and aren’t quite comfortable with what they can do.  For most, time and experience will educate them.

© 2020 Norman Jacknis, All Rights Reserved

A Budget That Copes With Reality

Five years ago, I wrote about the possibility of dynamic budgeting.  I was reminded of this again recently after reading Stephanie Kelton’s eye-opening new book, “The Deficit Myth”.

Her argument is that, since the U.S. dropped the gold standard and fixed exchange rates, it can create as much money as it wants.  The limit is not an illusory national debt number, but inflation.  And in an economy with less than full employment, inflation is not now an issue.  Her explanation of the capacity of the Federal government to spend leads to her suggestions for a more flexible approach to dealing with major economic and social issues.

Although Dr. Kelton was formerly the staff director for the Democrats on the Senate Budget Committee, she doesn’t devote many words to the tools used in budgeting.  However, the argument that she makes reminds me again that the traditional budget itself has to change, especially by shifting to a dynamic budget.

While states and localities are not in the same position as the Federal government, they also face unpredictable conditions and could benefit from a more flexible, dynamic budget.  Of course, in the face of COVID and economic retraction the necessity of re-allocating funds has become more obvious.

In an earlier blog, I wrote about a simple tax app that is now feasible and also eliminates the bumps in incentives that are caused by our current, old-fashioned tax bracket scheme.   This was not using some untested, cutting-edge technology.  Instead, the solution could use phones, tablets and laptops doing simple calculations that these devices have done for decades.
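As a sketch of the kind of calculation such an app might do (the rates and the curve here are invented, not a real tax proposal), the effective rate can rise smoothly with income, so there are no bracket thresholds to jump over:

```python
import math

# Hypothetical smooth tax: the effective rate glides from 10% toward 35%
# as income grows, with no bracket thresholds. Parameters are invented.
def smooth_tax(income, bottom=0.10, top=0.35, scale=100_000):
    rate = bottom + (top - bottom) * (1 - math.exp(-income / scale))
    return income * rate

for income in (50_000, 50_001, 200_000):
    print(income, round(smooth_tax(income), 2))
```

Because the rate is a continuous function of income, earning one more dollar always nets the earner money, which is the “bump” in incentives that stepped brackets can create.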

Similarly, what is now well-established technology could be used to overcome the problems with traditional fixed budgeting.  (By the way, the same applies to the budgets that corporations devise.)

So, what are the problems that everyone knows exist with budgets?

  1. They’re wrong the day they are approved since they are trying to predict precisely a future that cannot be known ahead of time. This error is made worse by the early deadlines in the typical budget process.  If you run a department, you are likely to be asked by the budget office to prepare estimates for what you’ll need in a period that will go as far as 18 or even 24 months into the future.
  2. It’s not clear how the estimates are derived. Typically, there are no underlying rules or models, just the addition of personnel and other basic costs that are adjusted from the last year.  This is despite the fact that some things are fairly well known.  For example, it is fairly straightforward to estimate the cost of paying unemployment to an average individual.  What is harder is to figure out how many unemployed people there will be – and, of course, you need to know the total number of unemployed and the average cost in order to compute the total amount of money needed.
  3. Given these problems, in practice during any given budget year, all kinds of exceptions and deviations occur in the face of reality. But the rest of the budget is not readjusted, although the budget staff will often hold back money that was approved as it takes from “Peter to pay Paul”.  The process often seems and is very arbitrary.

Operating in the real world, of course, requires continual adjustments.  Such adjustments can best be accommodated if the traditional fixed budget was replaced by a dynamic budget at the start of the budget process.

One way of doing this is familiar to almost every reader of this blog – the spreadsheet.  The cells in spreadsheets don’t always have hard fixed numbers, like fixed budgets.  Instead many of those spreadsheets have formulas.

And Congress could also set not so much the individual amounts for each agency or program as their relative priorities under different scenarios.  Thus, in a recession there would be a need for more unemployment insurance funding, but that would recede in the face of other priorities if the economy is booming.

To go back to the unemployment example, the amount needed in the budget would change as we get closer to the month being estimated and the estimates of the number of people who will be unemployed become more accurate.
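The unemployment example can be written as a spreadsheet-style formula that is simply re-evaluated whenever the forecast changes; all figures here are hypothetical:

```python
# A budget cell as a formula, not a fixed number. Figures are hypothetical.
def unemployment_budget(expected_claimants, avg_monthly_benefit):
    # Total monthly need = people expected to claim × average benefit.
    return expected_claimants * avg_monthly_benefit

# Early estimate, 18 months out:
print(unemployment_budget(200_000, 1_500))   # 300000000

# Re-evaluated closer to the month, with an updated forecast:
print(unemployment_budget(350_000, 1_500))   # 525000000
```

The formula stays fixed; only its inputs change, which is exactly how a spreadsheet cell behaves.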

Of course, the reader who knows my background won’t be surprised that I think the formulas in these cells could be derived by the use of some smart analytics and machine learning.  Ultimately, these methods could be enhanced with simulations – after all, what is a budget but an attempt to simulate a future period of financial needs?

More on that in another post sometime in the future.

© 2020 Norman Jacknis, All Rights Reserved

Words Matter In Building Intelligent Communities

The Intelligent Community Forum (ICF) is an international group of city, town and regional leaders as well as scholars and other experts who are focused on quality of life for residents and intelligently responding to the challenges and opportunities provided by a world and an economy that is increasingly based on broadband and technology.

To quote from their website: “The Intelligent Community Forum is a global network of cities and regions with a think tank at its center.  Its mission is to help communities in the digital age find a new path to economic development and community growth – one that creates inclusive prosperity, tackles social challenges, and enriches quality of life.”

Since 1999, ICF has held an annual contest and announced an award to intelligent communities that go through an extensive investigation and comparison to see how well they are achieving these goals.  Of hundreds of applications, some are selected for an initial, more in-depth assessment and become semi-finalists in a group called the Smart21.

Then the Smart21 are culled to a smaller list of the Top7 most intelligent communities in the world each year.  There are rigorous quantitative evaluations conducted by an outside consultancy, field trips, a review by an independent panel of leading experts/academic researchers and a vote by a larger group of experts.

An especially important part of the selection of the Top7 from the Smart21 is an independent panel’s assessment of the projects and initiatives that justify a community’s claim to being intelligent.

It may not always be clear to communities what separates these seven most intelligent communities from the rest.  After all, these descriptions are just words.  We understand that words matter in political campaigns.  But words matter outside of politics in initiatives, big and small, that are part of governing.

Could the words that leaders use be part of what separates successful intelligent initiatives from those of others who are less successful in building intelligent communities?

In an attempt to answer that question, I obtained and analyzed the applications submitted over the last ten years.  Then, using the methods of analytics and machine learning that I teach at Columbia University, I sought to determine if there was a difference in how the leaders of the Top7 described what they were doing in comparison with those who did not make the cut.

Although at a superficial level, the descriptions seem somewhat similar, it turns out that the leaders of more successful intelligent community initiatives did, indeed, describe those initiatives differently from the leaders of less successful initiatives.

The first significant difference was that the descriptions of the Top7 had more to say about their initiatives, since apparently they had more accomplishments to discuss.  Their descriptions had less talk about future plans and more about past successes.

In describing the results of their initiatives so far, they used numbers more often, providing greater evidence of those results.  Even though they were discussing technology-based or otherwise sometimes complex projects, they used more informal, less dense and less bureaucratic language.
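Differences like these (more numbers, less dense language) can be measured with simple text features. This sketch uses two invented sample sentences, not actual application text:

```python
import re

# Simple, interpretable text features of the sort that can separate more
# and less successful descriptions. The sample sentences are invented.
def text_features(text):
    words = text.split()
    # Count plain numbers, decimals, and percentages.
    numbers = re.findall(r"\d+(?:\.\d+)?%?", text)
    return {
        "numbers_per_100_words": 100 * len(numbers) / len(words),
        # Average word length as a rough proxy for dense, bureaucratic prose.
        "avg_word_length": round(sum(len(w) for w in words) / len(words), 1),
    }

formal = ("The municipality shall operationalize comprehensive "
          "intergovernmental broadband modernization frameworks.")
concrete = ("We connected 40 schools to fiber, and 95% of homes "
            "now have 100 Mbps service.")

print(text_features(formal))    # dense wording, no numbers
print(text_features(concrete))  # shorter words, concrete figures
```

Features like these can then feed a classifier; the point is that the signal is interpretable, not just a black-box score.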

Among the topics they emphasized, engagement and leadership, along with the technology infrastructure, stood out most. Less important, but also a differentiator: the more successful leaders emphasized the smart city, innovation and economic growth benefits.

For those leaders who wish to know what will gain them recognition for real successes in transforming their jurisdictions into intelligent communities, the results would indicate these simple rules:

  • Have and highlight a solid technology infrastructure.
  • True success, however, comes from extensive civic engagement, and from frequently mentioning that engagement and the role of civic leadership in moving the community forward.
  • Less bureaucratic formality and more stress on results (quantitative measures of outcomes) in their public statements is also associated with greater success in these initiatives.

On the other hand, a laundry list of projects that are not tied to civic engagement and necessary technology, particularly if those projects have no real track record, is not the path to outstanding success – even if they check off the six wide-ranging factors that the ICF expects of intelligent communities.

While words do matter, it is also true that other factors can impact the success or failure of major public initiatives.  However, these too can be added into the models of success or failure, along with the results of the textual analytics.

Overall, the results of this analysis can help public officials understand a little better how they need to think about what they are doing and then properly describe it to their citizens and others outside of their community.  This will help them to be more successful, most importantly for their communities, and, if they wish, in the ICF awards process as well.

© 2020 Norman Jacknis, All Rights Reserved

Is It 1832 Or 2020? Virtual Convention Or Something New?

In these blogs, I’ve often noted how people seem wedded to old ways of thinking, even when those old ways are dressed up in new clothes.

Despite all the technology around us, it’s amazing how little some things have changed.  Too often, today seems like it was 120 years ago when people talked and thought about “horseless carriages” rather than the new thing that was possible – the car with all the possibilities it opened.

So it was with interest that I read this recent story – “Democrats confirm plans for nearly all-virtual convention”.

“Democrats will hold an almost entirely virtual presidential nominating convention Aug. 17-20 in Milwaukee using live broadcasts and online streaming, party officials said Wednesday.”

Party conventions have been around since 1832.  They were changed a little bit when they went on radio and then later on television.  But mostly they have always been filled with lots of people hearing speeches, usually from the podium.

Following in this tradition going back to 1832, the Democratic Party is going to have a convention, but we can’t have lots of people gathered together with COVID-19.  This one will be “a virtual convention in Milwaukee” which seems like a contradiction – something that is both virtual but is happening in a physical place?  I guess it only means that Joe Biden will be in Milwaukee along with the convention officials to handle procedures.

Indeed, it’s not entirely clear what this convention will look like.  In addition to the main procedures in Milwaukee, the article indicates that “Democrats plan other events in satellite locations around the country to broadcast as part of the convention”.  I assume that will be similar.

“Kirshner knows how it’s done: He has produced every Democratic national convention since 1992.”

Hopefully this will be different from every convention since 1832 – or even 1992!

Instead of the standard speeches on the screen – or other activities that are just video of something that could occur on stage – do something that is more up-to-date.  This would show that Biden will not only be a different kind of President from Trump, but that he also knows how to lead us into the future.

Why not do something that takes advantage of not having to be in a convention hall?

For example, how about a walk (or drive, if necessary) through the speaker’s neighborhood (masks on) explaining what the problems are and what Biden wants to do about those problems?

My suggestions are limited since creative arts are not my specialty, but I do see an opportunity to do something different.  It is a good guess that Hollywood is also eager to help defeat Trump and would offer all kinds of innovative assistance.  Make it an illustration of American collaboration at its best.

This should not be an unusual idea for the Biden organization.  Among his top advisors are Zeppa Kreager, his Chief of Staff, formerly the Director of the Creative Alliance (part of Civic Nation), and Kate Bedingfield, Deputy Campaign Manager and Communications Director, formerly Vice President at Monumental Sports and Entertainment.

Of course, the Trump campaign could take the same approach, but they do not seem interested and Trump obviously adores a large in-person audience.  So there is a real opportunity for Biden to differentiate himself.

Beyond the short-term electoral considerations, this would also make political history by setting a new pattern for political conventions.

© 2020 Norman Jacknis, All Rights Reserved

Trump And Cuomo COVID-19 Press Conferences

Like many other people who have been watching the COVID-19 press conferences held by Trump and Cuomo, I came away with a very different feeling from each.  Beyond the obvious policy and partisan differences, I felt there is something more going on.

Coincidentally, I’ve been doing some research on text analytics/natural language processing on a different topic.  So, I decided to use these same research tools on the transcripts of their press conferences from April 9 through April 16, 2020.  (Thank you to the folks at Rev.com for making available these transcripts.)

One of the best approaches is known by its initials, LIWC (Linguistic Inquiry and Word Count), created some time ago by Pennebaker and colleagues to assess the psycho-social dimensions of texts.  It’s worth noting that this assessment is based purely on the text – their words – and doesn’t include non-verbal communication, like body language.
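For readers curious about the mechanics, here is a minimal sketch of how LIWC-style scoring works: count what fraction of a transcript’s words fall into hand-built psychological categories. The category word lists below are tiny illustrative stand-ins of my own, not the actual LIWC dictionaries (which are proprietary and far larger), and the sample transcript is invented.

```python
import re

# Tiny illustrative stand-ins for LIWC-style category dictionaries;
# the real LIWC lexicons are proprietary and far larger.
CATEGORIES = {
    "positive_emotion": {"nice", "great", "good", "tremendous", "beautiful"},
    "anxiety": {"worried", "fear", "afraid", "anxious", "risk"},
    "money": {"money", "dollars", "economy", "pay", "cost"},
    "family": {"family", "home", "mother", "father", "son"},
}

def liwc_style_scores(text):
    """Return each category's share of the text's words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: 100.0 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

sample = "The economy is great and people will have money. It is a nice day."
scores = liwc_style_scores(sample)
```

Comparing two speakers is then just a matter of running their transcripts through the same scorer and looking at the ratios between their category percentages.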

While there were some unsurprising results to people familiar with both Trump and Cuomo, there are also some interesting nuances in the words they used.

Here are the most significant contrasts:

  • The most dramatic distinction between the two had to do with emotional tone. Trump’s words had almost twice the emotional content of Cuomo’s, including words like “nice”, although perhaps the use of that word should not be taken at face value.
  • Trump also spoke of rewards/benefits and money about 50% more often than Cuomo.
  • Trump emphasized allies and friends about twenty percent more often than Cuomo.
  • Cuomo used words that evoked health, anxiety/pain, home and family two to three times more often than Trump.
  • Cuomo asked more than twice as many questions, although some of these could be sort of rhetorical – like “what do you think?”
  • However, Trump was 50% more tentative in his declarations than Cuomo, while Cuomo expressed greater certainty.
  • While both men spoke in the present tense much more than the future, Cuomo’s use of the present was greater than Trump’s. On the other hand, Trump’s use of the future and past tenses was greater than Cuomo’s.
  • Trump used “we” a little more often than Cuomo and much more than he used “you”. Cuomo used “you” between two and three times more often than Trump.  Trump’s use of “they” even surpassed his use of “you”.

Distinctions of this kind are never crystal clear, even with sophisticated text analytics and machine learning algorithms.  The ambiguity of human speech is not just a problem for machines, but also for people communicating with each other.

But these comparisons from text analytics do provide some semantic evidence for the comments by non-partisan observers that Cuomo seems more in command.  This may be because the features of his talks would seem to better fit the movie portrayal and the average American’s idea of leadership in a crisis – calm, compassionate, focused on the task at hand.

© 2020 Norman Jacknis, All Rights Reserved

Robots Just Want To Have Fun!

There are dozens of novels about dystopic robots – our future “overlords”, as they are portrayed.

In the news, there are many stories about robots and artificial intelligence that focus on important business tasks. Those are the tasks that have people worried about their future employment prospects. But that stuff is pretty boring if it’s not your own field.

Anyway, while we are only beginning to try to understand the implications of artificial intelligence and robotics, robots are developing rapidly and going beyond those traditional tasks.

Robots are also showing off their fun and increasingly creative side.

Welcome to the age of the “all singing, all dancing” robot. Let’s look at some examples.

Dancing

Last August, there was a massive robot dance in Guangzhou, China. It achieved a Guinness World Record for the “most robots dancing simultaneously”. See https://www.youtube.com/watch?v=ouZb_Yb6HPg or http://money.cnn.com/video/technology/future/2017/08/22/dancing-robots-world-record-china.cnnmoney/index.html

Not to be outdone, at the Consumer Electronics Show in Las Vegas, a strip club had a demonstration of robots doing pole dancing. The current staff don’t really have to worry about their jobs just yet, as you can see at https://www.youtube.com/watch?v=EdNQ95nINdc

Music

Jukedeck, a London startup/research project, has been using AI to produce music for a couple of years.

The Flow Machines project in Europe has also been using AI to create music in the style of more famous composers. See, for instance, its DeepBach, “a deep learning tool for automatic generation of chorales in Bach’s style”. https://www.youtube.com/watch?time_continue=2&v=QiBM7-5hA6o

Singing

Then there’s Sophia, Hanson Robotics’ famous humanoid. While there is controversy about how much intelligence Sophia has – see, for example, this critique from earlier this year – she is nothing if not entertaining. So, the world was treated to Sophia singing at a festival three months ago – https://www.youtube.com/watch?v=cu0hIQfBM-w#t=3m44s

Also, last August, there was a song composed by AI, although sung by a human – https://www.youtube.com/watch?v=XUs6CznN8pw&feature=youtu.be

There is even AI that will generate poetry – um, song lyrics.

Marjan Ghazvininejad, Xing Shi, Yejin Choi and Kevin Knight of USC and the University of Washington built Hafez, a system for generating topical poetry on a requested subject, like this one called “Bipolar Disorder”:

Existence enters your entire nation.
A twisted mind reveals becoming manic,
An endless modern ending medication,
Another rotten soul becomes dynamic.

Or under pressure on genetic tests.
Surrounded by controlling my depression,
And only human torture never rests,
Or maybe you expect an easy lesson.

Or something from the cancer heart disease,
And I consider you a friend of mine.
Without a little sign of judgement please,
Deliver me across the borderline.

An altered state of manic episodes,
A journey through the long and winding roads.

Not exactly upbeat, but you could well imagine this being a song too.

Finally, there is even the HRP-4C (Miim), which has been under development in Japan for years. Here’s her act –  https://www.youtube.com/watch?v=QCuh1pPMvM4#t=3m25s

All singing, all dancing, indeed!

© 2018 Norman Jacknis, All Rights Reserved

More Than A Smart City?

The huge Smart Cities New York 2018 conference started today. It is billed as:

“North America’s leading global conference to address and highlight critical solution-based issues that cities are facing as we move into the 21st century. … SCNY brings together top thought leaders and senior members of the private and public sector to discuss investments in physical and digital infrastructure, health, education, sustainability, security, mobility, workforce development, to ensure there is an increased quality of life for all citizens as we move into the Fourth Industrial Revolution.”

A few hours ago, I helped run an Intelligent Community Forum Workshop on “Future-Proofing Beyond Tech: Community-Based Solutions”. I also spoke there about “Technology That Matters”, which this post will quickly review.

As with so much of ICF’s work, the key question for this part of the workshop was: Once you’ve laid down the basic technology of broadband and your residents are connected, what are the next steps to make a difference in residents’ lives?

I have previously focused on the need for cities to encourage their residents to take advantage of the global opportunities in business, education, health, etc. that become possible when you are connected to the whole world.

Instead, in this session, I discussed six steps that are more local.

1. Apps For Urban Life

This is the simplest first step and many cities have encouraged local or not-so-local entrepreneurs to create apps for their residents.

But many cities that are not as large as New York are still waiting for those apps. I gave the example of Buenos Aires as a city that didn’t wait and built more than a dozen of its own apps.

I also reminded attendees that there are many potential, useful apps for their residents that cannot generate enough profit to interest the private sector, so governments will have to create these apps on their own.

2. Community Generation Of Urban Data

While some cities have posted their open data, there is much data about urban life that the residents can collect. The most popular example is the community generation of environmental data, with such products as the Egg, the Smart Citizen Kit for Urban Sensing, the Sensor Umbrella and even more sophisticated tools like Placemeter.

But the data doesn’t just have to be about the physical environment. The US National Archives has been quite successful in getting citizen volunteers to generate data – and meta-data – about the documents in its custody.

The attitude which urban leaders need is best summarized by Professor Michael Batty of the University College London:

“Thinking of cities not as smart but as a key information processor is a good analogy and worth exploiting a lot, thus reflecting the great transition we are living through from a world built around energy to one built around information.”

3. The Community Helps Make Sense Of The Data

Once the data has been collected, someone needs to help make sense of it. This effort too can draw upon the diverse skills in the city. Platforms like Zooniverse, with more than a million volunteers, are good examples of what is called citizen science. For the last few years, there has been OpenData Day around the world, in which cities make available their data for analysis and use by techies. But I would go further and describe this effort as “popular analytics” – the virtual collaboration of both government specialists and residents to better understand the problems and patterns of their city.

4. Co-Creating Policy

Once the problems and opportunities are better understood, it is time to create urban policies in response.  With the foundation of good connectivity, it becomes possible for citizens to conveniently participate in the co-creation of policy. I highlighted examples from the citizen consultations in Lambeth, England to those in Taiwan, as well as the even more ambitious CrowdLaw project that is housed not far from the Smart Cities conference location.

5. Co-Production Of Services

Next comes the execution of policy. As I’ve written before, public services do not necessarily always have to be delivered by paid civil servants (or even better paid companies with government contracts). The residents of a city can help be co-producers of services, as exemplified in Scotland and New Zealand.

6. Co-Creation Of The City Itself

Obviously, the people who build buildings or even tend to gardens in cities have always had a role in defining the physical nature of a city. What’s different in a city that has good connectivity is the explosion of possible ways that people can modify and enhance that traditional physical environment. Beyond even augmented reality, new spaces that blend the physical and digital can be created anywhere – on sidewalks, walls, even in water spray. And the residents can interact and modify these spaces. In that way, the residents are constantly co-creating and recreating the urban environment.

The hope of ICF is that the attendees at Smart Cities New York start moving beyond the base notion of a smart city to the more impactful idea of an intelligent city that uses all the new technologies to enhance the quality of life and engagement of its residents.

© 2018 Norman Jacknis, All Rights Reserved

When Strategic Thinking Needs A Refresh

This year I created a new, week-long, all-day course at Columbia University on Strategy and Analytics. The course focuses on how to think about strategy both for the organization as a whole as well as the analytics team. It also shows the ways that analytics can help determine the best strategy and assess how well that strategy is succeeding.

In designing the course, it was apparent that much of the established literature in strategy is based on ideas developed decades ago. Michael Porter, for example, is still the source of much thinking and teaching about strategy and competition.

Perhaps a dollop of Christensen’s disruptive innovation might be added into the mix, although that idea is no longer new. Worse, the concept has become so popularly diluted that too often every change is mistakenly treated as disruptive.

Even the somewhat alternative perspective described in the book “Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant” is now more than ten years old.

Of the well-established business “gurus”, perhaps only Gary Hamel has adjusted his perspective in this century – see, for example, this presentation.

But the world has changed. Certainly, the growth of huge Internet-based companies has highlighted strategies that do not necessarily come out of the older ideas.

So, who are the new strategists worthy of inclusion in a graduate course in 2018?

The students were exposed to the work of fellow faculty at Columbia University, especially Leonard Sherman’s “If You’re in a Dogfight, Become a Cat! – Strategies for Long-Term Growth” and Rita Gunther McGrath’s “The End Of Competitive Advantage: How To Keep Your Strategy Moving As Fast As Your Business”.

But in this post, the emphasis is on strategic lessons drawn from this century’s business experience with the Internet, including multi-sided platforms and digital content traps. For that there is “Matchmakers – the new economics of multisided platforms” by David S. Evans and Richard Schmalensee, along with Bharat Anand’s “The Content Trap: A Strategist’s Guide to Digital Change”.

For Porter and other earlier thinkers, the focus was mostly on the other players that they were competing against (or decided not to compete against). For Anand, the role of the customer and the network of customers becomes more central in determining strategy. For Evans and Schmalensee, getting a network of customers to succeed is not simple and requires a different kind of strategic framework than industrial competition.

Why emphasize these two books? It might seem that they only focus on digital businesses, not the traditional manufacturers, retailers and service companies that previous strategists focused on.

But many now argue that all businesses are digital, just to varying degrees. For the last few years we’ve seen the repeated headline that “every business is now a digital business” (or some minor variation) from Forbes, Accenture, the Wharton School of the University of Pennsylvania, among others you may not have heard of. And about a year ago, we read that “Ford abruptly replaces CEO to target digital transformation”.

Consider then the case of GE, one of the USA’s great industrial giants, which offers a good illustration of the situation facing many companies. A couple of years ago, it expressed its desire to “Become a Digital Industrial Company”. Last week, Steve Lohr of the New York Times reported that “G.E. Makes a Sharp ‘Pivot’ on Digital” because of its difficulty making the transition to digital and especially making the transition a marketing success.

At least in part, the company’s lack of success could be blamed on its failure to fully embrace the intellectual shift from older strategic frameworks to the more digital 21st century strategy that thinkers like Anand, Evans and Schmalensee describe.

© 2018 Norman Jacknis, All Rights Reserved

Are Any Small Towns Flourishing?

We hear and read how the very largest cities are growing, attractive places for millennials and just about anyone who is not of retirement age. The story is that the big cities have had almost all the economic gains of the last decade or so, while the economic life has been sucked out of small towns and rural areas.

The images above are what seem to be in many minds today — the vibrant big city versus the dying countryside.

Yet, we are in a digital age when everyone is connected to everyone else on the globe, thanks to the Internet. Why hasn’t this theory of economic potential from the Internet been true for the countryside?

Well, it turns out that it is true. Those rural areas that do in fact have widespread access to the Internet are flourishing. These towns with broadband are exemplary, but unfortunately not the majority of towns.

Professor Roberto Gallardo of the Purdue Center for Regional Development has dug deep into the data about broadband and growth. The results have recently been published in an article that Robert Bell and I helped write. You can see it below.

So, the implication of the image above is half right — this is a life-or-death issue for many small towns. The hopeful note is that those with broadband and the wisdom to use it for quality of life will not die in this century.

© 2018 Norman Jacknis, All Rights Reserved


[This article is republished from the Daily Yonder, a non-profit media organization that specializes in rural trends and thus fills the vacuum of news coverage about the countryside.]

When It Comes to Broadband, Millennials Vote with Their Feet

By Roberto Gallardo — Robert Bell — Norman Jacknis

April 11, 2018

When they live in remote rural areas, millennials are more likely to reside in a county that has better digital access. The findings could indicate that the digital economy is helping decentralize the economy, not just clustering economic change in the cities that are already the largest.

Sources: USDA; Pew Research; US Census Bureau; Purdue Center for Regional Development. This graph shows that the number of Millennials and Gen Xers living in the nation’s most rural counties is on the increase in counties with a low “digital divide index.” The graph splits the population of “noncore” (or rural) counties into three different generations. Then, within each generation, the graph looks at population change based on the Digital Divide Index. The index measures the digital divide using two sets of criteria, one that looks at the availability and adoption of broadband and another set that looks at socio-economic factors such as income and education levels that affect broadband use. Counties are split into five groups or quintiles based on the digital divide index, with group №1 (orange) having the most access and №5 (green) having the least.

Cities are the future and the countryside is doomed, as far as population growth, jobs, culture and lifestyle are concerned. Right?

Certainly, that is the mainstream view expressed by analysts at organizations such as Brookings. This type of analysis says the “clustering” of business that occurred during the industrial age will only accelerate as the digital economy takes hold. This argument says digital economies will only deepen and accelerate the competitive advantage that cities have always had in modern times.

But other pundits and researchers argue that the digital age will result in “decentralization” and a more level playing field between urban and rural. Digital technologies are insensitive to location and distance and potentially offer workers a much greater range of opportunities than ever before.

The real question is whether rural decline is inevitable or if the digital economy has characteristics that are already starting to write a different story for rural America. We have recently completed research suggesting that it does.

Millennial Trends

While metro areas still capture the majority of new jobs and population gains, there is some anecdotal evidence pointing in a different direction. Consider a CBS article that notes how, due to high housing costs, horrible traffic, and terrible work-life balances, Bend, Oregon, is seeing an influx of teleworkers from Silicon Valley. The New York Times has reported on the sudden influx of escapees from the Valley that is transforming Reno, Nevada — for good or ill, it is not yet clear.

Likewise, a Fortune article argued that “millennials are about to leave cities in droves” and the Telegraph mentioned “there is a great exodus going on from cities” in addition to Time magazine reporting that the millennial population of certain U.S. cities has peaked.

Why millennials? Well, dubbed the first digital-native generation, their migration patterns could indicate the beginning of a digital age-related decentralization.

An Age-Based Look at Population Patterns

In search of insight, we looked at population change among the three generations that make up the entire country’s workforce: millennials, generation X, and baby boomers.

First, we defined each generation. Table 1 shows the age ranges of each generation according to the Pew Research Center, both in 2010 and 2016, as well as the age categories used to measure each generation. While not an exact match, categories are consistent across years and geographies.

In addition to looking at generations, we used the Office of Management and Budget core-based typology to control by county type (metropolitan, small city [micropolitan], and rural [noncore]). To factor in how digital access affects local economies, we used the Digital Divide Index. The DDI, developed by the Purdue Center for Regional Development, ranges from zero to 100. The higher the score, the higher the digital divide. There are two components to the Digital Divide Index: 1) broadband infrastructure/adoption and 2) socioeconomic characteristics known to affect technology adoption.
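To illustrate the kind of grouping this analysis relies on, here is a small sketch of splitting counties into DDI quintiles and computing population change per quintile. The county figures below are made up for illustration only; they are not the actual Census counts or DDI scores used in the study.

```python
import pandas as pd

# Made-up sample data for ten hypothetical rural counties -- not the
# actual Census or Digital Divide Index figures used in the study.
counties = pd.DataFrame({
    "county": list("ABCDEFGHIJ"),
    "ddi": [12, 25, 33, 41, 48, 55, 62, 70, 81, 93],
    "millennials_2010": [900, 850, 800, 780, 760, 740, 720, 700, 680, 660],
    "millennials_2016": [960, 880, 810, 775, 750, 725, 700, 675, 650, 620],
})

# Split counties into five equal-sized groups (quintiles) by DDI score:
# quintile 1 = lowest divide (best access), quintile 5 = highest divide.
counties["ddi_quintile"] = pd.qcut(counties["ddi"], 5, labels=[1, 2, 3, 4, 5])

# Percent change in the millennial population, 2010-2016, per quintile.
sums = counties.groupby("ddi_quintile", observed=True)[
    ["millennials_2010", "millennials_2016"]].sum()
pct_change = 100 * (sums["millennials_2016"] - sums["millennials_2010"]) \
    / sums["millennials_2010"]
```

With real data, the same `qcut`-then-`groupby` pattern produces the quintile comparisons shown in the figures, repeated separately for each generation and county type.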

Looking at overall trends, it does not look like the digital age is having a decentralization effect. Indeed, according to data from the economic modeling service Emsi, the U.S. added 19.4 million jobs between 2010 and 2016. Of these, 94.6 percent were located in metropolitan counties, compared to only 1.6 percent in rural counties.

Population growth tells a similar story. Virtually the entire growth in U.S. population of 14.4 million between 2010 and 2016 occurred in metropolitan counties, according to the Census Bureau. The graph below (Figure 1) shows the total population change overall and by generation and county type. As expected, the number of baby boomers (far right side of the graph) is falling across all county types while millennials and generation x (middle two sets of bars) are growing only in metro counties.

But there is a different story. When looking at only rural counties (what the OMB classification system calls “noncore”) divided into five equal groups or quintiles based on their digital divide (1 = lowest divide while 5 = highest divide), the figure at the very top of this article shows that rural counties experienced an increase in millennials where the digital divide was lowest. (The millennial population grew by 2.3 percent in rural counties where the digital divide was the lowest.) Important to note is that this same pattern occurs in metropolitan and small city counties as well.

Impact on the “Really Rural” County

“Urban” and “rural” can be tricky terms when it comes to demographics. The Census Bureau reports that 80% of the population lives in urban areas. Seventy-five percent of those “urban” areas, however, are actually small towns with populations of under 20,000. They are often geographically large, with a population density that falls off rapidly once you leave the center of town.

On the other hand, some rural counties are adjacent to metro areas and may benefit disproportionately from their location or even be considered metropolitan due to their commuting patterns. Because of this, we turned to another typology developed by the U.S. Department of Agriculture Economic Research Service that groups counties into nine types ranging from large metro areas to medium size counties adjacent to metro areas to small counties not adjacent to metro areas.

Figure 3 (below) shows counties considered completely rural or with an urban population of less than 2,500, not adjacent to a metro area. Among these counties, about 420 in total, those with the lowest digital divide experienced a 13.5 percent increase in millennials between 2010 and 2016. In other words, in the nation’s “most rural” counties, the millennial population increased significantly when those counties had better broadband access.

Sources: USDA; Pew Research; US Census Bureau; Purdue Center for Regional Development. This graph shows population change by generation and “DDI” quintile in the nation’s most rural counties (rural counties that are farthest from metropolitan areas). In rural counties with the best digital access (a low digital divide index), the number of Millennials and Gen Xers increased.

The New Connected Countryside: A Work in Progress

To conclude, if you just look at the overall numbers, our population seems to be behaving just as it did in the industrial age — moving to cities where jobs and people are concentrated. Rural areas that lag in broadband connectivity and digital literacy will continue to suffer from these old trends.

However, the digital age is young and its full effects are still to be felt. Remember that it took several decades for electricity or the automobile to revolutionize society. Many areas outside metros still lag in broadband connectivity and digital literacy, limiting their ability to leverage the technology to improve their quality of life and potentially reverse migration trends.

Whether or not decentralization will take place remains to be seen. What is clear, though, is that (while other factors have an impact as well) any community attempting to retain or attract millennials needs to address its digital divide, both in terms of broadband access and adoption/use.

In other words, our data analysis suggests that if a rural area has widely available and adopted broadband, it can start to successfully attract or retain millennials.

Roberto Gallardo is assistant director of the Purdue Center for Regional Development and a senior fellow at the Center for Rural Strategies, which publishes the Daily Yonder. Robert Bell is co-founder of the Intelligent Community Forum. Norman Jacknis is a senior fellow at the Intelligent Community Forum and on the faculty of Columbia University.

Too Many Unhelpful Search Results

This is a brief follow-up to my last post about how librarians and artificial intelligence experts can get us all beyond mere curation and our frustrations using web search.

In their day-to-day Google searches, many people end up frustrated. But they assume that the problem is their own lack of expertise in framing the search request.

In these days of advancing natural language algorithms, that isn’t a very good explanation for users or a good excuse for Google.

We all have our own favorite examples, but here’s mine because it directly speaks to lost opportunities to use the Internet as a tool of economic development.

Imagine an Internet marketing expert who has an appointment with a local chemical engineering firm to make a pitch for her services and help them grow their business. Wanting to be prepared, she goes to Google with a simple search request: “marketing for chemical engineering firms”. Pretty simple, right?

Here’s what she’ll get:

She’s unlikely to live long enough to read all 43,100,000+ hits, never mind reading them before her meeting. And, aside from an ad on the right from a possible competitor, there’s not much in the list of non-advertising links that will help her understand the marketing issues facing a potential client.

This is not how the sum of all human knowledge – i.e., the Internet – is supposed to work. But it’s all too common.

This is the reason why, in a knowledge economy, I place such a great emphasis on deep organization, accessibility and relevance of information.

© 2017 Norman Jacknis, All Rights Reserved

Broadband Networks & NYC Subways

[This was originally published on June 20, 2011 and it was posted on a blog for government leaders, October 12, 2009.]

Many governments around the world are struggling to find the best method to get broadband networks created within their areas.  (Maybe it is the USA which is especially struggling.)

I thought about some historical precedents for major local infrastructure projects.  While the US Interstate Highway system is often cited as such a precedent, it falls short of representing the current debate because no one proposed in the 1950s that we should “let the private sector do it.”

But the huge New York City rail transit system is perhaps a better historical analogy.  It is important to note that the way the current system operates – as a single government owned and operated system – is not how it started or operated for many of its early years.

It seems that New York City government used every possible method including:

  • Let private companies own, build and run mass transit lines.  (Then take them over when they fail – due to underlying economic properties of such infrastructure which makes them more like public goods than private goods that can sustain a profit.)
  • Own the rights to the transit line yourself, but let a private company build and operate it.
  • Build the transit line yourself, but let a private company operate it.
  • Build the transit line and also run it.
  • Fake it – act as if a new transit line is going to be run and built by a private company, but do it yourself when no private company does so.

One other aspect of this history is of interest: the “dual contracts,” which allowed more than one rail operator to use the same tracks. This is analogous to the open network approach in today’s broadband world – whether the fiber backbone of broadband networks should be open to all users.

This opportunistic strategy perhaps made it easier and quicker for New York City to bring its great transit system to life.  Of course, eventually, this same lack of coherence created future problems and inefficiencies.  And by the time the great expansion of transit lines was finished, the government ended up owning and operating the whole system and sporadically filling some of the remaining unserved areas.

Was the trade-off of a fast growth opportunistic strategy against longer term problems worth it?  Given the success and the role that the subways have played in New York City’s development, the answer is likely yes.

I’ve combined excerpts from a couple of different sources (especially the now-ubiquitous Wikipedia) to highlight some aspects of that system’s history. …

———————–

History of the New York City Subway

The beginnings of the Subway came from various excursion railroads to Coney Island and elevated railroads in Manhattan and Brooklyn. At that time, New York County (Manhattan Island and part of the Bronx), Kings County (including the Cities of Brooklyn and Williamsburg) and Queens County were separate political entities.

In New York, competing steam-powered elevated railroads were built over major avenues. The first elevated line was constructed in 1867-70 by Charles Harvey and his West Side and Yonkers Patent Railway company along Greenwich Street and Ninth Avenue (although cable cars were the initial mode of transportation on that railway). Later more lines were built on Second, Third and Sixth Avenues. None of these structures remain today, but these lines later shared trackage with subway trains as part of the IRT system.

In Kings County [Brooklyn], elevated railroads were also built by several companies. These also later shared trackage with subway trains, and even operated into the subway, as part of the BRT and BMT. These lines were linked to Manhattan by various ferries and later the tracks along the Brooklyn Bridge (which originally had their own line, and were later integrated into the BRT/BMT).  Also in Kings County, six steam excursion railroads were built to various beaches in the southern part of the county; all but one eventually fell under BMT control.

In 1898, New York, Kings and Richmond Counties, and parts of Queens and Westchester Counties and their constituent cities, towns, villages and hamlets were consolidated into the City of Greater New York. During this era the expanded City of New York resolved that it wanted the core of future rapid transit to be underground subways, but realized that no private company was willing to put up the enormous capital required to build beneath the streets.

The City decided to issue rapid transit bonds outside of its regular bonded debt limit and build the subways itself, and contracted with the IRT (which by that time ran the elevated lines in Manhattan) to equip and operate the subways, sharing the profits with the City and guaranteeing a fixed five-cent fare.

The Interborough Rapid Transit (IRT) subway opened in 1904. The city contracted construction of the line to the IRT Company, but ownership was always held by the city. The IRT built, equipped, and operated the line under a lease from the city. The IRT also leased the Manhattan Railway elevated lines in Manhattan and the Bronx for 999 years!

In Brooklyn, the various elevated railroads and many of the surface steam railroads, as well as most of the trolley lines, were consolidated under the BRT. Some improvements were made to these lines at company expense during this era.  The Brooklyn-Manhattan Transit (BMT), formerly the Brooklyn Rapid Transit (BRT), was the rapid transit company that built, bought, or assumed control of the Brooklyn elevated lines.

The BRT, which just barely entered Manhattan via the Brooklyn Bridge, wanted the opportunity to compete with the IRT, and the IRT wanted to extend its Brooklyn line to compete with the BRT. This led to the City’s agreeing to contract for future subways with both the BRT and IRT.  The expansion of rapid transit was greatly facilitated by the signing of the Dual Contracts in 1913. Finished mostly by 1920, some of the new lines had trains operated by both companies.

The majority of the present-day subway system was either built or improved under [four sequential] contracts to the IRT and BRT.

The City, bolstered by political claims that the private companies were reaping profits at taxpayer expense, determined that it would build, equip and operate a new system itself, with private investment and without sharing the profits with private entities. This led to the building of the Independent City-Owned Subway (ICOS), sometimes called the Independent Subway System — that was not connected to the IRT or BMT lines. This system consisted almost entirely of subway construction, with only one elevated portion.

As the first line neared completion, New York City offered it for private operation as a formality, knowing that no operator would meet its terms. Thus the city declared that it would operate it itself, formalizing a foregone conclusion. The first line opened without a formal ceremony.

Only two new lines were opened [later], the IRT Dyre Avenue Line (1941) and the IND Rockaway Line (1956). Both of these lines were rehabilitations of existing railroad rights-of-way rather than new construction.

In June 1940, the transportation assets of the former BMT and IRT systems were taken over by the City of New York for operation by the City’s Board of Transportation, which already operated the IND system.  After city takeover of the bankrupt BMT and IRT companies, many of the elevated lines were closed, and a slow “unification” took place, marked notably by establishment of several free transfer points between divisions in 1948 and a few points of through running between IND and BMT lines beginning in 1954.

A combination of factors had this takeover coincide with the end of the major rapid transit building eras in New York City. The City immediately began to eliminate what it considered redundancy in the system, closing several elevated lines.

[But] Because the early subway systems competed with each other, they tended to cover the same areas of the city, leading to much overlapping service. The amount of service has actually decreased since the 1940s as many elevated railways were torn down, and finding funding for underground replacements has proven difficult.

Despite the unification, a distinction between the three systems survives in the service labels: IRT lines (now referred to as A Division) have numbers and BMT/IND lines (now collectively B Division) use letters. There is also a more physical but less obvious difference: Division A cars are narrower than those of Division B by 18 inches (~45 cm) and shorter by 9 to 24 feet (~2.7 to 7.3 m).  A BMT/IND-style train cannot fit into an IRT tunnel (the numbered lines and the 42nd Street Shuttle). An IRT train CAN fit into a BMT/IND tunnel, but since it is narrower, the distance from car to platform is unsafe. Cars from the IRT division are regularly moved over BMT/IND tracks to the Coney Island Overhaul Shops for major maintenance.

Division B equipment could operate on much of Division A if station platforms were trimmed and trackside furniture moved, and being able to do so would increase the capacity of Division A. However, there is virtually no chance of this happening, because the portions of Division A that could not accommodate Division B equipment without major physical reconstruction are situated in such a way that it would be impossible to put together coherent through services.

© 2011 Norman Jacknis

Gold Mining

[Published 6/18/2011 and originally posted for government leaders, July 6, 2009]

My last posting was about the “goldmine” that exists in the information your government collects every day. It’s a goldmine because this data can be analyzed to determine how to save money by learning what policies and programs work best. Some governments have the internal skills to do this kind of sophisticated analysis or they can contract for those skills. But no government – not even the US Federal government – has the resources to analyze all the data they have.

What can you do about that? Maybe there’s an answer in a story about real gold mining from the authors of the book “Wikinomics”[1]:

A few years back, Toronto-based gold mining company Goldcorp was in trouble. Besieged by strikes, lingering debts, and an exceedingly high cost of production, the company had terminated mining operations…. [M]ost analysts assumed that the company’s fifty-year-old mine in Red Lake, Ontario, was dying. Without evidence of substantial new gold deposits, Goldcorp was likely to fold. Chief Executive Officer Rob McEwen needed a miracle.

Frustrated that his in-house geologists couldn’t reliably estimate the value and location of the gold on his property … [he] published his geological data on the Web for all to see and challenged the world to do the prospecting. The “Goldcorp Challenge” made a total of $575,000 in prize money available to participants who submitted the best methods and estimates. Every scrap of information (some 400 megabytes worth) about the 55,000 acre property was revealed on Goldcorp’s Web site.

News of the contest spread quickly around the Internet and more than 1,000 virtual prospectors from 50 countries got busy crunching the data. Within weeks, submissions from around the world were flooding into Goldcorp headquarters. There were entries from graduate students, management consultants, mathematicians, military officers, and a virtual army of geologists. “We had applied math, advanced physics, intelligent systems, computer graphics, and organic solutions to inorganic problems. There were capabilities I had never seen before in the industry,” says McEwen. “When I saw the computer graphics, I almost fell out of my chair.”

The contestants identified 110 targets on the Red Lake property, more than 80% of which yielded substantial quantities of gold. In fact, since the challenge was initiated, an astounding 8 million ounces of gold have been found – worth well over $3 billion. Not a bad return on a half million dollar investment.

You probably won’t be able to offer a prize to analysts, although you might offer to share some of the savings that result from doing things better. But, unlike a private corporation, maybe you don’t have to offer a prize at all, since the public has an interest in seeing its government work better. And there are many examples on the Internet where people are willing to help out without any obvious monetary reward.

Certainly not everyone, but enough people might be interested in the data to take a shot at making sense of it – students or even college professors looking for research projects, retired statisticians, the kinds of folks who live to analyze baseball statistics, and anyone who might find this a challenge.

The Obama administration and its new IT leaders have made a big deal about putting its data on the Web. There are dozens of data sets on the Federal site data.gov[2], with care obviously taken to deal with issues of individual privacy and national security. Although their primary interest is in transparency of government, now that the data is there, we’ll start to see what people out there learn from all that information. Alabama[3] and the District of Columbia, among others, have started to do the same thing.

You can benefit a lot more if you, too, make your government’s data available on the web for analysis. Then your data, perhaps combined with the Federal data and other sources on the web, can give you an even better picture of how to improve your government than your own data alone would.
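As a small illustrative sketch (the counties, column names, and figures below are entirely hypothetical, not real published data), the payoff of combining a local data set with another source can be as simple as joining the two on a shared key, such as a county FIPS code, and computing a measure neither data set contains on its own:

```python
import csv
import io

# Hypothetical local data: what your government spends on a program, by county.
local_csv = """fips,county,program_cost
36119,Westchester,1200000
36087,Rockland,800000
"""

# Hypothetical second source: how many residents the program reached, by county.
federal_csv = """fips,participants
36119,3400
36087,1600
"""

def index_by_fips(text):
    """Read CSV text and index its rows by FIPS code."""
    return {row["fips"]: row for row in csv.DictReader(io.StringIO(text))}

local = index_by_fips(local_csv)
federal = index_by_fips(federal_csv)

# Join the two sources on the shared key and derive cost per participant --
# a figure that exists in neither data set by itself.
combined = []
for fips, row in local.items():
    fed = federal.get(fips)
    if fed:
        combined.append({
            "county": row["county"],
            "cost_per_participant":
                float(row["program_cost"]) / float(fed["participants"]),
        })

for entry in combined:
    print(f"{entry['county']}: ${entry['cost_per_participant']:.2f} per participant")
```

Real analyses would, of course, involve messier data and more careful matching of definitions across sources, but the basic pattern – publish the raw data, let others join it against what they have – is the same.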

  1. “Innovation in the Age of Mass Collaboration”, Business Week, Feb. 1, 2007 http://www.businessweek.com/innovate/content/feb2007/id20070201_774736.htm
  2. “Data.gov open for business”, Government Computer News, May 21, 2009, http://gcn.com/articles/2009/05/21/federal-data-website-goes-live.aspx
  3. “Alabama at your fingertips”, Government Computer News, April 20, 2009, http://gcn.com/articles/2009/04/20/arms-provides-data-maps-to-agencies.aspx


SmarterCape Summit Presentation

Originally published 5/20/2011

On May 10, I was the plenary keynote speaker at the SmarterCape Summit – the kickoff meeting for the $50 million-plus Open Cape project to bring broadband and its applications to Cape Cod and southeastern Massachusetts.  The other major speakers of the day were Massachusetts Governor Deval Patrick and Lou Zacharilla, co-founder of the global Intelligent Community Forum.

The presentation was an extension of my work with the US Conference of Mayors on future-oriented (network-based) economic growth.

Here’s a PDF of the presentation: SmarterCape Summit
