The Second Wave Of Capital

I have been doing research on the future impact of artificial intelligence on the economy and the rest of our lives. With that in mind, I have been reading a variety of books by economists, technologists, and others. That is why I recently read “Capital and Ideology” by Thomas Piketty, the well-known French economist and author of the best-selling (if not well-read) “Capital in the Twenty-First Century”. It contains a multi-national history of inequality: why it happened and why it has continued, mostly uninterrupted.

At more than 1,100 pages, it is a tour de force of economics, history, politics, and sociology. For every proposition, he provides detailed data analyses, which is why the book is so long. While there is a lot of additional detail in the book, many of the themes are not new, in part because of Piketty’s previous work. As with his last book, much of the commentary on the new one is about income and wealth inequality. That is obviously an important problem, although not one that I will discuss directly here.

Instead, although much of the focus of the book is on capital in the traditional sense of money and ownership of things, it was his two main observations about education – what economists call human capital – that stood out for me. The impact of this second wave and second kind of capital is two-fold.

  1. Education And The US Economy

From the mid-nineteenth century until about a hundred years later, the American population had twice the educational level of people in Europe. And this was exactly the same period that the American economy surpassed the economies of the leading European countries. During the last several decades, the American population has fallen behind in education and this is the same time that their incomes have stagnated.  It is obviously difficult to tease out the effect of one factor like education, but clearly there is a big hint in these trends.

As Piketty writes in Chapter 11:

The key point here is that America’s educational lead would continue through much of the twentieth century. In 1900–1910, when Europeans were just reaching the point of universal primary schooling, the United States was already well on the way to generalized secondary education. In fact, rates of secondary schooling, defined as the percentage of children ages 12–17 (boys and girls) attending secondary schools, reached 30 percent in 1920, 40–50 percent in the 1930s, and nearly 80 percent in the late 1950s and early 1960s. In other words, by the end of World War II, the United States had come close to universal secondary education.

At the same time, the secondary schooling rate was just 20–30 percent in the United Kingdom and France and 40 percent in Germany. In all three countries, it is not until the 1980s that one finds secondary schooling rates of 80 percent, which the United States had achieved in the early 1960s. In Japan, by contrast, the catch-up was more rapid: the secondary schooling rate attained 60 percent in the 1950s and climbed above 80 percent in the late 1960s and early 1970s.

In the second Industrial Revolution it became essential for growing numbers of workers to be able to read and write and participate in production processes that required basic scientific knowledge, the ability to understand technical manuals, and so on.

That is how, in the period 1880–1960—first the United States and then Germany and Japan, newcomers to the international scene—gradually took the lead over the United Kingdom and France in the new industrial sectors. In the late nineteenth and early twentieth centuries, the United Kingdom and France were too confident of their lead and their superior power to take the full measure of the new educational challenge.

How did the United States, which pioneered universal access to primary and secondary education and which, until the turn of the twentieth century, was significantly more egalitarian than Europe in terms of income and wealth distribution, become the most inegalitarian country in the developed world after 1980—to the point where the very foundations of its previous success are now in danger? We will discover that the country’s educational trajectory—most notably the fact that its entry into the era of higher education was accompanied by a particularly extreme form of educational stratification—played a central role in this change.

In any case, as recently as the 1950s inequality in the United States was close to or below what one found in a country like France, while its productivity (and therefore standard of living) was twice as high. By contrast, in the 2010s, the United States has become much more inegalitarian while its lead in productivity has totally disappeared.

  2. The Political Competition Between Two Elites

By now, most Americans who follow politics understand that the Democratic Party has become the favorite of the educated elite, in addition to winning the votes of minority groups. This reverses the pattern of most of the last century, when educated voters were reliably Republican. In the process, the Democratic Party has lost much of its working-class base.

The Republicans have been the party of the economic elite, although since the 1970s some of the working class have joined them, especially those reacting to increased immigration and the civil rights movement.

What Piketty points out is that, in this transition, working-class and lower-income people have decreased their political participation, especially voting. He thinks that is because these voters feel that the Democratic Party has been taken over by the educated elite and no longer speaks for them.

What many Americans may not have realized is that this same phenomenon has happened in other economically advanced democracies, such as the UK and France. Over the longer run, Piketty wonders whether such an electoral competition between parties both dominated by elites can be sustained – or whether the voiceless will seek violence or other undemocratic outlets for their political frustrations.

In Chapter 14, he notes that, at the same time the USA has lost the edge arising from a better-educated population, it and the other advanced economies that have now matched or surpassed the American educational level have elevated the educated to positions of political power.

We come now to what is surely the most striking evolution in the long run; namely, the transformation of the party of workers into the party of the educated.

Before turning to explanations, it is important to emphasize that the reversal of the educational cleavage is a very general phenomenon. What is more, it is a complete reversal, visible at all levels of the educational hierarchy. We find exactly the same profile—the higher the level of education, the less likely the left-wing vote—in all elections in this period, in survey after survey, without exception, and regardless of the ambient political climate. Specifically, the 1956 profile is repeated in 1958, 1962, 1965, and 1967.

Not until the 1970s and 1980s does the shape of the profile begin to flatten and then gradually reverse. The new norm emerges with greater and greater clarity as we move into the 2000s and 2010s. With the end of Soviet communism and bipolar confrontations over private property, the expansion of educational opportunity, and the rise of the “Brahmin left,” the political-ideological landscape was totally transformed.

Within a few years the platforms of left-wing parties that had advocated nationalization (especially in the United Kingdom and France), much to the dismay of the self-employed, had disappeared without being replaced by any clear alternative.

A dual-elite system emerged, with on one side, a “Brahmin left,” which attracted the votes of the highly educated, and on the other side, a “merchant right,” which continued to win more support from both highly paid and wealthier voters.

This clearly provides some context for what we have been seeing in recent elections.  And although he is not the first to highlight this trend, the evidence that he marshals is impressive.

Considering how much there is in the book, it is not likely anyone, including me, would agree with all of the analysis. In addition to the analysis, Piketty goes on to propose various changes in taxation and laws, which I will discuss in the context of other writers in a later blog. For now, I would only add that other economists have arrived at some of the same suggestions as Piketty, although by a very different journey.

For example, Daniel Susskind in “A World Without Work” is concerned that a large number of people will not be able to make a living through paid work because of artificial intelligence. The few who do get paid, and those who own the robots and AI systems, will become even richer while almost everyone else becomes poorer. This blends with Piketty’s views, and they end up in the same place – a basic citizen’s income and even a basic capital allotment to each citizen, taxation of wealth, estate taxes, and the like.

We will have much to explore about these and other policy issues arising from the byproducts of our technology revolution in this century.

© 2020 Norman Jacknis, All Rights Reserved

Are Computers Learning On Their Own?

To many people, the current applications of artificial intelligence, like your car being able to detect where a lane ends, seem magical. Although most of the significant advances in AI have been in supervised learning, it is the idea that the computer is making sense of the world on its own — unsupervised — which intrigues people more.

If you’ve read a bit about artificial intelligence, you may often see a distinction between supervised and unsupervised learning by machine. (There are other categories too, but these are the big two.)

In supervised learning, the machine is taught by humans what is right or wrong — for example, who did or did not default on a loan — and it eventually figures out what characteristics would best predict a default.

Another example is asking the computer to identify whether a picture shows a dog or a cat. In supervised learning, a person labels each picture and then the computer figures out the best way to distinguish between the two — perhaps whether the animal has floppy ears 😉
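As a minimal sketch of what that supervision looks like in practice (the function name, the "ear-floppiness" feature, and the toy numbers here are my own illustration, not from any particular library):

```python
def train_stump(examples):
    """Learn the single feature cutoff that best separates the labels.
    `examples` is a list of (feature_value, label) pairs -- the labels
    supplied by a person are the 'supervision'."""
    best = None
    values = sorted(v for v, _ in examples)
    # try a cut halfway between each pair of adjacent feature values
    for cut in [(a + b) / 2 for a, b in zip(values, values[1:])]:
        correct = sum((v > cut) == (lab == "dog") for v, lab in examples)
        accuracy = correct / len(examples)
        if best is None or accuracy > best[1]:
            best = (cut, accuracy)
    return best  # (threshold, training accuracy)

# ear-floppiness scores, each labelled by a human
labelled = [(0.1, "cat"), (0.2, "cat"), (0.3, "cat"),
            (0.7, "dog"), (0.8, "dog"), (0.9, "dog")]
threshold, accuracy = train_stump(labelled)
print(threshold, accuracy)  # a cut near 0.5 separates the labels perfectly
```

Real systems use far richer models than a single threshold, but the shape of the task is the same: the human provides the answers, and the machine searches for the rule that best reproduces them.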

Even when the machine gets quite good at doing this correctly, the underlying model, the set of characteristics it uses to predict these results, is often opaque. Indeed, one of the hot issues in analytics and machine learning these days is how humans can uncover and almost “reverse engineer” the model the machine is using.

In unsupervised learning, the computer has to figure out for itself how to divide a group of pictures or events or whatever into various categories. Then the next step is for the human to figure out what those categories mean. Since it is subject to interpretation, there is no truly accurate and useful way to describe the categories, although people try. That’s how we get psychographic categories in marketing or equivalent labels, like “soccer moms”.

Sometimes the results are easy for humans to figure out, but not exactly earth shattering, like in this cartoon.

https://twitter.com/athena_schools/status/1063013435779223553

In the case of the computer that is given a set of pictures of cats and dogs to determine what might be the distinguishing characteristics, we (people) would hope that the computer would figure out that there are dogs and cats. But it might instead classify them based on size — small animals and big animals — or based on the colors of the animals.

This all sounds like it is unsupervised. Anything useful that the computer determines is thus part of the magic.

How Unsupervised Is Unsupervised Machine Learning?

Except, in some of the techniques of unsupervised learning, especially in cluster analysis, a person is asked to determine how many clusters or groups there might be. This too limits and supervises the learning by the machine. (Think about how much easier it is to be right in Who Wants To Be A Millionaire if the contestant can narrow down the choices to two.)
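A toy version of k-means clustering, one of the most common cluster-analysis techniques, makes the point concrete. Note that the caller hands the algorithm `k`, the number of groups, up front (this sketch and its toy "animal sizes" are my own illustration):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy one-dimensional k-means. The caller must choose k -- the
    algorithm never decides how many groups the data 'really' has."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# one-dimensional "animal sizes": two obvious groups, around 5 and 50
sizes = [4, 5, 6, 5, 48, 50, 52, 49]
print(kmeans(sizes, k=2))  # two centers, one near 5 and one near 50
print(kmeans(sizes, k=4))  # ask for four groups and you get four, meaningful or not
```

Ask for two clusters and the machine finds the two natural groups; ask for four and it dutifully carves out four, whether or not they mean anything. The human's choice of `k` is already a form of supervision.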

Even more important, the computer can only learn from the data that it is given. It would have problems if pictures of a bunch of elephants or firetrucks were later thrown into the mix. Thus, the human being is at least partially supervising the learning and certainly limiting it.  The machine’s model is subject to the limitations and biases of the data that it learned on.
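The elephant problem can be shown in a couple of lines. A trained classifier can only choose among the categories it was trained on; it has no way to say "none of the above." (The nearest-centroid setup and the toy weights below are my own illustration.)

```python
def nearest_label(x, centroids):
    """Assign x to whichever trained class center is closest --
    the model cannot answer 'none of the above'."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# class centers learned from the only data the model ever saw
centroids = {"cat": 4.0, "dog": 30.0}  # typical weights in kg (toy numbers)

print(nearest_label(25, centroids))    # "dog" -- reasonable
print(nearest_label(4000, centroids))  # an elephant's weight: still "dog"
```

Hand this model an elephant and it confidently calls it a dog, because a dog is the nearest thing it has ever seen.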

Truly unsupervised learning would occur the way that it does for children. They are let out to observe the world and learn patterns, often without any direct assistance from anyone else. Even in an age of over-scheduling by helicopter parents, children can still often roam freely and discover new data and experiences.

Similarly, to have true unsupervised learning of machines, they would have to be able to travel and process the data they see.

At the beginning of his book Life 3.0, Max Tegmark weaves a sci-fi tale about a team that built an AI called Prometheus. While it wasn’t directly focused on unsupervised classification, Prometheus was unsupervised and learned on its own. It eventually learned enough to dominate all mankind. But even in this fantasy world, its unsupervised escape only enabled the AI machine to roam the internet, which is not quite the same thing as real life after all.

It is likely, for a while longer, that a significant portion of human behavior will occur outside of the internet 🙂

(And, as we saw with Microsoft’s chatbot Tay, an AI can also learn some unfortunate and incorrect things on the open internet.)

While not quite letting robots roam free in the real world, researchers at Stanford University’s Vision and Learning Lab “have developed iGibson, a realistic, large-scale, and interactive virtual environment within which a robot model can explore, navigate, and perform tasks.” (More about this at A Simulated Playground for Robots)

https://time.com/3983475/hitchbot-assault-video-footage/

There was HitchBOT, which a few years ago traveled around the US, although I don’t think it added to its knowledge along the way, and it eventually met up with some nasty humans. (For more see here and here.)

 

Perhaps self-driving cars or walking robots will eventually be able to see the world freely as we do. Ford Motor Company’s proposed delivery robot roams around, but it is not really equipped for learning. The traveling, learning machine will likely require a lot more computing power and time than we currently use in machine learning.

Of course, there is also work on the computing part of the problem, as this July 21st headline shows: “Machines Can Learn Unsupervised ‘At Speed Of Light’ After AI Breakthrough, Scientists Say.” But that’s only the computing part of the problem and not the roaming-around-the-world part.

These more recent projects are evidence that AI researchers realize their models are not being built in a truly unsupervised way. Until such projects mature, data scientists need to be careful about how they train and supervise a machine, even in unsupervised learning mode.

© 2020 Norman Jacknis, All Rights Reserved