Updates Of Earlier Reports

Some of my blog posts seem to run ahead of news reported elsewhere, which is ok with me, but it also means that it might be helpful to list some interesting articles that pick up where those earlier stories left off.  Here are some recent examples:

  • My two-part series in March on the Coding Craze questioned the long-term value of the plans of many public officials to teach computer coding. While the general news media continue to talk and write about coding as an elixir for your career, WIRED Magazine recently ran a cover story titled “The End of Code”.  See their web piece at http://www.wired.com/2016/05/the-end-of-code/
  • I’ve written several posts on one of my special interests – the related subjects of mixed reality, virtual reality, and blended physical and digital spaces. I noted sports as a natural fit for this, including highlighting the Trilite project last year.  So it was great to read the announcement in the last few days that NBC and Samsung are collaborating to offer some of the Rio Olympics on Samsung VR gear.
  • We’re all inundated with talk about how “things are changing faster than ever before” in our 21st-century world. Taking an unconventional view, in 2011 I asked “Telegraph vs. Internet: Which Had Greater Impact?”  My argument was that the first half of the 19th century saw much more dramatic changes, especially in speeding up communications.  In what I think is the first major attempt since then to question the fastest-ever-changes meme, the New York Times recently elaborated on this theme in an Upshot article titled “What Was the Greatest Era for Innovation? A Brief Guided Tour”.
  • In “Art and the Imitation Game”, from March 2015, I wrote about how artificial intelligence is stepping into creative activities like writing and painting. While there have been many articles on this subject since, one of the most intriguing came from the newspaper in the city with more attorneys per capita than anywhere else, as the Washington Post invited us to “Meet ‘Ross,’ the newly hired legal robot”.
  • I wrote about the White House Rural Telehealth meeting in April this year. The New York Times later had a report on the rollout of telehealth to the tens of millions of customers of Anthem, under the American Well label.
  • Going back several years, in both that post and one on “The Decentralization Of Health Care” about a year and a half ago, I touched on the difficulties posed by the fee-for-service health care system in the US and wondered whether we would instead be better off paying health systems a yearly fee to keep us healthy – thus aligning our personal interests with those of the system. So it has been interesting to see movement on this in April by the Centers for Medicare and Medicaid Services (CMS), the Federal government’s health insurance agency.  Here are just some examples:
  1. The End of Fee For Service?  
  2. CMS launches largest-ever multi-payer initiative to improve primary care in America
  3. Obamacare [SIC] to launch new payment scheme

That’s it for now.  I’ll try to update other posts when there’s news.

© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/146994917695/updates-of-earlier-reports]

Beyond The Craze About Coding

In last week’s post on the Coding Craze, I referred to the continuing reduction in the need for low-level coding – even as what counts as low level continues to rise and become more abstract, more separated from the machines that the software controls.  I even noted the work in artificial intelligence to create programs that can program.

All of this reflects the fact that pure coding itself is only a small part of what makes software successful – something that many coding courses don’t discuss.

Many years ago in the programming world, there was a relatively popular methodology named after its two creators – Yourdon and DeMarco.  While it has mostly been remembered for its use of data flow diagrams, it taught something else that too many coders don’t realize.

There is a difference between what is logically or conceptually going on in a business and the particular way it is implemented.  Yourdon asks software designers to first figure out what is essential, or as he put it:

“The essential system model is a model of what the system must do in order to satisfy the user’s requirements, with as little as possible (and ideally nothing) said about how the system will be implemented. … this means that our system model assumes that we have perfect technology available and that it can be readily obtained at zero cost.  [Note: this is a lot closer to reality today than it was when he wrote about zero cost.]

“Specifically, this means that when the systems analyst talks with the user about the requirements of the system, the analyst should avoid describing specific implementations of processes … he or she should not show the system functions being carried out by humans or an existing computer system. … these are arbitrary choices of how the system might be implemented; but this is a decision that should be delayed until the systems design activity has begun.”
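
To make the distinction concrete, here is a minimal sketch in Python.  The order-approval example is my own hypothetical illustration, not Yourdon’s: the essential model states only what must happen, while the choice of how – a human reviewer, an existing system, new code – is deferred until design time.

    from abc import ABC, abstractmethod

    # Essential model: WHAT the system must do, nothing about HOW.
    # (Hypothetical example names, invented for illustration.)
    class OrderApproval(ABC):
        @abstractmethod
        def approve(self, order_total: float) -> bool:
            """Decide whether an order may proceed."""

    # One possible implementation, chosen later during systems design.
    # A wrapper around a human reviewer could satisfy the same contract.
    class AutomaticApproval(OrderApproval):
        def __init__(self, limit: float):
            self.limit = limit

        def approve(self, order_total: float) -> bool:
            return order_total <= self.limit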

Thinking this way about how the world operates – and how you want it to operate – is the start of software design.  Software design really has two related meanings, like overloading in C++.

First, there is the design of the architecture of the software and the overall solution.  Diving into coding without doing this design is what leads to persistent and embarrassing bugs in software.  The internal design is also necessary to avoid spaghetti code that is hard to fix, and to improve performance, even in these days of a supposed abundance of compute resources.


Second, there is the design of the interface that the user sees –
with all the things to worry about that we associate with the “design
thinking” movement.

(One way of planning software is to imagine that the software designer is a playwright who is responsible for all the parts of the play aside from one: the user’s part.  I guess this is more like improv than a play, but you get the idea 🙂 )

So maybe, in addition to the coding class, the wanna-be software developer should go to improv or drama school.  That’s a more likely path to knowing how to generate the WOW! reaction from users that makes for software success.


© 2016 Norman Jacknis, All Rights Reserved

[http://njacknis.tumblr.com/post/141545808566/beyond-the-craze-about-coding]

Intelligent Conversations: The New User Interface?

Recently there have been some interesting articles about how the graphical user interface we’ve had on our screens for many years is gradually being replaced by a new user interface – the conversation.

Earlier this month, Matt Gilligan wrote on his Medium blog:

Forget “there’s an app for that” — what’s next is “there’s a chat for that.”

And just a few days ago, WIRED magazine had an article titled “The Future of UI Design? Old-School Text Messages”.

Some of this is a result of the fact that people now more often use the web on their smartphones and tablets than on laptops and desktop computers.  With their bigger screens, the older devices have more room for a nice graphical interface than smartphones do – even the newest smartphones, which always seem to be bigger than the previous generation.

And many people communicate much of the day through conversations that are composed of text messages.  There’s a good listing of some of the more innovative text apps in “Futures of text”.

The idea of a conversational interface is also a reflection of the use of various personal assistants that you talk to, like Siri.  These, of course, have depended on developments in artificial intelligence, in particular the recognition and processing of natural (human) spoken language.  Much research is being conducted to make these better and less the target of satire – like this one from the Big Bang Theory TV series.

There’s another branch of artificial intelligence research that should be resurrected from its relative oblivion to help out – expert systems.  An expert system attempts to automate the kind of conversation – especially a dynamic, intelligent sequence of questions and answers – that would occur between a human expert and another person.  (You can learn more at Wikipedia and GovLab.)


In the late 1980s and early 1990s, expert systems were the most hyped part of the artificial intelligence community.  

As I’ve blogged before, I was one of those involved with expert systems during that period.  Then that interest in expert systems rapidly diminished with the rise of the web and in the face of various technological obstacles, like the hard work of acquiring expert knowledge.   More recently, with “big data” being collected all around us, the big focus in the artificial intelligence community has been on machine learning – having AI systems figure out what that data means.

But expert systems work didn’t disappear altogether.  Applications have been developed for medicine, finance, education and mechanical repairs, among other subjects.

It’s now worth raising the profile of this technology much higher if the conversation becomes the dominant user interface.  The reason is simple: these conversations haven’t been very smart.  Most of the apps are good at getting basic information as if you typed it into a web browser.  Beyond that?  Not so much.

There are even some very funny videos of the way these work – or rather, don’t work well.  Take a look at “If Siri was your mom”, prepared for Mother’s Day this year, with the woman who was the original voice of Siri playing Mom.

In its simplest form, an expert system may be represented as a smart decision tree based on the knowledge and research of experts.

[Image: an expert system represented as a simple decision tree]

It’s pretty easy to see how this approach could be used to make sure that the conversation – by text or voice – is useful for a person.
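
As a deliberately tiny sketch of that idea, here is a decision-tree conversation in Python.  The tech-support domain, questions, and advice are hypothetical, invented purely for illustration; in a real expert system the tree would be built from the knowledge of human experts.

    # Each node asks one bite-sized question and branches on the answer.
    TREE = {
        "start": ("Does the device power on? (yes/no)",
                  {"yes": "screen", "no": "plug"}),
        "plug": ("Is it plugged in and charged? (yes/no)",
                 {"yes": "service", "no": "charge"}),
        "screen": ("Is anything shown on the screen? (yes/no)",
                   {"yes": "service", "no": "reboot"}),
    }
    ADVICE = {
        "service": "Please contact a service center.",
        "charge": "Charge the device for 30 minutes, then try again.",
        "reboot": "Hold the power button for 10 seconds to force a restart.",
    }

    def converse():
        node = "start"
        while node in TREE:
            question, branches = TREE[node]
            answer = input(question + " ").strip().lower()
            node = branches.get(answer, node)  # re-ask on unrecognized input
        print(ADVICE[node])

    if __name__ == "__main__":
        converse()

The knowledge lives in the tree; the “conversation” is just a guided walk through it, which is what makes the exchange feel purposeful rather than like a form.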

There is, of course, much more sophistication available in expert systems than is represented in this picture.  For example, some can handle probabilities and other forms of ambiguity.  Others can be quite elaborate and can include external data, in addition to the answers from a person – for example, his/her temperature or speed of typing or talking.

The original developers of Siri have taken what they’ve learned from that work and are building their next product.  Called “Viv: The Global Brain”, it’s still pretty much in stealth mode, so it’s hard to figure out how much expert system intelligence is built into it.  But a story about them in WIRED last year showed an infographic that implies an expert system has a role in the package.  See the lower left on the second slide.

[Images: two slides of the WIRED infographic about Viv]

Personally I like the shift to a conversational interface with technology since it becomes available in so many different places and ways.  But I’ll really look forward to it when those conversations become smarter.  I’ll let you know as I see new developments.

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/122879360432/intelligent-conversations-the-new-user-interface]

Art And The Imitation Game

The recent movie, The Imitation Game, brought the Turing Test to the attention of a general audience.  First proposed by the British mathematician Alan Turing, the test basically proposes that a computer will have achieved artificial intelligence when a person interacting with that computer cannot distinguish between it and another human being.

Last year, it was reported that a machine successfully passed the Turing Test – sort of.  (See this article in the Washington Post, for example.)  While that particular test didn’t set a very high standard, there is no doubt that machines are getting better at doing things that only humans used to do.

This past Sunday, there was an article in the New York Times Weekly Review titled “If an Algorithm Wrote This, How Would You Even Know?”  Its (presumably human) author warned us that “a shocking amount of what we’re reading is created not by humans, but by computer algorithms.”

For example, the Associated Press uses Wordsmith from Automated Insights.  With its Quill product, Narrative Science originally started out with sports reporting, but is now moving into other fields.

The newspaper even offered its own version of the Turing Test – a test of your ability to determine whether a paragraph was written by a person or by a machine.  Try it.  (Disclosure: I didn’t get it right 100% of the time, either.)

But, of course, it doesn’t stop with writing.  More interesting is the use of computers to be creative.

This is because one of the differences between humans and other animals, some people claim, is our ability to produce creative works of art.  Going back perhaps to the cave paintings, this seems to be an unusually human trait.  (Of course, you can always find someone else who will dispute this, as in the article “12 artsy animals that paint”.)

While we may find animals painting to be amusing, perhaps we’d find machines becoming creative to be more threatening.

Professor Simon Colton is one of the leaders in this field – which, by the way, goes back at least two decades.  He has written:

“Our position is that, if we perceive that the software has been skillful, appreciative and imaginative, then, regardless of the behaviour of the consumer or programmer, the software should be considered creative.”

He and his team have worked with software called The Painting Fool.  This post has some examples of its artwork, so you can judge for yourself whether you could tell a computer generated it.

I have my own little twist on this story from more than ten years ago.  I met some artist/businessmen who designed high-end rock concert t-shirts and then had them painted.  These were intended to be sold at the concert in relatively small quantities, as an additional form of revenue.

The artist would prepare the design and then have other people paint the shirts.  But this was a slow, tedious process, so we discussed the use of robots to take it over.  (At the time, the role of robotic painting machines in auto factories was becoming well known.)

One of the businessmen posed an obstacle by noting that people bought the hand-painted product because each was slightly different, given the variation between artists who painted them and even the subtle changes of one artist on any day.  I somewhat shocked him by pointing out that, yes, even that kind of randomness could be computer generated and his customers would not likely be able to tell the difference.  

But, perhaps, computers could tell the difference.  A computer algorithm correctly identified Jackson Pollock paintings, as reported in a recent article in the International Journal of Art and Technology.  (A less technical summary of this work can be found in a Science Spot article of a few weeks ago.)

In the end, they didn’t use robots because the robots were too expensive compared to artists in the Philippines, or wherever it was they hired them.  Now the robots are much cheaper, so maybe I should revive the idea.

Anyway, we’re likely to see even more impressive works of creativity by computer software and/or by artists working with computer software.  The fun is just beginning.

[Images: examples of artwork by The Painting Fool]

© 2015 Norman Jacknis

[http://njacknis.tumblr.com/post/113350817355/art-and-the-imitation-game]

Isn’t There A Better Way To Build Government Software?

The awful performance of healthcare.gov has been a staple of the news as well as satire.  This cartoon in the New Yorker this week sums up the frustration of users – http://www.newyorker.com/humor/issuecartoons/2013/11/04/cartoons_20131028#slide=4

I normally don’t like to comment on hot news stories, but this one offers just too much of a teachable moment, especially for public officials who are not technologists, yet who will suffer public criticism when things go bad.

It’s worth noting that this is not the only case of Federal IT system problems.  Before healthcare.gov, the great cost and the long delays of the FBI case management system were in the news.  And, not to be outdone, New York City had a major scandal with its timekeeping system, both for the huge cost (close to $750 million) and the fact that there was a significant amount of that money diverted into the personal pockets of the project staff.

Indeed, costly and disastrous software projects are not just found in the public sector.  It only seems so because the public sector problems are more visible thanks to taxpayer funding, whereas the private sector can keep its mistakes better hidden.

In part, this review of bad projects reminds me of an old line in IT project management: “Of the three goals of any project – being within budget, on time, and of good quality – you’re usually only going to get two, but not all three.”  But, for the Affordable Care Act, is getting none of the three some kind of trifecta?

Part of the problem is that these projects cost way too much money.  That’s often because of rules intended to ensure everything is above board and serves the taxpayers’ interest, but which have the reverse effect.  And so ultimately, the purpose of the procurement rules is, as a well-intentioned government attorney once told me, to follow the rules – not necessarily to get maximum value for the taxpayers.  The big companies that dominate Federal technology projects have learned to master these rules and not necessarily do the best job.  Their reputations are also victims of this focus on process, rather than outcomes.

Another reason for the bloating of these projects is simple ego.  Some top executives have the belief that big important projects should have big budgets. 

Unfortunately, this attitude fails to distinguish the cost of writing the software from the cost of deploying it.  The cost of software development does not increase much whether the number of users is 100 or 100,000.

Obviously, the cost of deploying that software will increase roughly linearly with the number of people who use it, because deployment may require more servers, more complex database arrangements, etc.

But spending hundreds of millions just to develop the software is usually unjustified.
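
To put that in arithmetic terms, here is a toy cost model in Python – the numbers are made up purely for illustration: a roughly fixed development cost plus a per-user deployment cost.

    # Toy cost model with made-up numbers: development is roughly a fixed
    # cost; deployment grows with the number of users.
    DEV_COST = 5_000_000        # hypothetical one-time build cost, dollars
    COST_PER_USER = 2.50        # hypothetical servers/support per user

    for users in (100, 100_000, 10_000_000):
        total = DEV_COST + COST_PER_USER * users
        print(f"{users:>10,} users -> ${total:>13,.0f}")

Even at ten million users, it is deployment, not development, that should dominate the bill – which is why a nine-figure development budget deserves skepticism.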

So what can be done by executives who have to deliver new systems to the public, but feel so enveloped in a fog of technical jargon that they don’t question things until it’s too late?

Consider these alternative ways of handling software projects, none of which is really new, but which seem not to be well known to non-technologists in government and elsewhere:

  • Adopt the agile approach – this means having frequent deliverables and thus taking advantage of learning by users and developers.  It stands in contrast to the traditional practice of big requirements documents and all-at-once delivery of mammoth amounts of code.
  • Frequent testing, which is also a part of the agile approach.  It’s so important that some people build the test before the software.  After all, the proof of the pudding is in the eating, and it’s useful to know you’re off course earlier rather than later.
  • Parameterize.  This is something you don’t hear too much about, but I mention it here because some of the healthcare.gov vendors blamed their problems on changes in Federal decision making about whether users need to register first.  I would always tell my development staff this simple rule: if the debate about whether some requirement should be X or Y will take longer to resolve than it takes to program it both ways, then program it both ways by creating a parameter that will switch the system one way or the other (see the sketch after this list).  Don’t let these debates hold up the progress of software development.  (By the way, if the debate is that hot, there is good reason to expect the decision to change in the future, which will cost more money in future programming.  So parameterize and let the decision makers argue and change their minds as much as they want.)
  • Gradual scaling – don’t roll out a big new piece of software to the whole world at once.  (Do we really need to say this?)  If the scale of deployment is expected to be a possible problem, why not minimize the problem by taking it in several steps and, again, learning what needs to be improved.  Even experienced Broadway veterans try out the show on the road first.
  • Simplify deployments by using a scalable infrastructure.  There’s much discussion about “the cloud”, which is really just a good marketing term for the vast scale of computing resources available over the Internet.  Use it instead of trying to reinvent this vast scale, which is impossible for any organization, no matter how big.  Many Internet businesses you’ve dealt with use these resources to handle peak demand or initial rollouts.
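
Here is a minimal sketch of the parameterize rule in Python, using a hypothetical registration debate like the one the vendors described; the site, function names, and policies are invented for illustration.  Both behaviors are programmed; one parameter switches between them.

    # Build both paths; flip one parameter when the policy debate is settled.
    REQUIRE_REGISTRATION_FIRST = False

    def register(user: str) -> None:
        print(f"{user} registered")

    def browse_plans(user: str) -> list[str]:
        print(f"{user} browsing plans")
        return ["bronze", "silver", "gold"]

    def start_session(user: str) -> list[str]:
        if REQUIRE_REGISTRATION_FIRST:
            register(user)              # policy A: register, then browse
            return browse_plans(user)
        plans = browse_plans(user)      # policy B: browse first...
        register(user)                  # ...register afterward
        return plans

    if __name__ == "__main__":
        start_session("pat")

Either way the decision goes, the change is a one-line configuration edit, not a rewrite.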

I could go on, but these guidelines are normally enough to keep your next big systems project out of the headlines.

© 2013 Norman Jacknis

[http://njacknis.tumblr.com/post/65612237318/isnt-there-a-better-way-to-build-government-software]

Games As The New Model For Business Software?

For most of the last couple of decades, business software has been pretty much the same – a series of forms that essentially automated the company procedure manuals that preceded the days of computers.  Yes, with Windows and the Mac, those forms became prettier, but they’re usually still some kind of form.  And, aside from drop-down lists of things like countries or states, there isn’t much intelligence behind those forms.

It’s time to change that approach and learn from enormously popular digital games.  Learn what exactly?

For too many people, games are all about vicarious shoot-em-up scenes or, at best, fantasy adventures.   Business software designers could have some fun applying those techniques – imagine a “killer sales” app :-). But there’s a deeper level of interaction with technology that game designers have discovered.  

Game software does three things well.  First, it motivates people to continue to play the game.  Second, it’s conversational, offering frequent bite-sized interactions with users.  Third, it adjusts to the user’s behavior – indeed, it learns from what the user does, while the user is also learning.

Compare that to typical business software.  The motivation to use it is mostly external – your paycheck or your desire to get something from an unfeeling bureaucracy.  The form is big and long, like a lecture or sermon, not a conversation.  And the course of the interaction varies little from person to person; there’s little learning on either side of the interaction.

Is it any wonder that game players feel so much more engaged than users of business software?

But the three aspects of good game software can easily be adapted to the business world, and anyone undertaking a major development of business software today should learn from those techniques – as in the sketch below.
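
As a small illustration, here is a Python sketch of a business form with those three game-like qualities; the expense-report questions and the adaptation rule are hypothetical, invented for this example.

    # Bite-sized questions (conversational), progress feedback (motivation),
    # and a prompt that adapts when the user struggles (learning from behavior).
    QUESTIONS = [
        ("date", "When was the expense? (YYYY-MM-DD)"),
        ("amount", "How much was it?"),
        ("purpose", "What was it for?"),
    ]

    def run_form():
        answers, blanks = {}, 0
        for i, (field, prompt) in enumerate(QUESTIONS, start=1):
            value = input(f"[{i}/{len(QUESTIONS)}] {prompt} ").strip()
            while not value:
                blanks += 1
                hint = " (or type 'skip' to come back later)" if blanks >= 2 else ""
                value = input(f"Let's try that again{hint}: ").strip()
            answers[field] = None if value.lower() == "skip" else value
            print(f"Nice - {len(QUESTIONS) - i} to go!")  # progress = motivation
        return answers

    if __name__ == "__main__":
        print(run_form())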

Three additional observations about this:

  • Motivation is not just about how many points you can rack up compared to others.  The best game designers provide support for the range of human motivations in order to help the many different kinds of players.  So, for some, the motivation is very much about winning the competition.  For others, social approval in the form of likes and other recognition is more important.  For others, getting it 100% right is the goal.  
  • More generally, it’s important to realize that there is a sophisticated use of this tool and also a simplistic use.  Indeed, not every game that’s sold is a good example of the value of gamification.
  • The funny thing is that many of the people I meet who have some control over the development of software in their organizations are avid game players.  Yet they ignore the lessons of their personal life when planning what will happen in the business.  Does that make sense?

© 2013 Norman Jacknis

[http://njacknis.tumblr.com/post/62900227376/games-as-the-new-model-for-business-software]

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”
– Quicksort developer C.A.R. Hoare

Simplicity in design applies to government policies and programs as much as it does to the creation of software. 

© 2012 Norman Jacknis

[http://njacknis.tumblr.com/post/18147813569/there-are-two-ways-of-constructing-a-software]
