
Sunday, May 06, 2018

Computers don't make errors, humans do

As health secretary Jeremy Hunt stood up in the House of Commons to apologise for what he termed a “computer algorithm failure dating back to 2009”, as a result of which an estimated 450,000 women between the ages of 68 and 71 had not been invited to their final screening for breast cancer, my first thought was that computers don't make errors of this nature.

I say this not as a new-technology geek who believes in a fully automated future. I refuse to use internet banking, for example, because I don't trust it. In my local bank I prefer to deal with human beings rather than the cash and paying-in machines, and I am a late adopter, preferring to wait until software and hardware have been tested before acquiring them. You would never catch me wearing smart glasses or a smart watch.

But I do use technology. I just prefer to keep it in perspective. And perspective is what we need when seeking to attach blame or finding explanations for disastrous failures such as that surrounding breast screening.

The fact is that, by and large, computers do what they are programmed to do. If a mistake was made that put nearly half a million women at risk, then it was a human error: wrong inputs or poor programming. People make mistakes. We learn from them by being honest with ourselves and others about what they are. To stand up in Parliament and blame the computers is therefore not just disingenuous, it is also dangerous, because it makes it more likely that the failure will be repeated.
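To see how a programming mistake, rather than a "computer failure", could silently exclude women at the upper end of an age range, consider a purely hypothetical sketch. The function name, the age cut-offs, and the boundary logic below are all invented for illustration; nothing here describes the actual NHS software.

```python
# Hypothetical sketch of a screening-invitation filter with an
# off-by-one boundary bug. All names and cut-offs are invented for
# illustration and do not describe the real NHS system.

def should_invite(age: int) -> bool:
    """Decide whether a woman of the given age gets a screening invitation.

    Intended (illustrative) policy: invite women aged 50 up to and
    including 70. The bug: using '<' instead of '<=' silently drops
    the final invitation for women at the upper boundary.
    """
    return 50 <= age < 70   # should be: 50 <= age <= 70

# Women just below the boundary are invited as intended...
print(should_invite(69))  # True
# ...but those at the boundary never receive their final invitation.
print(should_invite(70))  # False
```

The computer executes this condition flawlessly every time; the error is entirely in the human choice of `<` over `<=`, which is exactly the kind of mistake that can sit unnoticed in a system for years.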

My sympathies are with Dominic Lawson in the Sunday Times. He agrees that the breast screening error was down to humans, not the machine. But he also draws a distinction between holding the Secretary of State personally responsible for coding errors somewhere in the entrails of the NHS behemoth, and, on the other hand, the chief executive of TSB being personally responsible for pushing through a rushed switch to an off-the-peg computer system of uncertain compatibility, which he claimed would bring in “the humanisation of digital” and “unheard-of” speed and flexibility, but which led instead to human misery and delay.

Computerisation is not an end in itself. It is a change-management process and should be treated as such. That means that managers need to look at how computers can improve and reform processes rather than seeking to put machines in for the sake of it.

Lawson gives an example of the flipside of this debate, the much greater speed and efficiency of the computer-based model:

A friend of mine who runs a data firm that has hospitals among its clients observed: “I used to work for the NHS, and we sent out letters individually, licking the stamps one by one. But now it’s hundreds of thousands of emails at the push of a button. And what took 10 days now takes half an hour.”

Lawson gives other examples of how new technology has speeded up and improved health treatments and diagnosis, giving doctors some breathing space to properly evaluate patients and prioritise their work.

Computerisation and new technology can be a force for good. We are not yet at the stage where we need to invoke Asimov's three laws, nor do I expect us to reach that point for some time. But let's not forget that, at the end of the day, these advances depend on humans to work, and we are fallible.

Just as computers don't make errors but humans do, computerisation and new technology is neither a force for good nor for evil, but a mechanism that human beings can use for either. Thus database technology and massive-scale data analysis can be used to help discover potential cures for disease. But it can also be used to identify the best messages to promote in order to unduly influence and undermine democratic decisions.

We all need to remain aware of this.