
Tuesday, October 06, 2020

World beating system crashes

I am not clear exactly how much the UK Government's 'world-beating' test and trace system for England is costing, or even whether the private company employed to run it is responsible for the latest cock-up, in which nearly 16,000 positive test results were misplaced and goodness knows how many people were put at risk by never being told of their encounter with the virus. But my complete lack of surprise at what happened certainly sums up the competence of this government.

The Guardian reports that the data error, which led to 15,841 positive tests being left off the official daily figures, means that 50,000 potentially infectious people may have been missed by contact tracers and not told to self-isolate.

The paper says that Public Health England was responsible for collating the test results from public and private labs, and publishing the daily updates on case count and tests performed:

But the rapid development of the testing programme has meant that much of the work is still done manually, with individual labs sending PHE spreadsheets containing their results. Although the system has improved from the early days of the pandemic, when some of the work was performed with phone calls, pens and paper, it is still far from automated.

In this case, the Guardian understands, one lab had sent its daily test report to PHE in the form of a CSV file – the simplest possible database format, just a list of values separated by commas. That report was then loaded into Microsoft Excel, and the new tests at the bottom were added to the main database.

But while CSV files can be any size, Microsoft Excel files can only be 1,048,576 rows long – or, in older versions which PHE may have still been using, a mere 65,536. When a CSV file longer than that is opened, the bottom rows get cut off and are no longer displayed. That means that, once the lab had performed more than a million tests, it was only a matter of time before its reports failed to be read by PHE.

Microsoft’s spreadsheet software is one of the world’s most popular business tools, but it is regularly implicated in errors which can be costly, or even dangerous, because of the ease with which it can be used in situations it was not designed for.

In 2013, an Excel error at JPMorgan masked the loss of almost $6bn (£4.6bn), after a cell mistakenly divided by the sum of two interest rates, rather than the average. The news led James Kwak, a professor of law at the University of Connecticut, to warn that Excel is “incredibly fragile”.

“There is no way to trace where your data comes from, there’s no audit trail (so you can overtype numbers and not know it), and there’s no easy way to test spreadsheets, for starters. The biggest problem is that anyone can create Excel spreadsheets – badly. Because it’s so easy to use, the creation of even important spreadsheets is not restricted to people who understand programming and do it in a methodical, well-documented way,” Kwak wrote.
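The truncation the Guardian describes is also trivially easy to guard against. A few lines of Python, offered here only as a sketch (the file name is hypothetical), can count the rows in a lab's CSV report and warn before a row-limited spreadsheet import silently drops data:

    import csv

    # Hard row limits of the two Excel worksheet formats mentioned above.
    XLS_MAX_ROWS = 65_536       # legacy .xls
    XLSX_MAX_ROWS = 1_048_576   # modern .xlsx

    def check_csv_fits(path, limit=XLS_MAX_ROWS):
        """Count the data rows in a CSV file and warn if importing it into
        a row-limited spreadsheet would silently cut off the excess."""
        with open(path, newline="") as f:
            rows = sum(1 for _ in csv.reader(f))
        if rows > limit:
            print(f"WARNING: {path} has {rows:,} rows; "
                  f"{rows - limit:,} would be lost at the {limit:,}-row limit.")
        else:
            print(f"OK: {path} has {rows:,} rows, within the {limit:,}-row limit.")
        return rows

    # 'daily_lab_report.csv' is an invented file name for illustration.
    check_csv_fits("daily_lab_report.csv")

A one-off sanity check like this, run before each import, would have flagged the problem the moment a report first exceeded the limit.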

In this case Public Health England was using Excel's old .xls format, whose worksheets could only accommodate around 1,400 test results (each result taking up several rows) - and a surge in positive tests caught them by surprise. This is yet another foul-up caused by an over-reliance on ICT that is not fit for purpose, by people who do not understand its limitations.
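The arithmetic behind that figure is worth spelling out, since it shows how quickly the ceiling is hit. The rows-per-result value below is a guess - the true figure was never published - chosen only to reproduce the reported capacity of roughly 1,400 results per file:

    # Rough capacity sum: a legacy .xls worksheet tops out at 65,536 rows.
    XLS_MAX_ROWS = 65_536

    # Assumption: each test result spans ~47 rows; this value is picked
    # purely to match the reported figure of about 1,400 results per file.
    ROWS_PER_RESULT = 47

    print(XLS_MAX_ROWS // ROWS_PER_RESULT, "test results per file")  # -> 1394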

And this will not be the last time something like this happens. Many public sector bodies are using out-of-date software that is not only unfit for purpose but also vulnerable to attack from outside agents. When will we learn the lesson and invest properly in sorting this out?
Comments:
Don't get locked into proprietary software - go for open software, LibreOffice for example - and large data should not be on a spreadsheet either, but in a database....
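[The commenter's suggestion is easy to sketch: Python's standard library will happily push a CSV of any length into a SQLite database. The file names, table and columns below are invented for illustration, and the CSV is assumed to have exactly two columns:

    import csv
    import sqlite3

    conn = sqlite3.connect("test_results.db")  # hypothetical database file
    conn.execute("CREATE TABLE IF NOT EXISTS results "
                 "(specimen_id TEXT, result TEXT)")

    with open("daily_lab_report.csv", newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        # Assumes two columns per row, matching the table above.
        conn.executemany("INSERT INTO results VALUES (?, ?)", reader)

    conn.commit()
    count, = conn.execute("SELECT COUNT(*) FROM results").fetchone()
    print(f"{count:,} results stored; a database has no 65,536-row ceiling.")
    conn.close()

Unlike a spreadsheet, the database neither truncates nor silently discards rows, however many tests come in.]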
 
Better still, employ IT experts (there are still some in government service) to design a bespoke system.

 



