
Humans, Algorithms and Assessing Risk

3 May 2018

Another day. Another IT failure, this time in the NHS, where a “computer algorithm failure” meant that for the past nine years women over 68 were not called for breast cancer screening. Some may have died as a result.

The eminent cancer specialist Karol Sikora made the point today that this should have been spotted sooner: “Alarm bells should have rung sooner based on a simple observation of the patients who were coming and going. The fact that they didn’t is, I think, indicative of a problem – a blind spot – that exists across the health service.”

And what might that blind spot be? Well, a belief in the infallibility of the technology being used. “They are no longer as tuned into what they are seeing or what their instinct and experience might be telling them.” It is a blind spot found in many sectors beyond the NHS.

And spare a thought for poor old “instinct and experience”, not to mention the evidence in front of your eyes. Too often they are dismissed as impossible to measure and, therefore, of no use.

But even that technology titan, Elon Musk, recently acknowledged that delays in production of Tesla’s latest model were caused by an over-reliance on automation (in this case, a naughty flufferbot), adding in a tweet on 13 April: “Humans are underrated.”

Indeed they are. Any effective assessment of risk should never rely on a single source. What you see in front of your eyes is as important as what you see on a screen. Technology is part of the answer, never the whole answer. Experience and judgment also matter.

