Why do we tolerate human over machine error?
Blog reproduced with kind permission from RM Results, 20 November 2018.
Human error is cited as the primary contributing factor in major disasters and accidents across industries. In fact, research suggests that regardless of the activity or task, humans make three to six errors per hour and, on average, 50 errors per day (or at least, ‘per work shift’). In comparison, technical or machine malfunctions cause only a small percentage of such incidents.
Yet, even with this knowledge, we are more likely to forgive human error than machine error – why is this?
To err is human; to forgive is divine
Making errors is an integral part of the way we humans live. When we first learn to walk, we are constantly falling down. Many believe that you cannot learn without making an error or two along the way. Therefore, it is ingrained in our psyche that it is okay for humans to make a mistake.
Not so with machines. When our electronic counterparts fail us – whether it’s a self-service checkout or an artificial intelligence (AI) – we are not as quick to forgive.
Researchers at the University of Wisconsin recently conducted an experiment to see how easily we forgive AI compared to our human counterparts. At the outset of the experiment, the participants reported equal trust in both sources. However, that quickly changed after each made a mistake: when the AI erred, participants quickly ignored its advice or abandoned it altogether; when a human advisor made the same mistake, the researchers saw only a 5% drop in participants’ trust.
Research carried out by the British Science Association revealed a vast lack of trust in AI, with fears of being ‘taken over’ by technology. Conducted via an online survey, the research found that 60 per cent of participants think that the use of robots or programmes equipped with AI will lead to fewer jobs within ten years. The results also showed that 36 per cent of the public believe that the development of AI poses a threat to the long-term survival of humanity.
Fear of robots: a misconception?
No one fears computers as such. Indeed, many of us cannot imagine our lives without them. Yet, when we talk about AI, our attitudes change.
In 2014, Elon Musk, CEO of Tesla – which is pioneering driverless cars with the use of AI and machine learning – labelled AI “our biggest existential threat”. Physicist Stephen Hawking, who died in March 2018, also expressed his concerns about AI, telling the BBC that “the development of full AI could spell the end of the human race.”
It is also less than encouraging that some AI programs exist purposefully to incite fear in us. In 2016, a group of MIT computer scientists created an AI network called “Nightmare Machine”, which transforms photos into haunting imagery. Another group at MIT created the AI programme “Shelley”, which writes stories with the sole purpose of scaring us.
Both projects exist to better understand the barriers that lie between human and machine collaboration. The hope is that the knowledge gained from these experiments will help allay any fears we have about AI in the future, as more organisations adopt AI and machine learning as business tools in an attempt to tackle human error in the workplace.
Machines in the workplace
In industries such as law and finance, AI is already making judgements and recommending investments. In e-assessment, basic uses of AI, such as the automation of repetitive tasks, are already underway.
However, as we create more sophisticated algorithms in a bid to improve efficiency and reduce human error, this issue of trust matters: the potential efficiency gains could disappear if employees or customers lose confidence and stop using these systems.
To understand how we can combat this lack of trust, the researchers behind the University of Wisconsin study are now looking into precisely why we trust humans over machines with their research: ‘Mortal vs Machine: Developing a model to understand the differences between human-human and human-automation trust’. The hope is that the research will provide a guide on how to reassure sceptics of AI as we adopt it more widely.
At RM Results, we are already looking at what we can do to reduce fear of, and improve trust in, AI in assessment, with the creation of our five-level model for the adoption of machine marking in e-assessment.
Whilst it is clear that AI technology is advancing rapidly, what is not clear is when – or whether – society will learn to trust AI, and forgive its mistakes as it matures.