Automation can leave us complacent, and that can have dangerous consequences

Is a hands-off approach the right way to go when it comes to automation? Shutterstock/riopatuca

The recent fatal accident involving a Tesla driving itself using the car’s Autopilot feature has raised questions about whether this technology is ready for consumer use.

But more importantly, it highlights the need to reconsider the relationship between human behaviour and technology. Self-driving cars change the way we drive, and we need to scrutinise the impact of this change on safety.

Tesla’s Autopilot does not make the car truly autonomous and self-driving. Rather, it automates driving functions, such as steering, speed, braking and hazard avoidance. This is an important distinction. The Autopilot provides supplemental assistance to, but is not a replacement for, the driver.

In a statement following the accident, Tesla reiterated that Autopilot is still in beta. The statement emphasised that drivers must maintain responsibility for the vehicle and be prepared to take over manual control at any time.

Tesla says Autopilot improves safety, helps to avoid hazards and reduces driver workload. But with a reduced workload, the question is whether the driver will use those freed-up cognitive resources to maintain supervisory control over Autopilot.

Automation bias

There is evidence to suggest that humans have trouble recognising when automation has failed and manual intervention is required. Research shows we are poor supervisors of trusted automation, with a tendency towards over-reliance.

This is known as automation bias: when people use automation such as autopilot, they may delegate full responsibility to it rather than remain vigilant. This reduces our workload, but it also reduces our ability to recognise when the automation has failed and we need to take back manual control.


Automation bias can occur any time automation is over-relied on and gets it wrong. One way this can happen is when the automation has not been set properly.

An incorrectly set GPS navigator will lead you astray, as it did for one driver who followed it across several European countries.

More tragically, Korean Air Lines flight 007 was shot down when it strayed into Soviet airspace in 1983, killing all 269 people on board. Unknown to the pilots, the aircraft had deviated from its intended course because of an incorrectly set autopilot.

Autocorrect is not always correct

Automation will do exactly what it is programmed to do. Relying on a spell checker to catch typing errors will not reveal wrong words that are spelt correctly, such as mistyping “from” as “form”.

Likewise, automation isn’t aware of our intentions and will sometimes act contrary to them. This frequently occurs with predictive text and autocorrect on mobile devices. Here, over-reliance can result in miscommunication, with some hilarious consequences, as documented on the website Damn You Autocorrect.

Sometimes automation will encounter circumstances that it can’t handle, as could have occurred in the Tesla crash.

GPS navigation has led drivers down a dead-end road when a highway was rerouted but the maps were not updated.

Over-reliance on automation can exacerbate problems by reducing situational awareness. This is especially dangerous as it limits our ability to take back manual control when things go wrong.

The captain of China Airlines flight 006 left autopilot engaged while attending to an engine failure. The loss of power from one engine caused the plane to start banking to one side.


Unknown to the pilots, the autopilot was compensating by steering as far as it could in the opposite direction. It was doing exactly what it had been programmed to do, keeping the plane as level as possible.

But this masked the extent of the problem. In an attempt to level the plane, the captain disengaged the autopilot. The result was a complete loss of control: the plane rolled sharply and entered a steep descent. Fortunately, the pilots were able to regain control, but only after falling 30,000 feet.

Humans vs automation

When automation gets it right, it can improve performance. But research findings show that when automation gets it wrong, performance is worse than if there had been no automation at all.

And tasks we find difficult are also often difficult for automation.

In medicine, computers can assist radiologists in detecting cancers in screening mammograms by placing prompts over suspicious features. These systems are very sensitive, identifying the majority of cancers.

But in cases where the system missed cancers, human readers using computer-aided detection missed more of them than readers working without automated assistance. The researchers noted that cancers that were difficult for humans to detect were also difficult for computers to detect.

Technology developers need to consider more than their automation technologies. They need to understand how automation changes human behaviour. While automation is generally highly reliable, it has the potential to fail.

Automation developers try to combat this risk by placing humans in a supervisory role with final authority. But automation bias research shows that relying on humans as a backup to automation is fraught with danger and a task for which they are poorly suited.

See also  Super Bowl car ads sell Americans the idea that new tech will protect them

Developers and regulators must not only assess the automation technology itself, but also the way in which humans interact with it, especially in situations when automation fails. And as users of automation, we must remain ever vigilant, ready to take back control at the first sign of trouble.


David Lyell received a doctoral scholarship from the HCF Research Foundation.