27 September 2023

Blame game: Who is responsible when an algorithm messes up?


Karen Hao* says that when automated systems make a mistake, it is often the nearest human who gets the blame.


Photo: Kevin Ku

Early last month, Bloomberg published an article about an unfolding lawsuit over investments lost by an algorithm.

A Hong Kong tycoon lost more than $20 million after entrusting part of his fortune to an automated platform.

Without a legal framework to sue the technology, he placed the blame on the nearest human: the man who sold it to him.

It’s the first known case over automated investment losses, but not the first involving the liability of algorithms.

In March 2018, a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona, sending another case to court.

A year later, Uber was exonerated of all criminal liability, but the safety driver could face charges of vehicular manslaughter instead.

Both cases tackle one of the central questions we face as automated systems trickle into every aspect of society: Who or what deserves the blame when an algorithm causes harm?

Who or what actually gets the blame is a different yet equally important question.

Madeleine Clare Elish, a researcher at Data & Society and a cultural anthropologist by training, has spent the last few years studying the latter question to see how it can help answer the former.

To do so, she has looked back at historical case studies.

While modern AI systems haven’t been around for long, the questions surrounding their liability are not new.

The self-driving Uber crash parallels the 2009 crash of Air France flight 447, for example, and a look at how we treated liability then offers clues for what we might do now.

In that tragic accident, the plane crashed into the Atlantic Ocean en route from Brazil to France, killing all 228 people on board.

The plane’s automated system was designed to be completely “foolproof,” capable of handling nearly all scenarios except for the rare edge cases when it needed a human pilot to take over.

In that sense, the pilots were much like today’s safety drivers for self-driving cars — meant to passively monitor the flight the vast majority of the time but leap into action during extreme scenarios.

What happened the night of the crash is, at this point, a well-known story.

About an hour and a half into the flight, the plane’s air speed sensors stopped working because of ice formation.

After the autopilot system transferred control back to the pilots, confusion and miscommunication led the plane to stall.

While one of the pilots attempted to reverse the stall by pointing the plane’s nose down, the other, likely in a panic, raised the nose to continue climbing.

The system was designed for one pilot to be in control at all times, however, and didn’t provide any signals or haptic feedback to indicate which one was actually in control and what the other was doing.

Ultimately, the plane climbed to an angle so steep that the system deemed it invalid and stopped providing feedback entirely.

The pilots, flying completely blind, continued to fumble until the plane plunged into the sea.

In a recent paper, Elish examined the aftermath of the tragedy and identified an important pattern in the way the public came to understand what happened.

While the official investigation of the incident concluded that a mix of poor systems design and insufficient pilot training had caused the catastrophic failure, the public quickly latched on to a narrative that placed the blame solely on the pilots.

Media portrayals, in particular, perpetuated the belief that the sophisticated autopilot system bore no fault in the matter, despite a significant body of human-factors research showing that humans are rather inept at leaping into emergency situations at the last minute with a level head and a clear mind.

Humans act like a ‘liability sponge’

In other case studies, Elish found that the same pattern held: even in highly automated systems over which humans have limited control, the humans still bear most of the blame for failures.

Elish calls this phenomenon a “moral crumple zone.”

“While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.”

Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents, no matter how small or unintentional their involvement.

This pattern offers important insight into the troubling way we speak about the liability of modern AI systems.

In the immediate aftermath of the Uber accident, headlines pointed fingers at Uber, but within days the narrative shifted to focus on the distraction of the safety driver.

“We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish.

Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interact with.

Yet in the current regulatory vacuum, they will continue to pay the steepest cost.

Regulators should also have more nuanced conversations about what kind of framework would help distribute liability fairly.

“They need to think carefully about regulating sociotechnical systems and not just algorithmic black boxes,” Elish says.

In other words, they should consider whether the system’s design works within the context it operates in, and whether it sets the human operators involved up for success or failure.

Self-driving cars, for example, should be regulated in a way that factors in whether the role safety drivers are being asked to play is reasonable.

“At stake in the concept of the moral crumple zone is not only how accountability may be distributed in any robotic or autonomous system,” she writes, “but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.”

* Karen Hao is the artificial intelligence reporter for MIT Technology Review. She tweets at @_KarenHao.

This article first appeared at www.technologyreview.com.
