26 September 2023

Technodystopia: Are we heading towards a real-world Blade Runner?

In 1982, Blade Runner floored audiences with its technodystopian depiction of the future. Alex Paterson, Gabby Bush and Jeannie Paterson* say that more than 40 years on, some of these projections seem eerily accurate.


Ridley Scott’s 1982 cult classic film, Blade Runner, takes us into a dystopian future that humankind has brought on itself through the rapid, unrestrained and ultimately chaotic development of new technologies.

First and foremost, this sci-fi noir film explores the dangers, uncertainties and moral and ethical ambiguities surrounding the creation of advanced Artificial Intelligence (AI).

The interactions between humans and the advanced androids, known as Replicants, portray a world in which the line between ‘real’ and ‘fake’ people is inextricably blurred.

In doing so, Blade Runner questions what it fundamentally means to be human, following four Replicants who have returned to Earth to meet their maker.

Blade Runner Rick Deckard (Harrison Ford) is then tasked with tracking down and eliminating the rogue AIs, who are asserting their right to live in a society that doesn’t recognise them as real people.

What is startling to remember is that the film was set in 2019.

So today, well past that date, can the dark predictions of Blade Runner provoke reflection and even a deeper understanding of our relationship with technology? And how successful are art and film at predicting our future?

Sci-fi predicting technological futures

The Replicants of Blade Runner, as the name suggests, are essentially AI systems given advanced bioengineered bodies designed to replicate the physical abilities and intellectual capacities of humans.

In the film, they’re deployed in dangerous scenarios so that actual human lives don’t have to be risked.

Despite many advances in AI, highly intelligent androids like these are far from existing in our world.

The technology of today, four years after the setting of Blade Runner, is still far from creating genuinely artificially intelligent beings.

Beings like this – sometimes called general AI – are beyond the scope of the AI systems and technologies available today.

AI, as we know it, consists of technologies like machine learning algorithms, natural language processing and computer vision.

These systems can work in surprisingly sophisticated ways, identifying patterns and correlations in data to predict outcomes.

But AI is very far from understanding humans or having its own thoughts and feelings.
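To make this concrete, here is a minimal sketch (in Python, using the scikit-learn library) of the kind of narrow, pattern-based prediction described above. The ‘will this customer buy again?’ task and all of the data are invented purely for illustration.

```python
# A minimal sketch of narrow, pattern-based AI. The task and data
# are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Toy historical examples: [hours online per week, past purchases]
X = [[1, 0], [2, 1], [8, 5], [9, 6], [3, 1], [7, 4]]
y = [0, 0, 1, 1, 0, 1]  # 1 = bought again, 0 = didn't

model = LogisticRegression().fit(X, y)

# The model has found a statistical correlation, nothing more: it
# predicts an outcome with no understanding of the person behind
# the numbers, and no thoughts or feelings of its own.
print(model.predict([[6, 3]]))        # predicted class, e.g. [1]
print(model.predict_proba([[6, 3]]))  # class probabilities
```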

The robots we interact with are more likely to be the cute but inert Paro aged-care seal or the somewhat creepy Boston Dynamics dancing robot dog.

While technologists might still mull over the possibility of dangerous ‘almost humans’ that are nearly impossible to distinguish from ‘real’ humans, experts in the field are more concerned about the hidden, black-box workings of manipulative and prejudiced algorithms that make decisions about our jobs, money and freedom.

Experts are also concerned about digital platforms sitting on moats of data that give them the ability to manipulate what we buy or how we vote.
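As a hedged illustration of that black-box concern – not any real lender’s system – here is a hypothetical scoring rule in Python. The weights and the postcode_score field are invented, but proxy variables like postcodes are a well-known route by which historical prejudice can hide inside automated decisions.

```python
# A hypothetical "black box" loan decision. Every weight and field
# here is invented for illustration, not taken from any real system.
def loan_decision(applicant: dict) -> str:
    score = (
        applicant["income"] * 0.4
        - applicant["defaults"] * 25
        # Proxy variable: postcodes can quietly encode past prejudice.
        + applicant["postcode_score"] * 10
    )
    return "approved" if score > 50 else "declined"

# The applicant sees only the verdict, never the reasoning - which
# is exactly what makes such decisions so hard to contest.
print(loan_decision({"income": 120, "defaults": 1, "postcode_score": 2}))
```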

Ethical implications of creating human-like robots

Although Replicants may only exist in the realm of fantasy, Blade Runner still prompts relevant questions about human-computer interaction and the ethics of AI.

In the world of Blade Runner, Replicants are simply tools to be used for the benefit of their owners.

So, killing a Replicant isn’t referred to as an execution, as it would be for “real” people – Replicants are “retired”.

And yet, the design of the Replicants intrinsically, yet also paradoxically, challenges their status as mere non-human tools and property.

Replicants are purposefully designed to be “virtually identical” to humans.

They look like humans, speak like humans and, without investigation by a Blade Runner, are indistinguishable from humans.

And this runs directly against contemporary principles of ethical design for AI and robotic systems.

Many contemporary scholars of AI and robot ethics see something inherently deceptive about this mimicry, which both insults the human interacting with the robot and may degrade the humanity of the interaction itself.

What does it mean if a human, having decided that a robot which strongly resembles a human is non-human, treats that robot cruelly or viciously? In the real world, it has been suggested that one of the new ‘laws of robotics’ should require a robot to always identify itself as a robot, and to remain ultimately responsible to the humans who deployed it.
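As a sketch of how such a rule might look in software – the class, its canned reply and the operator name are all invented for illustration – a disclosure could be attached to every message a bot sends, naming the humans responsible for deploying it:

```python
# A hypothetical sketch of the proposed rule: every reply carries a
# disclosure and names the humans responsible for the bot.
class DisclosingChatbot:
    DISCLOSURE = "Note: I am an automated system, not a human."

    def __init__(self, operator: str):
        self.operator = operator  # the humans ultimately responsible

    def reply(self, user_message: str) -> str:
        answer = self._generate(user_message)
        # Attaching the disclosure in one place means no conversational
        # path can silently drop it.
        return f"{self.DISCLOSURE}\n{answer}\n(Operated by {self.operator}.)"

    def _generate(self, user_message: str) -> str:
        # Stand-in for a real language model or dialogue system.
        return "Thanks for your message about: " + user_message

bot = DisclosingChatbot(operator="Example Bank's customer service team")
print(bot.reply("my account balance"))
```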

Robots and our relationships with them

These questions are interesting for understanding our relationship with technology and what it is to be human.

But the questions prompted by the Replicants and their relationships with humans also have real and current applications.

Should the chatbots we deal with at banks, telcos and airlines identify themselves as artificial? What about Alexa? Google’s AI system, Duplex, was met with controversy after a demonstration showed it could book a restaurant table by phone – many felt that the deception involved in this practice was inherently wrong.

In Blade Runner, Deckard’s relationship with Rachael also reflects this concern, raising questions about whether AI should mimic human affection and emotion in its language.

The ethical and moral standing of a robot is questioned in many films, and in literature and art.

And often sci-fi films like Blade Runner depict robots with genuine thoughts, feelings and emotions as well as the deeply human desire to fight for their own survival.

Although convincingly human-like robots are unlikely in the foreseeable future, we do need laws to deal with the consequences of the hidden black-box algorithms that are increasingly informing government and private sector decisions.

Many laws and regulations exist to protect humans – so should we have the same kinds of laws for robots?

*Alex Paterson, Intern, Centre for Artificial Intelligence and Digital Ethics (CAIDE), Melbourne Law School, University of Melbourne. Gabby Bush, Project Officer, CAIDE, Melbourne Law School, University of Melbourne. Jeannie Paterson, Co-director, CAIDE; Professor, Melbourne Law School, University of Melbourne.

This article first appeared at pursuit.unimelb.edu.au.
