27 September 2023

Let bots be bots: Why we should stop trying to make robots more human


Paul Barba* says imbuing robots with a human name and personality can be counterproductive to their function and introduce all sorts of bias.


Photo: Ryan McGuire

In March, Bank of America launched its new in-app artificial intelligence (AI)-powered assistant.

Named Erica, the bot presumably takes its name from “America.”

Eric surely could have sufficed as well, but giving a customer service bot a female moniker and voice has become common practice.

Think Alexa, Cortana, and Siri.

Even loading up an on-screen webchat is likely to bring up a female-sounding name.

To find their male counterparts, move into more technical, knowledge-oriented spaces.

Think IBM Watson (in contrast to IBM HR assistant “Myca”) and “legal advisor” bot ROSS.

There’s Kensho the financial analyst and Ernest the bank aggregator.

When it comes to the division of labour, AI has a bias problem — and not just with gender.

There’s also remarkable homogeneity of tonality and voice across these bots.

AI, after all, takes on the norms, associations, and assumptions of the data it “feeds” on and, in the case of AI bots, those of the people doing the building.
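
As an illustration of that point (not from the original article), here is a minimal sketch, assuming the gensim library and its downloadable pre-trained GloVe word vectors, of how a model trained on ordinary web text can reproduce gendered occupation associations:

import gensim.downloader as api

# Illustrative sketch only; gensim and GloVe are assumptions, not tools named in the article.
# Load a small, publicly available set of pre-trained GloVe word vectors.
vectors = api.load("glove-wiki-gigaword-100")

# Classic analogy probe: "man" is to "doctor" as "woman" is to ...?
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))
# The top answers typically include stereotypically female-coded roles such as "nurse",
# showing how a model absorbs the associations present in its training data.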

One solution is to identify and rectify these biases by making bots fairer and more equitable in their representations of gender, ethnicity, and cultural background.

Some companies, such as Waze, give users a range of options for bot identities.

Another is to stop building “human” bots.

Giving bots human names and identities stems from a belief that doing so helps humanise them, encouraging uptake and interaction.

But bots and smart devices are prevalent enough now that barriers to use have largely been broken down.

There’s no longer a need to create “human” bots in the hope that people will use them.

Why so human?

Given where we are with AI, why not just let bots be bots?

This both manages expectations around bot performance and minimises the impact of bias.

(Humans will always bring their own biases to a situation, of course.)

The Kai bot from Kasisto, for example, is built around a “bot” identity.

Its communications and interactions are shaped by this, rather than an attempt to be human.

Capital One’s Eno bot takes a similar approach and is carefully gender neutral.

Making your bot distinctly bot-like also signals to users that they shouldn’t necessarily expect a human-level conversation, which is crucial when you consider that today’s AI struggles when conversations move outside its core domain expertise.

However, even the “bot” approach comes with its own issues.

Implicit biases can easily carry over, especially given that “bot-bots” are still underpinned by frameworks created by and for humans.

Such identities can also be difficult to maintain in voice-based interactions, as voice is more likely to be encoded with gender and cultural markers than text is.

Too bot to trot

A “bot” identity strips away obvious gender and cultural biases, at least in theory.

But we could go a step further and strip identity and personality out of our bots altogether.

Why would we do this?

Most bots today exist to solve problems in a streamlined manner.

Imbuing a bot with a personality and rich repertoire of witty retorts can actually have the opposite effect: it can create friction.

Think of the times you’ve interacted with a chatbot online.

The first couple of messages are usually introductory.

Removing this preamble and prompting users to type their issue straight into the box is actually a much simpler approach.

Despite the “chatbot” name, users aren’t there to chat.

They’re there to solve a problem.

A new Turing Test

While personality can help build trust for digital assistants, the same level of trust isn’t required for a one-off interaction.

An ultra-human bot in this context is actually solving the wrong problem: it’s focusing on empathy rather than the needs and context of the interaction itself.

Where convenience is paramount, personality should take a back seat.

Similarly, highly personal or sensitive interactions may also benefit from the use of low-personality bots.

People are more likely to speak freely and honestly if they don’t feel they’re being judged — and machines don’t judge.

Input style matters here as well.

Text can feel more impartial than voice due to its lack of inflection.

While inflection is beneficial in some interactions, in others getting the tonality wrong is arguably worse than having none at all.

A bot with “personality” is also a precise mirror of a company and its values in a way that human staff aren’t.

Its motivations, personality traits and behaviours have all been shaped by design, and it can be easy for consumers to see a bot as an extension of a brand.

This can have reputational consequences if biases or bad behaviour come to light.

A personality-free bot, on the other hand, may be a lower-risk proposition.

Back to basics

As the makers of digital assistants seek ways to make their bots more reflective of today’s world, perhaps the answer is simple.

By letting bots just be bots, we remove the risk of embedding stereotypes and making missteps arising from our own biases.

When we take bots back to basics, all we have to worry about is solving the task at hand — and, as always, the data.

* Paul Barba is Chief Scientist at Lexalytics. He tweets at @PaulBarba_.

This article first appeared at venturebeat.com.
