27 September 2023

Inhuman domain: Why judging AI against human values misses the point

Ben Dickson* says trying to describe AI in terms of human characteristics can lead to wrong interpretations and unrealistic expectations.


In a recent essay for The New York Times, famous mathematician Steven Strogatz praised the recently published performance results of AlphaZero, the board game–playing AI developed by DeepMind, a British AI company acquired by Google in 2014.

While his examination of AlphaZero’s findings is an interesting read, some of the conclusions Strogatz draws about the general advances in AI are problematic.

“[AlphaZero] clearly displays a breed of intellect that humans have not seen before, and that we will be mulling over for a long time to come,” Strogatz writes.

Further down, he writes, “By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever.”

Strogatz also stated that AlphaZero “seemed to express insight” and described its gameplay as intuitive, beautiful and romantic.

Strogatz’s praise for AlphaZero’s innovation is understandable.

Its achievements were among the most impressive AI developments of 2017.

The problem with his essay is that he is trying to describe AI and deep learning in terms of human characteristics.

This kind of thinking can lead to wrong interpretations of technological achievements and unrealistic expectations of AI innovations.

Anthropomorphising deep learning

Anthropomorphising AI is an all-too-common problem.

For decades, we have tried to create correspondences between the functionalities of AI and the human brain.

We like to think that in the future, AI will be able to replicate the abstract thinking of the human mind.

We like to think of AI algorithms as beings that can love (Her and Wall-E), hate (HAL 9000), have evil ambitions (The Matrix), make sacrifices for friends (Big Hero 6, Terminator 2) and manifest many other types of human emotions and behaviours.

Those examples all relate to works of fiction.

The audience knows beyond the shadow of a doubt that what they’re seeing and reading about is not even remotely possible.

However, when it comes to contemporary technology, anthropomorphising AI can have more direct consequences.

To be fair, there’s ample reason to humanise machine learning and deep learning.

Deep learning and its underlying technology, artificial neural networks, have been able to solve problems that have been historically challenging for classical approaches to creating software.

Neural networks can also interact with humans in natural language in ways that were previously impossible.

Thanks to advances in deep learning and the use cases it has unlocked, the interactions between humans and computers have changed immensely.

In this regard, AlphaZero has even more merit than many of the other achievements of deep learning.

First, it uses zero input from humans (hence the name) and “learns” board games from scratch by playing against itself.

Second, AlphaZero has, after a fashion, overcome one of the known limits of deep learning.

Most deep learning algorithms can become very good at performing the task they’ve been trained for, but terrible at anything that falls outside their narrow domain.

For instance, a neural network trained to play chess will be of no use in playing Go.

AlphaZero has managed, to a certain degree, to generalise the playing of board games.
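
To see why that limited generalisation is still noteworthy, consider what chess, shogi and Go have in common from a program’s point of view. The sketch below (plain Python with hypothetical names, not DeepMind’s actual code) shows the kind of minimal game interface an AlphaZero-style self-play loop needs; point the same loop at a different implementation of the interface and nothing else has to change.

```python
from abc import ABC, abstractmethod


class Game(ABC):
    """A generic two-player, perfect-information, turn-based game.

    Hypothetical interface: any game that can express itself through
    these few operations (chess, shogi, Go, ...) can be plugged into
    the same self-play loop below.
    """

    @abstractmethod
    def initial_state(self): ...

    @abstractmethod
    def legal_moves(self, state): ...

    @abstractmethod
    def apply(self, state, move): ...

    @abstractmethod
    def winner(self, state):
        """Return +1, -1 or 0 once the game is over, else None."""


def self_play_episode(game, choose_move):
    """Play one game against itself, recording every (state, move) pair.

    `choose_move` stands in for the real decision procedure (a neural
    network guided by tree search); a naive stand-in would simply pick
    a random legal move.
    """
    state, history = game.initial_state(), []
    while game.winner(state) is None:
        move = choose_move(state, game.legal_moves(state))
        history.append((state, move))
        state = game.apply(state, move)
    # The final outcome becomes the training signal for every recorded move.
    return history, game.winner(state)
```

That interchangeability is the whole trick: the generalisation lives in the narrow, shared shape of the problem, not in any human-like flexibility.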

Why it’s wrong to humanise deep learning

But in spite of all its marvels, AlphaZero is nowhere near comparable to the human mind.

There’s nothing intuitive, beautiful and romantic about its gameplay.

As AI expert and venture capitalist Kai-Fu Lee explains in his acclaimed book AI Superpowers: “With all of the advances in machine learning, the truth remains that we are still nowhere near creating AI machines that feel any emotions at all.”

“Can you imagine the elation that comes from beating a world champion at the game you’ve devoted your whole life to mastering?”

“AlphaGo did just that, but it took no pleasure in its success, felt no happiness from winning, and had no desire to hug a loved one after its victory.”

AlphaZero did indeed “discover the principles of chess,” but not in the same way that a human grandmaster would.

The ingenuity of AlphaZero lies in the techniques its creators developed to keep its self-play improving without getting stuck.

But it’s not magic.

It’s the right tuning of neural networks and Monte Carlo tree search.
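
To give a flavour of what that tuning looks like, here is a simplified sketch of the move-selection rule at the heart of AlphaZero-style search. This is not DeepMind’s code; the names are mine and real systems add many refinements, but the PUCT formula it implements is the published core idea: balance what the policy network already favours against what the search hasn’t yet explored.

```python
import math


def select_move(children, c_puct=1.5):
    """Pick the next move to explore during Monte Carlo tree search.

    `children` maps each legal move to a node carrying:
      n     - how many simulations have gone through this move
      q     - the average value those simulations reported
      prior - the probability the policy network assigned to the move

    This is the PUCT selection rule used by AlphaZero-style systems;
    `c_puct` is one of the tuning knobs the article alludes to.
    """
    total_visits = sum(node.n for node in children.values())

    def puct(node):
        # Exploitation (the observed value q) plus an exploration bonus
        # that grows with the network's prior and shrinks as the move
        # gets visited more often.
        return node.q + c_puct * node.prior * math.sqrt(total_visits) / (1 + node.n)

    return max(children, key=lambda move: puct(children[move]))
```

Everything the search “decides” comes out of this kind of arithmetic.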

AlphaZero doesn’t appreciate its wins.

It doesn’t use tactics in the sense that humans do.

It doesn’t have a mental model of the game.

It’s just optimising to produce a certain type of result: a winning move.

And let’s not forget that board games are nowhere near as complex as some of the other domains that neural networks and deep learning algorithms have ventured into.

In board games, players have full knowledge of the entire environment, and they take turns making moves. This is a common denominator of chess, shogi and Go, the three games AlphaZero has mastered.

Basically, you can train the same network architecture on representations of the board states of different games and obtain acceptable results.

The same can’t be said of other areas where deep learning is being applied, such as self-driving cars or even other games such as poker.

None of this means that AlphaZero or other similar applications should be underestimated or devalued.

They’re some of the most important and powerful developments of our age.

But that doesn’t mean we should start humanising deep learning and drawing wrong conclusions.

At the end of his article, Strogatz suggests AlphaZero may possibly evolve “into a more general problem-solving algorithm.”

AlphaZero is a statistical beast, and it can master board games because they can be represented well in statistical terms.

But general problem-solving requires common sense and abstract thinking, characteristics that are still exclusive to the human mind.

The leading voices in AI believe we’re nowhere near creating “general AI” — computers that can match the intellectual and thinking skills of humans.

Then again, when you describe deep learning neural networks in terms that apply to humans, it’s easy to think that they can soon solve any possible problem.

* Ben Dickson is a software engineer and the founder of TechTalks. He tweets at @bendee983.

This article first appeared at bdtechtalks.com.
