27 September 2023

Get a grip: If AI’s so smart, why can’t it grasp cause and effect?


Will Knight* says deep-learning models can spot patterns that humans can’t, but software still can’t explain why things happen.


Here’s a troubling fact.

A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who’s just learning to walk.

A new experiment shows how difficult it is for even the best artificial intelligence (AI) systems to grasp rudimentary physics and cause and effect.

It also offers a path for building AI systems that can learn why things happen.

The experiment was designed “to push beyond just pattern recognition”, says Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds and Machines, who led the work.

“Big tech companies would love to have systems that can do this kind of thing.”

The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fuelling excitement about the potential of AI.

It involves feeding copious amounts of training data to a large artificial neural network, a loose mathematical approximation of the web of neurons in a brain.

Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition.

But they lack other capabilities that are trivial for humans.

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems.

It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what’s going on.

The questions and answers are labelled, similar to how an AI system learns to recognise a cat by being shown hundreds of images labelled “cat”.

Systems that use advanced machine learning exhibit a big blind spot.

Asked a descriptive question such as “What colour is this object?”, a cutting-edge AI algorithm will get it right more than 90 per cent of the time.

But when posed more complex questions about the scene, such as “What caused the ball to collide with the cube?” or “What would have happened if the objects had not collided?”, the same system answers correctly only about 10 per cent of the time.
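To make that gap concrete, here is a minimal sketch of how accuracy on such a benchmark could be broken down by question type. The data, function names and example answers below are illustrative assumptions, not the researchers’ actual test harness.

```python
# A hypothetical scoring harness: every name and example here is illustrative,
# not taken from the MIT benchmark itself.
from collections import defaultdict

# Each record pairs a question about a scene with its labelled answer and a
# question type, e.g. descriptive ("What colour is this object?") versus
# counterfactual ("What would have happened if the objects had not collided?").
examples = [
    {"question": "What colour is the ball?",
     "answer": "red", "type": "descriptive"},
    {"question": "What caused the ball to collide with the cube?",
     "answer": "the cylinder pushed it", "type": "causal"},
    {"question": "What would have happened if the objects had not collided?",
     "answer": "the cube would have stayed still", "type": "counterfactual"},
]

def accuracy_by_type(answer_question, examples):
    """Score a question-answering model separately on each question type."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        total[ex["type"]] += 1
        if answer_question(ex["question"]) == ex["answer"]:
            correct[ex["type"]] += 1
    return {t: correct[t] / total[t] for t in total}

# Usage: pass in any callable that maps a question string to an answer string.
print(accuracy_by_type(lambda question: "red", examples))
# {'descriptive': 1.0, 'causal': 0.0, 'counterfactual': 0.0}
```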

David Cox, director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI.

“We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same.”

A lack of causal understanding can have real consequences, too.

Industrial robots can increasingly sense nearby objects, in order to grasp or move them.

But they don’t know that hitting something will cause it to fall over or break unless they’ve been specifically programmed — and it’s impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn’t been programmed to understand.

The same is true for a self-driving car.

It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill on to the road.

Causal reasoning would be useful for just about any AI system.

Systems trained on medical information rather than 3D scenes need to understand the cause of disease and the likely result of possible interventions.

Causal reasoning is of growing interest to many prominent figures in AI.

“All of this is driving towards AI systems that can not only learn but also reason,” Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting.

“The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning,” he says.

Besides showing weaknesses in existing AI programs, Tenenbaum and his colleagues built a new kind of AI system capable of learning about cause and effect that scores much higher on their intelligence test.

Their approach combines several AI techniques.

The system uses deep learning to recognise objects in a scene.

The output of this is fed to software that builds a 3D model of the scene and how objects interact.

The approach requires more hand-built components than many machine learning algorithms, and Tenenbaum cautions that it’s brittle and won’t scale well.

But it seems to suggest that a mix of approaches — along with some new ideas — will be needed to take AI forward.
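As a loose illustration of that division of labour, the sketch below shows one way a learned detector’s output could feed a hand-built scene model that answers “what if” questions by simulating the scene forward. Every class and function name here is a hypothetical stand-in, not the published system.

```python
# A rough, heavily simplified sketch of such a hybrid: a learned perception step
# hands structured object descriptions to a hand-built scene model that answers
# "what if" questions by stepping simple physics forward. All names are
# hypothetical; this is not the researchers' published code.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    position: tuple   # (x, y, z) location in the scene
    velocity: tuple   # (vx, vy, vz) estimated from consecutive frames

def detect_objects(video_frames):
    """Stand-in for the deep-learning stage: a trained network would return
    object identities, positions and velocities from raw pixels. Here it just
    returns fixed detections so the sketch runs end to end."""
    return [DetectedObject("ball", (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
            DetectedObject("cube", (5.0, 0.0, 0.0), (0.0, 0.0, 0.0))]

class SceneModel:
    """Hand-built symbolic stage: a crude 3D model of the scene that can be
    rolled forward to answer questions such as 'will these objects collide?'."""

    def will_collide(self, a, b, steps=100, dt=0.1, radius=0.5):
        ax, ay, az = a.position
        bx, by, bz = b.position
        for _ in range(steps):
            ax, ay, az = ax + a.velocity[0] * dt, ay + a.velocity[1] * dt, az + a.velocity[2] * dt
            bx, by, bz = bx + b.velocity[0] * dt, by + b.velocity[1] * dt, bz + b.velocity[2] * dt
            if ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5 < 2 * radius:
                return True
        return False

ball, cube = detect_objects(video_frames=None)
print(SceneModel().will_collide(ball, cube))   # True: the ball rolls into the cube
```

The split mirrors the article’s point: pattern recognition is left to the learned component, while explicit simulation handles the cause-and-effect questions that purely statistical models struggle with.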

“Our minds build causal models and use these models to answer arbitrary queries, while the best AI systems are far from emulating these capabilities,” says Brenden Lake, an assistant professor of psychology and data science at NYU.

Samuel Gershman, an associate professor at Harvard who has collaborated with Tenenbaum on other projects, adds that approaching human intelligence will be impossible for machines without some grasp of causal reasoning.

He points to a well-known medical fact — that women are less likely to die from increased alcohol use than men.

“An AI system with no notion of causality might infer that the way to reduce mortality is to administer sex-change operations to men,” he says.

* Will Knight is a senior writer for WIRED, covering artificial intelligence. He tweets at @willknight.

This article first appeared at www.wired.com.
