Gerd Gigerenzer* says that by emulating the human ability to forget some of the data, psychological AIs will transform algorithmic accuracy.
The human brain has evolved to make predictions and generate explanations in unstable and ill-defined situations.
For instance, to understand a novel situation, the brain generates a single explanation on the fly.
If this explanation is overturned by additional information, a second explanation is generated.
Machine learning typically takes a different path: It treats reasoning as a categorization task with a fixed set of predetermined labels.
It views the world as a fixed space of possibilities, enumerating and weighing them all.
This approach, of course, has achieved notable successes when applied to stable and well-defined situations such as chess or computer games.
When such conditions are absent, however, machines struggle.
Virus epidemics are one such example.
In 2008, Google launched Flu Trends, a web service that aimed to predict flu-related doctor visits using big data.
The project, however, failed to predict the 2009 swine flu pandemic.
After several unsuccessful tweaks to its algorithm, Google finally shuttered the project in 2015.
In such unstable situations, the human brain behaves differently.
Sometimes, it simply forgets.
Instead of getting bogged down by irrelevant data, it relies solely on the most recent information.
This is a feature called intelligent forgetting.
Adopting this approach, an algorithm that relied on a single data point—predicting that next week’s flu-related doctor visits are the same as in the most recent week, for instance—would have reduced Google Flu Trends’ prediction error by half.
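To make the recency heuristic concrete, here is a minimal sketch in Python. The weekly visit counts and the moving-average baseline below are invented for illustration; they are not the actual Google Flu Trends data or model. The sketch only shows the shape of the idea: forecast next week from the single most recent data point and compare the error with a model that leans on a longer history.

```python
# Minimal sketch of the "intelligent forgetting" recency heuristic:
# predict that next week's flu-related doctor visits equal the most
# recent week's, and compare its error with a baseline that averages
# a longer window of past weeks. All numbers are synthetic.

def recency_forecast(history):
    """Forecast the next value as the most recent observation."""
    return history[-1]

def moving_average_forecast(history, window=8):
    """Baseline that averages up to the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def mean_absolute_error(series, forecaster):
    """One-step-ahead evaluation: forecast week t from weeks before t."""
    errors = [
        abs(forecaster(series[:t]) - series[t])
        for t in range(1, len(series))
    ]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    # Hypothetical weekly doctor-visit counts with a sudden regime shift,
    # mimicking an unstable situation such as an unexpected outbreak.
    visits = [120, 130, 125, 140, 150, 145, 160, 155,
              300, 420, 480, 510, 495, 470, 430, 400]

    print("Recency heuristic MAE:",
          round(mean_absolute_error(visits, recency_forecast), 1))
    print("Moving-average MAE:   ",
          round(mean_absolute_error(visits, moving_average_forecast), 1))
```

On data like this, the single-data-point rule tracks the sudden shift almost immediately, while a model anchored to older observations lags behind it, which is the intuition behind the halved prediction error reported above.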
Intelligent forgetting is just one dimension of psychological AI, an approach to machine intelligence that also incorporates other features of human intelligence such as causal reasoning, intuitive psychology, and intuitive physics.
In 2023, this approach to AI will finally be recognized as fundamental for solving ill-defined problems.
Exploring these marvellous features of the evolved human brain will allow us to make machine learning smart.
Indeed, researchers at the Max Planck Institute, Microsoft, Stanford University, and the University of Southampton are already integrating psychology into algorithms to achieve better predictions of human behaviour, from recidivism to consumer purchases.
One feature of psychological AI is that it is explainable.
Until recently, researchers assumed that the more transparent an AI system was, the less accurate its predictions were.
This mirrored the widespread but incorrect belief that complex problems always need complex solutions.
In 2023, this idea will be laid to rest.
As the case of flu predictions illustrates, robust and simple psychological algorithms can often give more accurate predictions than complex algorithms.
Psychological AI opens up a new vision for explainable AI: Instead of trying to explain opaque complex systems, we can check first if psychological AI offers a transparent and equally accurate solution.
In 2023, deep learning on its own will come to be seen as a cul-de-sac.
It will become clearer that, without the help of human psychology, applying this type of machine learning to unstable situations eventually runs up against insurmountable limitations.
We will finally recognize that more computing power makes machines faster, not smarter.
One such high-profile example is self-driving cars.
The vision of building so-called level-5 cars (fully automated vehicles capable of driving safely under any conditions, without human backup) has already hit such a limitation.
Indeed, I predict that in 2023, Elon Musk will retract his assertion that this category of self-driving cars is just around the corner.
Instead, he will refocus his business on creating the much more viable (and interesting) level-4 cars, which are able to drive fully autonomously, without human help, only in restricted areas such as motorways or cities specifically designed for self-driving vehicles.
Widespread adoption of level-4 cars will spur us to redesign our cities, making them more stable and predictable, and barring potential distractions for human drivers, cyclists, and pedestrians.
If a problem is too difficult for a machine, it is we who will have to adapt to its limited abilities.
*Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam, and author of How to Stay Smart in a Smart World.
This article first appeared at wired.co.uk