John Timmer* says researchers are improving autonomous vehicles by helping them decide whether fellow drivers are selfish or altruistic.
Imagine you’re trying to make a left turn on to a busy road.
Car after car rolls past, keeping you trapped as your frustration rises.
Finally, a generous driver decelerates enough to create a gap.
A check of the traffic from the opposite direction, a quick bit of acceleration, and you’re successfully merged into traffic.
This same scene plays out across the world countless times a day.
And it’s a situation where inferring both the physics and the motives of your fellow drivers is difficult.
Now imagine throwing autonomous vehicles into the mix.
These are typically limited to evaluating only the physics and to making conservative decisions in situations where information is ambiguous.
A group of computer scientists has now figured out how to improve autonomous vehicle (AV) performance in these circumstances.
The scientists have essentially given their AVs a limited theory of mind, allowing the vehicles to better interpret what the behaviours of their nearby human drivers are telling them.
Mind the theory
Theory of mind comes so easily to us that it’s difficult to recognise how rare it is outside of our species.
We’re easily able to recognise that our fellow humans have minds like our own, and we use that recognition to infer things like the state of their knowledge and their likely motivations.
These inferences are essential to most of our social activities, driving included.
While a friendly wave can make for an unambiguous signal that your fellow driver is offering you space in their lane, we can often make inferences based simply on the behaviour of their car.
And, critically, autonomous vehicles aren’t especially good at this.
In many cases, their own behaviour doesn’t send signals back to other drivers.
A study of accidents involving AVs in California indicated that over half of them involved the AV being rear-ended because a human driver couldn’t figure out what in the world it was doing.
(Volvo, among others, is working to change that.)
It’s unrealistic to think that we’ll give AVs a full-blown theory of mind any time soon.
AIs are simply not that advanced, and it would be excessive for cars, which only have to deal with a limited range of human behaviours.
But a group of researchers at MIT and Delft University of Technology has decided that putting an extremely limited theory of mind in place for certain driving decisions, including turns and merges, should be possible.
The idea behind the researchers’ work, described in a new paper in PNAS, involves a concept called social value orientation, which is a way of measuring how selfish or community-oriented an individual’s actions are.
While there are undoubtedly detailed surveys that can provide a meticulous description of a person’s social value orientation, autonomous vehicles generally won’t have time to survey their fellow drivers.
So the researchers distilled social value orientation into four categories: altruists, who try to maximise the enjoyment of their fellow drivers; prosocial drivers, who try to take actions that allow all other drivers to benefit (which may occasionally involve selfishly flooring it); individualists, who maximise their own driving experience; and competitive drivers, who only care about having a better driving experience than those around them.
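In the social value orientation literature, these categories are commonly parameterised as an angle that blends an agent’s own reward with the rewards of others. The sketch below illustrates that general idea in Python; the specific angles, names, and the utility function are illustrative assumptions for this article, not the paper’s exact parameters.

```python
import math

# Social value orientation (SVO) is often parameterised as an angle: an
# agent's utility is a weighted blend of its own reward and the reward of
# others. The angles below are illustrative values, not the exact ones
# used in the paper.
SVO_ANGLES = {
    "altruistic":    math.radians(90),   # weights only others' reward
    "prosocial":     math.radians(45),   # weights self and others equally
    "individualist": math.radians(0),    # weights only own reward
    "competitive":   math.radians(-45),  # gains utility from others' loss
}

def utility(reward_self: float, reward_other: float, svo_angle: float) -> float:
    """Blend own and others' rewards according to an SVO angle."""
    return math.cos(svo_angle) * reward_self + math.sin(svo_angle) * reward_other
```

Under this formulation, an individualist’s utility reduces to their own reward, while a competitive driver’s utility actually increases when other drivers do worse.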
Value-oriented
The researchers developed a formula that would let them calculate the expected driving trajectory for each of these categories, given the starting positions of the other cars.
The autonomous vehicle was programmed to compare the trajectories of actual drivers to the calculated version and use that to determine which of the four categories the drivers were likely to be in.
Given that classification, the vehicle could then project what their future actions would be.
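The paper’s actual machinery involves game-theoretic trajectory prediction, which isn’t reproduced here. But the classification step itself can be sketched simply: given one predicted trajectory per category (from some model standing in for the paper’s), pick the category whose prediction best matches what was observed. Everything in this sketch, including the mean-squared-error criterion, is an illustrative assumption.

```python
import numpy as np

def classify_driver(observed: np.ndarray, predicted_by_category: dict) -> str:
    """Return the SVO category whose predicted trajectory is closest
    (in mean squared error) to the driver's observed trajectory."""
    errors = {
        category: float(np.mean((observed - predicted) ** 2))
        for category, predicted in predicted_by_category.items()
    }
    return min(errors, key=errors.get)

# Toy usage: two candidate trajectories for one driver. In the real system,
# each prediction would come from the per-category trajectory model.
observed = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.4]])
candidates = {
    "prosocial":     np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]]),
    "individualist": np.array([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0]]),
}
print(classify_driver(observed, candidates))  # -> "prosocial"
```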
As the researchers wrote, “We extend the ability of AVs’ reasoning by incorporating estimates of the other drivers’ personality and driving style from social cues.”
This is substantially different from some game-theory work that’s been done in the area.
That work has assumed that every driver is always maximising their own gain; if altruism emerges, it’s only incidental to this maximisation.
This new work, in contrast, bakes altruistic behaviour into its calculations and recognises that drivers are complicated and may change their tendencies as situations evolve.
In fact, previous studies had indicated that in contexts other than driving, about half of the people tested showed prosocial behaviour, with another 40 per cent being selfish.
With the system in place, the researchers obtained data on vehicle locations and trajectories as drivers merged on to a highway, a situation that often requires the generosity of fellow drivers.
With the social value orientation system in place, the autonomous driver was able to make more accurate predictions of its fellow drivers’ trajectories than it could without it: prediction errors dropped by 25 per cent.
The system also worked on lane changes on crowded freeways, as well as turns into traffic.
Using these evaluations, the researchers could also draw some inferences from the traffic patterns they had.
For example, they found that a highway driver may start out selfishly following the car in front of them, shift to altruistic as they decelerate to allow a driver to merge, then switch right back to a selfish approach.
Similarly, drivers facing a merge on to a freeway typically ended up being competitive – something you see every time a vehicle pulls out and slows down everyone who was stuck in the lane behind it.
While we’re still a long way off from giving autonomous vehicles a general AI or a full theory of mind, the research shows that you can get significant benefits from giving AVs a very limited one.
And it’s a nice demonstration that if we want any autonomous system to integrate with something that’s currently a social activity, then paying attention to what social scientists have figured out about those activities can be incredibly valuable.
* John Timmer is Ars Technica’s Science Editor. He tweets at @j_timmer.
This article first appeared at arstechnica.com/science