Jillian Kramer* says research has shown that humans communicate better with one another after working with robots that show their vulnerable side.
Robots are more prevalent in daily life than ever before.
Digital assistants control smartphone apps, while physical bots teach students in schools, sanitise hospitals and deliver food.
Scientists have long been studying human–robot interactions to learn how these machines can influence individuals’ behaviour, such as altering how well someone completes a task or responds to a robotic request.
But new research shows the presence and actions of robots also affect the way humans relate to other humans — in this case, swaying team members to communicate better.
“While other work has focused on how to more easily integrate robots into teams, we focused instead on how robots might positively shape the way that people react to each other,” says Sarah Sebo, a graduate student at Yale University and co-author of the research, published last month in Proceedings of the National Academy of Sciences USA.
To measure these changes in reactions, researchers at Yale and Cornell University assigned participants to teams of four — consisting of three people and one small humanoid robot — and had them play a collaborative game on Android tablets.
In some groups, the robots were programmed to act “vulnerable”.
These machines performed actions such as apologising for making mistakes, admitting to self-doubt, telling jokes, sharing personal stories about their “life” and talking about how they were “feeling”.
In control groups, the human participants teamed up with robots that made only neutral statements or remained entirely silent.
The researchers monitored how group members’ communication differed depending on which type of robot was on each team.
They found that people working with robots that showed vulnerability spent more time talking with their fellow humans than did those in the control groups.
Subjects with vulnerable robots also divided their conversation more equally among the human members of the team.
These participants later reported perceiving the experience more positively than those in the control groups did.
“We believe the robot’s vulnerable utterances helped the group to feel more comfortable in a task that was designed to have a high level of tension,” Sebo says.
“As a result, people talked more and more over time and viewed the entire interaction more favourably.”
Farshid Amirabdollahian, a professor of human–robot interaction at the University of Hertfordshire in England, who was not involved with this study, says the research provides more evidence that “social behaviour engineering for robots can affect their utility and influences on others”.
In other words, by changing the actions of intelligent machines, developers can alter the behaviour of the people who interact with those machines.
Amirabdollahian adds that future research should look at whether that influence is sustained over a longer period.
“What would happen,” he asks, “if human participants experience the same interesting utterances over a longer period of time — for example, weeks of companionship?”
He also says future research should examine how different categories of robot vulnerability — such as apology, storytelling or humour — influence human-to-human responses and conversation.
But this study’s findings alone might prove useful in real-world situations.
Humans already interact with many digital discussion partners — think Apple’s Siri, Amazon’s Alexa or Google Home — on a daily basis.
Margaret Traeger, a PhD candidate at Yale and study co-author, suggests scientists who develop both online and physically embodied robots should consider their creations’ effect on human-to-human interactions as part of the design process.
Malte Jung, an assistant professor in information science at Cornell and a co-author of the study, says communicative robots could fundamentally change human behaviour for the better.
“Our work shows that technology has the potential to support teams by acting on their social dynamics,” he says.
Instead of merely reducing the amount of work employees do, these machines could make people more efficient, subtly influencing social dynamics to “help teams perform at their best”.
* Jillian Kramer is a freelance journalist. She tweets at @jilliankramer. Her website is jilliankramer.com.
This article first appeared at www.scientificamerican.com.