Casey Newton* says there is real reason to be worried about AI, but not for the reasons so often depicted in science fiction.
CEOs of artificial intelligence companies usually seek to minimise the threats posed by AI, rather than play them up.
But Clara Labs co-founder and CEO Maran Nelson tells us there is real reason to be worried about AI — and not for the reasons that science fiction has trained us to expect.
Movies like Her and Ex Machina (pictured) depict a near future in which anthropomorphic artificial intelligences manipulate our emotions and even commit violence against us.
But threats like Ex Machina’s Ava will require several technological breakthroughs before they’re even remotely plausible, Nelson says.
And in the meantime, actual state-of-the-art AI — which uses machine learning to make algorithmic predictions — is already causing harm.
“Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information,” Nelson says.
Algorithmic predictions — about which articles you might want to read, or which loans are safe to make — contributed to the spread of misinformation on Facebook and to the 2008 Global Financial Crisis (GFC), Nelson says.
And because algorithms operate invisibly — unlike Ava and other AI characters in fiction — they’re more pernicious.
“It’s important always to give the user greater control and greater visibility than they had before you implemented systems like this,” Nelson says.
And yet, increasingly, AI is designed to make decisions for users without asking them first.
Clara’s approach to AI is innocuous to the point of being dull: it makes a virtual assistant that schedules meetings for people.
(This week, it added a bunch of integrations designed to position it as a tool to aid in hiring.)
But even seemingly simple tasks still routinely trip up AI.
“The more difficult situations that we often interact with are, ‘Next Wednesday would be great — unless you can do in-person, in which case we’ll have to bump it a couple of weeks based on your preference. Happy to come to your offices,’” Nelson says.
Even a state-of-the-art AI can’t process this message with a high degree of confidence — so Clara hires people to check the AI’s work.
It’s a system known as “human in the loop” — and Nelson says it’s essential to building AI that is both powerful and responsible.
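In engineering terms, “human in the loop” usually means routing an AI’s low-confidence outputs to a person before they are acted on. The sketch below, in Python, shows that general pattern; the predict() method, the confidence threshold, and the review queue are assumptions for illustration, not details of Clara’s actual system.

```python
# A minimal sketch of "human in the loop" routing, assuming a hypothetical
# model object with a predict() method and a review queue staffed by people.
# The names, threshold, and Prediction type are illustrative, not Clara's API.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; real systems tune this empirically


@dataclass
class Prediction:
    intent: str        # e.g. "propose_time" or "confirm_meeting"
    confidence: float   # the model's own estimate that its reading is correct


def handle_message(message: str, model, human_review_queue) -> Prediction:
    """Act on a scheduling email automatically only when the model is confident."""
    prediction = model.predict(message)
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: the AI's reading goes through without a person involved.
        return prediction
    # Low confidence (e.g. the "next Wednesday unless..." email above):
    # a person checks, and if necessary corrects, the AI's work before
    # anything is sent back to the user.
    return human_review_queue.review(message, prediction)
```

The design choice is the threshold: set it too low and mistakes reach users unchecked; set it too high and the people in the loop end up reviewing nearly everything.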
Nelson sketches out her vision for a better kind of AI.
“My big idea is that science fiction has really hurt the chances that we’re going to get scared of AI when we should.”
“Almost every time people have played with the idea of an AI [in fiction] and what it will look like, and what it means for it to be scary, it’s been tremendously anthropomorphised.”
“You have this thing — it comes, it walks at you, and it sounds like you’re probably going to die, or it made it very clear that there’s some chance your life is in jeopardy.”
“The thing that scares me the most about that is not the likelihood that in the next five years something like this will happen to us, but the likelihood that it will not.”
The idea of HAL from 2001: A Space Odyssey, she says, is therefore distracting people from the actual threats.
Nelson provides another example in the form of the GFC.
“There you have another situation where there are people who are building risk models about what they can do with money.”
“Then they’re giving those risk models, which are in effect models like the ones that are powering Facebook News Feed and all of these other predictive models, and they’re giving them to bankers.”
“And they’re saying, ‘Hey, bankers, it seems like maybe these securitised loans in the housing [market], it’s going to be fine’.”
“It’s not going to be fine. It’s not at all!”
“They’re dealing with a tremendous amount of uncertainty, and … in both of these cases, as with News Feed, with the securitisation loans, it is the consumers who end up taking the big hit because the corporation itself has no real accountability structure.”
These days companies of all sizes wave AI around as a magic talisman, and the moment they say, “Well, don’t worry, we put AI on this,” we’re all supposed to relax and say, “Oh, well the computers have this handled.”
But, as Nelson points out, these models can be very bad at predicting things.
Or they predict the wrong things.
“When you start to interact with consumers and have a product like ours that is largely AI, there is a real fear factor,” Nelson says.
“What does that mean? What does it mean that I’m giving up or giving away?”
“It’s important always to give the user greater control and greater visibility than they had before you implemented systems like this.”
* Casey Newton is Silicon Valley Editor at The Verge. He tweets at @CaseyNewton and his website is cnewton.org.
This article first appeared at www.theverge.com.