Greg Sandoval* says Google employees believe the hiring of the new head of AI is a slap in the face for staff who opposed the company’s involvement with the military.
When Google Cloud Chief Diane Greene announced that Andrew Moore would replace Fei-Fei Li later this year as head of artificial intelligence (AI) for Google Cloud, she mentioned that he was Dean of the School of Computer Science at Carnegie Mellon University and that he had formerly worked at Google.
What Greene didn’t mention was that Moore is also co-chairman of an AI task force created by the Center for a New American Security (CNAS), a US think tank with strong ties to the military.
Moore’s co-chair on the task force is Robert Work, a former US Deputy Secretary of Defence, whom The New York Times has called “the driving force behind the creation of Project Maven”, the US military’s effort to analyse data, such as drone footage, using AI.
Google’s involvement in Project Maven caused a huge backlash inside the company earlier this year, forcing CEO Sundar Pichai to pledge that Google would never work on AI-enhanced weapons.
The hiring of Moore is sure to reignite debate about Google’s involvement in certain markets for artificial intelligence — one of the hottest areas of tech, with massive business potential — and about the relationship the company maintains with the military.
During his tenure at Carnegie Mellon, Moore has often discussed the role of AI in defensive and military applications, including in his 2017 talk on Artificial Intelligence and Global Security.
“We could afford it, if we wanted to and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said.
“I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do.”
“This is now practical.”
CNAS, the organisation that formed the task force Moore co-chairs, focuses on national security issues, and its stated mission is to “develop strong, pragmatic and principled national security and defence policies that promote and protect American interests and values”.
Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.
“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee.
“Googlers want less alignment with the military-industrial complex, not more.”
“This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”
A Google spokesman declined to comment.
A voice of caution on deploying AI in the real world
Moore, who was born in the United Kingdom but has since become a US citizen, has frequently spoken out about the need for caution in taking AI out of the lab and into the real world.
When the CNAS task force was announced in March, Moore stressed the importance of “ensuring that such systems work with humans in a way which empowers the human, not replaces the human, and which keeps ultimate decision authority with the human.”
And on a recent CNAS podcast, he described what he called his “conservative” view on AI in the real world: “Even if I knew that for instance launching a fleet of autonomous vehicles in a city would reduce deaths by 50 per cent, I wouldn’t want to launch it until I came across some formal proofs of correctness, which showed me that it was absolutely not going to be involved in unnecessary deaths.”
Still, he has not shied away from dealing with the military sector.
Moore’s Carnegie Mellon bio mentions past work involving “detection and surveillance of terror threats”, and he’s listed as a fact-finding contributor on a September 2017 Naval Research Advisory report on “Autonomous and Unmanned Systems in the Department of the Navy”.
During the 2017 talk on global security, he mentioned the possibility of incorporating digital personal assistants, such as those used in consumer gadgets made by Google and Amazon, into military applications.
“There is an open question as to whether and when and how we can develop personal assistants for warfighters and commanders to have that full set of information which helps remove the ‘fog of war,’ without getting in their way with too much information,” he said.
Life after ‘Maven’
Google hired Moore to oversee the AI efforts within Google Cloud, the unit that offers Google’s popular cloud-computing services, such as data storage, computing and machine learning.
He replaces Li, who has returned to her professorship at Stanford University.
His hiring comes as Google tries to move past the controversy that erupted when the company’s involvement in Project Maven became known.
Earlier this year, when word leaked that Google was assisting the military to analyse drone footage, thousands of Google employees signed a petition demanding that management end the company’s involvement.
Others refused to work on the project or leaked documents to reporters that proved embarrassing for management.
About a dozen employees resigned in protest.
In June, Google CEO Pichai appeared to yield to their demands.
He released a list of seven principles that would guide the company’s development of AI.
They included never building AI-enhanced weapons and ensuring that AI applications are socially beneficial, safe and free of unfair bias.
The company did not rule out working with the military on services that don’t violate the principles, such as email or data storage.
The feeling of many of those opposed to Maven inside Google was that the company should not be involved in any way with the military.
And for at least some of Google’s staff who participated in the Maven protest — as well as for former employees sympathetic to their cause — Moore’s hiring will raise questions about Google’s commitment to those AI principles.
Moore himself has acknowledged the potential dangers of weaponised AI.
“Just as it’s a good thing that we’re able to do AI so quickly,” he said during the 2017 talk, that same speed also makes AI a “threat.”
“Just as one of our genius grad students can come up with something quickly, so can someone less desirable.”
“And we have to be ready for that in what we’re doing,” Moore said.
* Greg Sandoval is Business Insider’s Google reporter in San Francisco. He tweets at @sandoNET.
This article first appeared at www.businessinsider.com.au.