25 September 2023

Detecting deception: How the US military plans to defeat deepfakes


Will Knight* says the United States military is funding an effort to catch deepfakes and other AI trickery, but technologists admit it could be a losing battle.



Think that AI will help put a stop to fake news?

The US military isn’t so sure.

The US Department of Defense is funding a project that will try to determine whether the increasingly real-looking fake video and audio generated by artificial intelligence (AI) might soon be impossible to distinguish from the real thing — even for another AI system.

This northern summer, under a project funded by the Defense Advanced Research Projects Agency (DARPA), the world’s leading digital forensics experts will gather for an AI fakery contest.

They will compete to generate the most convincing AI-generated fake video, imagery, and audio — and they will also try to develop tools that can catch these counterfeits automatically.

The contest will include so-called “deepfakes” — videos in which one person’s face is stitched on to another person’s body.

Rather predictably, the technology has already been used to generate a number of counterfeit celebrity porn videos.

But the method could also be used to create a clip of a politician saying or doing something outrageous.

DARPA’s technologists are especially concerned about a relatively new AI technique that could make AI fakery almost impossible to spot automatically.

Using what are known as generative adversarial networks, or GANs, it is possible to generate stunningly realistic artificial imagery.

“Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” says David Gunning, the DARPA program manager in charge of the project.

“We don’t know if there’s a limit. It’s unclear.”

A GAN consists of two components.

The first, known as the “generator,” tries to learn the statistical patterns in a dataset, such as a set of images or videos, and then generate convincing synthetic pieces of data.

The second, called the “discriminator,” tries to distinguish between real and fake examples.

Feedback from the discriminator enables the generator to produce ever more realistic examples.

And because a GAN is, by design, trained to outwit another AI system, it is unclear whether any automated detector could reliably catch its output.
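To make the idea concrete, the sketch below shows the generator-and-discriminator loop in miniature. It uses the PyTorch library and a toy one-dimensional dataset (both choices are illustrative assumptions, not part of DARPA's work): the generator learns to mimic the real samples purely from the discriminator's feedback.

```python
# A minimal GAN sketch: the generator learns to imitate samples from a
# simple Gaussian distribution, guided only by the discriminator's verdicts.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0   # "real" samples, roughly N(4, 1.5)
noise = lambda n: torch.randn(n, 8)                    # random input fed to the generator

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # Train the discriminator: label real samples 1 and generated samples 0.
    real, fake = real_data(64), generator(noise(64)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator output 1 for fakes.
    fake = generator(noise(64))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(noise(1000)).mean().item())  # should drift toward 4.0 as training succeeds
```

Scaled up to millions of images and far larger networks, the same feedback loop is what produces photorealistic fake faces.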

GANs are relatively new, but they have taken the machine-learning scene by storm.

They can already be used to dream up very realistic imaginary celebrities or to convincingly modify images by changing a frown into a smile or turning night into day.

Detecting a digital forgery usually involves three steps.

The first is to examine the digital file for signs that two images or videos have been spliced together.

The second is to look at the lighting and other physical properties of the imagery for signs that something is amiss.

The third — which is the hardest to do automatically, and probably the hardest to defeat — is to consider logical inconsistencies, like the wrong weather for the supposed date, or incorrect background for the supposed location.
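As a rough illustration of the first, file-level step, the snippet below implements error level analysis, one common splice-detection heuristic (the choice of technique and of the Pillow library are assumptions for illustration; the article does not describe DARPA's actual tools). Regions pasted in from another image often recompress differently from the rest of a JPEG, which shows up as bright patches once the differences are amplified.

```python
# A minimal error level analysis sketch using Pillow: re-save the image at a
# known JPEG quality, diff it against the original, and amplify the result.
from PIL import Image, ImageChops, ImageEnhance
import io

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences so they are visible to the eye.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# error_level_analysis("suspect_photo.jpg").show()  # hypothetical input file
```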

Walter Scheirer, a digital forensics expert at the University of Notre Dame who is involved with the DARPA project, says that the technology has come a surprisingly long way since the initiative was launched a couple of years ago.

“We are definitely in an arms race,” he says.

While it has long been possible for a skilled graphics expert to produce convincing-looking forgeries, AI will make the technology far more accessible.

“It’s gone from state-sponsored actors and Hollywood to someone on Reddit,” says Hany Farid, a professor at Dartmouth who specialises in digital forensics.

“The urgency we feel now is in protecting democracy.”

Deepfakes use a popular machine-learning technique known as deep learning to automatically incorporate a new face into an existing video.

When large amounts of data are fed into a many-layered, or “deep,” artificial neural network, a computer can learn to perform all sorts of useful tasks, such as highly accurate face recognition.

But the same approach makes malicious video manipulation easier too.
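The original deepfake tools are widely described as using a face-swapping autoencoder: one shared encoder trained on cropped faces of both people, plus a separate decoder for each person. The sketch below, written in PyTorch, is a simplified illustration of that architecture (the layer sizes and names are invented for the example), not the code of any particular tool.

```python
# A rough sketch of the shared-encoder, per-person-decoder deepfake architecture.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder sees faces of both people, so it learns identity-agnostic
        # features such as pose, expression, and lighting.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU())
        # Each person gets their own decoder, which learns to paint that
        # person's face back over whatever the encoder saw.
        self.decoder_a = self._decoder()
        self.decoder_b = self._decoder()

    @staticmethod
    def _decoder():
        return nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)))

    def forward(self, face, decoder):
        return decoder(self.encoder(face))

model = FaceAutoencoder()
frame_face = torch.rand(1, 3, 64, 64)  # a face cropped from one video frame
# Training reconstructs person A with decoder_a and person B with decoder_b;
# at swap time, a frame of person A is pushed through person B's decoder instead.
swapped = model(frame_face, model.decoder_b)
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```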

A tool released online lets anyone with modest technical expertise generate new deepfakes.

And the creator of that tool told Motherboard that an even more user-friendly version is in the works.

The problem, of course, extends far beyond face-swapping.

Experts increasingly say that before long it may be much harder to know if a photo, video, or audio clip has been generated by a machine.

Google has even developed a tool called Duplex that uses an AI-generated voice to make phone calls realistic enough that the person on the other end may not realise they are talking to a machine.

Aviv Ovadya, chief technologist at the University of Michigan’s Center for Social Media Responsibility, worries that the AI technologies now being developed might be used to harm someone’s reputation, influence an election, or worse.

“These technologies can be used in wonderful ways for entertainment, and also lots of very terrifying ways,” Ovadya said at a recent event organised by Bloomberg.

“You already have modified images being used to cause real violence across the developing world,” he said.

“That’s a real and present danger.”

If technology cannot be used to catch fakery and misinformation, there may be a push to use the law instead.

In fact, Malaysia introduced laws against fake news in April 2018.

However, Dartmouth’s Farid says this approach may prove problematic, because truth itself is a slippery subject.

“How do you define fake news? It’s not as easy as you might think,” he says.

“I can crop an image and fundamentally change an image.”

“And what do you do with The Onion?”

* Will Knight is MIT Technology Review’s Senior Editor for Artificial Intelligence.

This article first appeared at www.technologyreview.com.
