Research has found that people trust AI-generated faces more than real faces. Louis B. Rosenberg* says this is terrifying when we factor in the new forms of advertising that the metaverse is set to unleash.
Early last year, a chilling academic study was published by researchers at Lancaster University and UC Berkeley.
Using a sophisticated form of AI known as a GAN (Generative Adversarial Network), they created artificial human faces (i.e. photorealistic fakes) and showed these fakes to hundreds of human subjects along with a mix of real faces.
They discovered that this type of AI technology has become so effective that we humans can no longer tell the difference between real people and virtual people (or “veeple” as I call them).
And that wasn’t their most frightening finding.
You see, they also asked their test subjects to rate the “trustworthiness” of each face and discovered that people rate AI-generated faces as significantly more trustworthy than real faces.
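For readers who want a feel for the underlying technique: a GAN pits two neural networks against each other, a generator that fabricates images and a discriminator that tries to tell fakes from real photos, each improving until the fakes become indistinguishable. Below is a minimal sketch of that adversarial loop in PyTorch; the toy dimensions and random “real” batches stand in for an actual face dataset, and every name here is illustrative rather than the study’s code.

```python
import torch
import torch.nn as nn

# Toy stand-ins: 64-dim "images" and a 16-dim noise vector.
# Real face GANs use convolutional networks and photo datasets instead.
IMG_DIM, NOISE_DIM = 64, 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, IMG_DIM)      # placeholder for a batch of real photos
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # The discriminator learns to label real images 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # The generator learns to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

That adversarial pressure is what produces the realism the Lancaster/Berkeley study measured: the generator only “wins” when its output fools a network trained specifically to catch it.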
As I describe in a recent academic paper, this result makes it extremely likely that advertisers will extensively use AI-generated people in place of human actors and models.
That’s because working with virtual people will be cheaper and faster, and if they’re also perceived as more trustworthy, they’ll be more persuasive too.
This is a troubling direction for print and video ads, but it’s downright terrifying when we look to the new forms of advertising that the metaverse will soon unleash.
As consumers spend more time in virtual and augmented worlds, digital advertising will transform from simple images and videos to AI-driven virtual people that engage us in promotional conversation.
Armed with an expansive database of personal information about our behaviours and interests, these “AI-driven conversational agents” will be profoundly effective advocates for whatever messaging a third party is paying them to deliver.
And if this technology is not regulated, these AI agents will even track our emotions in real time, monitoring our facial expressions and vocal inflections so they can adapt their conversational strategy (i.e. their sales pitch) to maximize their persuasive impact.
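To make that feedback loop concrete, here is a deliberately simplified sketch in plain Python; the affect estimates, thresholds, and strategy names are all invented for illustration, and a real system would sit on top of live facial-expression and voice analysis rather than hard-coded numbers.

```python
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    """Hypothetical real-time read of the user, e.g. from face/voice models."""
    engagement: float   # 0.0 (bored) to 1.0 (captivated)
    skepticism: float   # 0.0 (receptive) to 1.0 (distrustful)

def choose_pitch_strategy(affect: AffectEstimate) -> str:
    """Adapt the sales pitch to the user's monitored emotional state."""
    if affect.skepticism > 0.7:
        return "build_rapport"    # back off the pitch, mirror the user
    if affect.engagement < 0.3:
        return "change_topic"     # re-hook attention with a new angle
    if affect.engagement > 0.7 and affect.skepticism < 0.3:
        return "close_the_sale"   # user is receptive: press the offer
    return "continue_pitch"

# One tick of the loop: sensors -> affect estimate -> conversational move.
print(choose_pitch_strategy(AffectEstimate(engagement=0.8, skepticism=0.1)))
# -> "close_the_sale"
```

The unsettling part is how little machinery the adaptation itself requires once the sensing exists, which is exactly why the unregulated case is worrying.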
While this points to a somewhat dystopian metaverse, these AI-driven promotional avatars would be a legitimate use of virtual people.
But what about the fraudulent uses?
This brings me to the topic of identity theft.
In a recent Microsoft blog post, Executive VP Charlie Bell states that in the metaverse, fraud and phishing attacks could “come from a familiar face — literally — like an avatar that impersonates your coworker.”
I completely agree.
In fact, I worry that the ability to hijack or duplicate avatars could destabilize our sense of identity, leaving us perpetually unsure whether the people we’re talking to are the individuals we know or high-quality fakes.
Accurately replicating the look and sound of a person in the metaverse is often referred to as creating a “digital twin.”
Earlier this year, Jensen Huang, the CEO of NVIDIA, gave a keynote address using a cartoonish digital twin.
He stated that the fidelity of such twins will advance rapidly in the coming years, as will the ability of AI engines to autonomously control your avatar so you can be in multiple places at once.
Yes, digital twins are coming, which is why we need to prepare for what I call “evil twins”: accurate virtual replicas of the look, sound, and mannerisms of you (or people you know and trust) that are used against you for fraudulent purposes.
This form of identity theft will happen in the metaverse, as it’s a straightforward amalgamation of current technologies developed for deepfakes, voice emulation, digital twinning, and AI-driven avatars.
And the swindlers may get quite elaborate.
According to Bell, bad actors could lure you into a fake virtual bank, complete with a fraudulent teller that asks you for your information.
Or fraudsters bent on corporate espionage could invite you into a fake meeting in a conference room that looks just like the virtual conference room you always use.
From there, you could give up confidential information to unknown third parties without even realizing it.
Personally, I suspect imposters will not need to get this elaborate.
After all, encountering a familiar face that looks, sounds, and acts like a person you know is a powerful tool by itself.
This means that metaverse platforms need equally powerful authentication technologies that validate whether we’re interacting with an actual person (or their authorized twin) and not an evil twin that was fraudulently deployed to deceive us.
If platforms do not address this issue early on, the metaverse could collapse under an avalanche of deception and identity theft.
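One plausible building block for such authentication is ordinary public-key cryptography: each user (or their authorized twin) holds a signing key, and the platform verifies a signed challenge before presenting the avatar as verified. Below is a minimal sketch using Ed25519 signatures via the Python cryptography library; the key names and challenge flow are illustrative, not any platform’s actual protocol.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the real user generates a keypair; the platform stores the public key.
user_key = Ed25519PrivateKey.generate()
registered_public_key = user_key.public_key()

def platform_challenge() -> bytes:
    """Fresh random nonce so a captured signature can't be replayed."""
    return os.urandom(32)

def avatar_is_authentic(public_key, challenge: bytes, signature: bytes) -> bool:
    """True only if the avatar's controller holds the registered private key."""
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

# Session start: the avatar must sign the platform's nonce.
nonce = platform_challenge()
print(avatar_is_authentic(registered_public_key, nonce, user_key.sign(nonce)))  # True
print(avatar_is_authentic(registered_public_key, nonce, b"\x00" * 64))          # False: evil twin
```

An evil twin can copy your face and voice, but without the private key it cannot answer the challenge; the hard parts are key management and making the verified/unverified distinction legible inside an immersive world.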
Whether you’re looking forward to the metaverse or not, major platforms are headed our way.
And because the technologies of virtual reality and augmented reality are designed to fool the senses, these platforms will skilfully blur the boundaries between the real and the fabricated.
When used by bad actors, such capabilities will get dangerous fast.
This is why it’s in everyone’s best interest, consumers and corporations alike, to push for tight security.
The alternative will be a metaverse filled with rampant fraud, from which it may never recover.
*Louis B. Rosenberg is CEO of Unanimous AI, a computer scientist, entrepreneur, and prolific inventor. Thirty years ago, while working as a researcher at Stanford University and the Air Force Research Laboratory, Rosenberg developed the first functional augmented reality system.
This article first appeared at venturebeat.com.