Tegan Jones* says an Australian-designed app that can positively ID a person, even if they don’t have a criminal record, is already being used by law enforcement.
An Australian startup has developed an app that can identify your name and address with a single photo.
It’s already being used by US law enforcement despite a lack of regulation and independent testing.
Until now, Clearview AI wasn’t a household name.
But a report by The New York Times early last week revealed it has been used by hundreds of law enforcement agencies in the US — including police departments and the Federal Bureau of Investigation (FBI) — for a few years.
The app was created by Australian Hoan Ton-That, a developer who previously created an unsuccessful iPhone game and an app that added US President Donald Trump’s hair to users’ photos.
Clearview AI allows the user to compare a photo with a three-billion-strong database of images scraped from Facebook, YouTube, Venmo and other social media sites.
This database is made up of regular people and is designed to positively ID a person, even if they don’t have a criminal record.
Law enforcement in the US revealed that Clearview AI has been used to solve cases from petty theft to murder.
Part of its success is that it doesn’t require a perfect image to identify a suspect.
“With Clearview, you can use photos that aren’t perfect,” Detective Sergeant Nick Ferrara told The New York Times.
“A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.”
While this may seem positive, it raises the question of accuracy and whether the app could lead to false convictions.
Furthermore, defendants don’t have to be told about being identified by the app as long as it wasn’t the only evidence used for their arrest.
Ton-That admitted the system isn’t perfect, saying the app works up to 75 per cent of the time.
Most of the images in the database are taken at eye level, whereas security cameras tend to be mounted on walls and ceilings.
It is not publicly known how often false matches come up due to the lack of independent testing.
The company says it only uses publicly available images, such as public Facebook profiles, but changing your privacy settings or deleting images won’t necessarily stop photos of you from ending up in its system.
There is currently no way to remove your photos from the Clearview database if your Facebook profile has already been scraped.
The company is apparently working on a tool to allow people to request image removal.
Clearview AI also has control over the image search results.
During research for her article, New York Times reporter Kashmir Hill initially saw images of herself come up in the system.
The results later disappeared.
“After the company realised I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches,” said Hill in the article.
“When asked about this, Ton-That laughed and called it a ‘software bug’.”
The images were later restored when the company began talking to Hill for the article.
Some of the images were from over 10 years ago and others were pictures the author had never seen before.
The photos still positively identified her when her nose and lower face were covered.
While Clearview AI isn’t publicly available, its potential for stalking is concerning.
The New York Times analysed the app’s code and found language that would allow it to be paired with augmented reality (AR) glasses, which could identify the people a wearer sees in real time.
Despite the lack of independent testing, and with facial recognition legislation in the US still in its fledgling stages, Clearview AI has reportedly been used by over 600 law enforcement agencies over the past year alone without public knowledge.
* Tegan Jones is an editor at Gizmodo Australia. She tweets at @Tegan_Writes.
This article first appeared at www.gizmodo.com.au.