In Buenos Aires, the first known facial recognition system of its kind is hunting down minors who appear in a national database of alleged offenders, writes Karen Hao*.
In a national database in Argentina, tens of thousands of entries detail the names, birthdays, and national IDs of people suspected of crimes.
The database, known as the Consulta Nacional de Rebeldías y Capturas (National Register of Fugitives and Arrests), or CONARC, began in 2009 as part of an effort to improve law enforcement for serious crimes.
But there are several things off about CONARC.
For one, it’s a plain-text spreadsheet file without password protection, which can be readily found via Google Search and downloaded by anyone.
For another, many of the alleged crimes, like petty theft, are not that serious—while others aren’t specified at all.
Most alarming, however, is the age of the youngest alleged offender, identified only as M.G., who is cited for “crimes against persons (malicious)—serious injuries.” M.G. was apparently born on October 17, 2016, which means he’s a week shy of four years old.
Now a new investigation from Human Rights Watch has found that not only are children regularly added to CONARC, but the database also powers a live facial recognition system in Buenos Aires deployed by the city government.
That likely makes it the first known system of its kind to be used to hunt down kids suspected of criminal activity.
“It’s completely outrageous,” says Hye Jung Han, a children’s rights advocate at Human Rights Watch, who led the research.
Buenos Aires first began trialing live facial recognition on April 24, 2019. Implemented without any public consultation, the system sparked immediate resistance.
In October, a national civil rights organisation filed a lawsuit to challenge it. In response, the government drafted a new bill, now making its way through the legislature, that would legalise facial recognition in public spaces.
The system was designed to link to CONARC from the beginning.
While CONARC itself doesn’t contain any photos of its alleged offenders, it’s combined with photo IDs from the national registry.
The software uses suspects’ headshots to scan for real-time matches via the city’s subway cameras.
Once the system flags a person, it alerts the police to make an arrest.
The system has since led to numerous false arrests (links in Spanish), which the police have no established protocol for handling.
One man who was mistakenly identified was detained for six days and was about to be transferred to a maximum-security prison before he finally cleared up his identity.
Another was told he should expect to be repeatedly flagged in the future even though he’d proved he wasn’t who the police were looking for.
To help resolve the confusion, the police gave him a pass to show to the next officer who might stop him.
“There seems to be no mechanism to be able to correct mistakes in either the algorithm or the database,” Han says.
“That is a signal to us that here’s a government that has procured a technology that it doesn’t understand very well in terms of all the technical and human rights implications.”
All this is already deeply concerning, but adding children to the equation makes matters that much worse.
Though the government has publicly denied (link in Spanish) that CONARC includes minors, Human Rights Watch found at least 166 children listed in various versions of the database between May 2017 and May 2020.
Unlike M.G., most of them are identified by full name, which is illegal.
Under international human rights law, children accused of a crime must have their privacy protected throughout the proceedings.
Also unlike M.G., most were 16 or 17 at the time of entry, though, mysteriously, there have been a few one- to three-year-olds.
The ages aren’t the only apparent errors in the children’s entries.
There are blatant typos, conflicting details, and sometimes multiple national IDs listed for the same individual.
Because kids also physically change faster than adults, their photo IDs are more at risk of being outdated.
On top of this, facial recognition systems are notoriously bad at handling children, even under ideal laboratory conditions, because they're trained and tested primarily on adults.
The Buenos Aires system is no different. According to official documents (link in Spanish), it was tested only on the adult faces of city government employees before procurement.
Prior US government tests of the specific algorithm it is believed to be using also suggest it performs roughly six times worse on kids (ages 10 to 16) than on adults (ages 24 to 40).
All these factors put kids at heightened risk of being misidentified and falsely arrested.
This could create an unwarranted criminal record, with potentially long-lasting repercussions for their education and employment opportunities.
It might also have an impact on their behaviour.
“The argument that facial recognition produces a chilling effect on the freedom of expression is more amplified for kids,” says Han.
“You can just imagine a child [who has been falsely arrested] would be extremely self-censoring or careful about how they behave in public.
“And it’s still early to try and figure out the long-term psychological impacts—how it might shape their world view and mindset as well.”
While Buenos Aires is the first city Han has identified using live facial recognition to track kids, she worries that many other examples are hidden from view.
In January, London announced that it would integrate live facial recognition into its policing operations.
Within days, Moscow said it had rolled out a similar system across the city.
Though it’s not yet known whether these systems are actively trying to match children, kids are already being affected.
In the 2020 documentary Coded Bias, a boy is falsely detained by the London police after live facial recognition mistakes him for someone else.
It’s unclear whether the police were indeed looking for a minor or someone older.
Even those who are not detained are losing their right to privacy, says Han: “There’s all the kids who are passing in front of a facial-recognition-enabled camera just to access the subway system.”
It’s often easy to forget in debates about these systems that children need special consideration. But that’s not the only reason for concern, Han adds.
“The fact that these kids would be under that kind of invasive surveillance—the full human rights and societal implications of this technology are still unknown.”
Put another way: what’s bad for kids is ultimately bad for everyone.
*Karen Hao is the artificial intelligence senior reporter for MIT Technology Review.
This article first appeared at technologyreview.com