27 September 2023

Tech companies close the door on facial recognition

Tina Casey* says IBM’s move to halt development of facial recognition technology may have been well-intentioned, but the cat is already out of the bag.


IBM set an example for other leading tech brands earlier this week when CEO Arvind Krishna announced that the company would drop its facial recognition business over the issue of race and gender bias.

However, scores of other companies are active in the field aside from the well-known brands regularly making headlines.

Unless legislators act, IBM’s announcement will have little impact on the use of facial recognition technology by both public law enforcement agencies and private companies.

Racial bias in facial recognition technology

Modern facial recognition technology is relatively new, and its use is almost entirely unregulated, despite its potential for negative, life-changing impacts on individuals as well as broader implications for civil liberties.

In earlier, pre-Internet days, facial recognition researchers had to undertake the tedious process of collecting informed consent from people whose photographs they used.

As a result, earlier datasets consisted of just a few hundred people.

To a growing degree, research on facial recognition technology now involves gathering millions of photographs from social media sites and other online sources.

It would seem logical that a much larger dataset would yield more accurate results.

However, last year the National Institute of Standards and Technology (NIST) surveyed 189 facial recognition algorithms from 99 different vendors representing a majority of companies in the field.

NIST found both race- and gender-based inaccuracies, with the most significant error rate having an impact on Black women.

In everyday use, a recognition error can be a mere inconvenience: your face failing to unlock your phone, for example.

However, in law enforcement, security and immigration applications, a false positive could change the trajectory of a wrongly identified person’s life, targeting them for additional surveillance and investigation, arrest, legal proceedings, incarceration, family separation or deportation.
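To make that distinction concrete, here is a minimal sketch of how a false match rate might be tallied per demographic group for a face verification system, assuming the system outputs a similarity score for each pair of images. The data, group labels and threshold below are invented for illustration only; they do not reflect NIST’s methodology, data or any vendor’s system.

```python
# Hypothetical illustration: comparing false positive (false match) rates
# across demographic groups for a face verification system.
# All values are invented; this does not reproduce NIST's study.

from collections import defaultdict

# Each record: (similarity_score, same_person, demographic_group)
# The similarity score is assumed to come from some face-matching model.
comparisons = [
    (0.91, True,  "group_a"),
    (0.42, False, "group_a"),
    (0.88, False, "group_b"),   # impostor pair scored above the threshold
    (0.95, True,  "group_b"),
    (0.35, False, "group_b"),
]

THRESHOLD = 0.80  # scores at or above this count as a "match"


def false_match_rates(pairs, threshold):
    """Return the false positive (false match) rate per demographic group.

    A false positive is an impostor pair (same_person == False) whose
    similarity score clears the decision threshold.
    """
    impostor_total = defaultdict(int)
    impostor_matched = defaultdict(int)
    for score, same_person, group in pairs:
        if not same_person:              # only impostor pairs count toward FPR
            impostor_total[group] += 1
            if score >= threshold:
                impostor_matched[group] += 1
    return {
        group: impostor_matched[group] / impostor_total[group]
        for group in impostor_total
    }


if __name__ == "__main__":
    for group, rate in false_match_rates(comparisons, THRESHOLD).items():
        print(f"{group}: false match rate = {rate:.2f}")
```

At the same decision threshold, a markedly higher false match rate for one group than another is precisely the kind of demographic differential the NIST study measured.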

IBM tried to fix the bias problem in 2019

The NIST researchers were careful to note that not all of the 189 algorithms in their study showed significant bias. “Different algorithms perform differently” is the takeaway from their study.

Nevertheless, IBM has been among those acknowledging an overall bias problem in the field.

In an in-depth report last year, Olivia Solon of NBC News described how IBM scoured a database released by the photo-sharing site Flickr to collect a set of approximately 1 million images, which it then pitched to researchers as a bias-reducing tool.

IBM’s collection was only one of many used by facial recognition researchers in academia and the private sector.

Nevertheless, the backlash was furious as IBM’s high profile underscored broad issues of privacy, civil rights, consent and bias.

As Solon noted, the episode also drew attention to IBM’s development of other tools.

“Civil liberty advocates and tech ethics researchers have…questioned the motives of IBM, which has a history of selling surveillance tools that have been criticized for infringing on civil liberties,” she wrote, referencing the company’s camera and video tools that identify skin tone and ethnicity.

In 2020, IBM attempts another fix

That negative attention may have sensitized IBM to the relationship between racial bias and broader issues of civil rights, personal privacy and law enforcement overreach, issues brought to the forefront by the waves of protest that followed the killing of George Floyd by a Minneapolis police officer during an arrest over an allegedly counterfeit $20 bill.

Last year’s experience may have also prompted IBM to assess its business through the lens of brand reputation and its ability to recruit top talent as a more diverse generation of young people enters the job market.

In addition, IBM probably saw the writing on the wall.

Public pressure for a legislative fix has been building in recent years, as startups promoted their technology willy-nilly to immigration and law enforcement authorities as well as private-sector companies.

From a brand reputation perspective, Krishna’s announcement was a pre-emptive move aimed at positioning IBM at the forefront of legislative reform.

In fact, when Krishna announced that IBM would drop facial recognition from its corporate profile, he also wrote a 900-word letter to Congress that detailed concrete action steps for reform.

Krishna pledged that IBM would work with lawmakers in support of federal regulations on police conduct, including the linkage of federal aid to transparency in reporting use of force.

He also advocated for a national policy on the use of technology in law enforcement, including body cameras and data analysis, aimed at increasing transparency and creating a system of accountability.

Brand reputation and the next generation of top talent

Above all, Krishna zeroed in on the talent recruitment issue.

“Expanding opportunity: training and education for in-demand skills is key to expanding economic opportunity for communities of colour,” he wrote.

“At IBM, we see an urgent demand for what we call ‘new collar’ jobs, which require specialised skills but not necessarily a traditional 4-year college degree. Such jobs can still be found today in fast-growing fields from cybersecurity to cloud computing,” he added.

As Krishna underscored in the letter, IBM has already developed a school model to address that very concern.

Called P-TECH, the model has enrolled approximately 150,000 students globally over the past decade with a focus on communities of colour.

P-TECH provides public high school students with a pathway to earning an associate’s degree, leading to employment in a specialised field, while avoiding student debt.

IBM has already begun recruiting employees through P-TECH, and Krishna urged Congress to scale up the model nationally.

As a corollary, Krishna also urged Congress to expand the Pell Grant program to include funding for specialised training in non-college, ‘new collar’ jobs, and to make that funding available for incarcerated persons.

Krishna concluded the letter by firmly cementing IBM’s brand reputation.

“IBM wants to help advance this nation’s pursuit of equity and justice and we stand ready to work with you to advance policies that will help unify our country and advance our national purpose,” he wrote.

The prospects for federal action are slim to none, but Krishna succeeded in staking out the high ground for IBM on the media stage.

As for other high-profile tech companies, it is difficult to see the technology giants dropping out of this space.

So far, leading technology companies — including Apple, Google, Facebook and others — appear to be satisfied with throwing dollars at the problem rather than engineering any meaningful change into their business.

But the tide could change rapidly. After all, just days after its employees protested its business ties with law enforcement agencies, Microsoft announced yesterday that, for now, it would not sell its facial recognition technology to police departments.

*Tina Casey writes for TriplePundit and other websites, with a focus on military, government and corporate sustainability, clean tech research and emerging energy technologies.

This article first appeared at triplepundit.com.
