Facial Recognition is Flawed, Let’s Face It

Several studies have found this to be the case, and it's time that changes

Published: June 22, 2020

Moshe Beauford

Companies across the globe are scrambling to clean up, in some cases, racist brand names and technologies that are biased against Black and Brown people as well as women. You may ask why I’ve capitalized Black; I am doing so in solidarity with the National Association of Black Journalists and dozens of other publications and reporters who have chosen to recognize Black as a proper noun. The Quaker Oats Company, which owns ‘Aunt Jemima,’ most recently said it would change the name of its 130-year-old brand because it perpetuated racist stereotypes. Tech giant IBM said it would no longer lend its facial recognition software to mass surveillance and racial profiling. This comes as #BlackLivesMatter protests across the globe continue to grow in response to the killing of George Floyd.

Rep. Jimmy Gomez

Floyd was a 46-year-old Minnesota resident killed in police custody on May 25, 2020, after officer Derek Chauvin pressed his knee against Floyd’s neck for over eight minutes. An autopsy later revealed Floyd died of asphyxiation due to neck and back compression. Chauvin, who is white, was relieved of his duties as a police officer and charged with third-degree murder along with manslaughter. The protests have sparked a sense of hope, as we’ve begun to see action in response to nearly 400 years of systemic oppression of a people who largely built the United States. Kennedy Mitchum, a 22-year-old recent Drake University graduate, convinced Merriam-Webster to amend its definition of racism and other related words to include more context, along with a section on institutional racism.

In a letter to U.S. Congressional leaders, IBM wrote that AI systems used in law enforcement should be tested for bias, a bold move for a company that produces such technologies. Two days later, Amazon said it would bar police from using its ‘Rekognition’ facial recognition platform for one year, and Rep. Jimmy Gomez, a California Democrat, responded to Amazon CEO Jeff Bezos in a written letter obtained by CNBC:

“After two years of formal congressional inquiries — including bicameral letters, House Oversight Committee hearings, and in-person meetings — Amazon has yet to adequately address questions about the dangers its facial recognition technology can pose to privacy and civil rights, the accuracy of the technology, and its disproportionate impact on communities of color”

He further acknowledged that corporations are swift to share expressions of support for the Black Lives Matter movement, especially following public outrage over the killing of George Floyd by police. Nonetheless, IBM Chief Executive Arvind Krishna said the “fight against racism is as urgent as ever.” He then outlined three areas where IBM said it would work with Congress: police reform, the responsible use of technology, and widening skills and educational opportunities, adding in the letter:

“IBM firmly opposes and will not condone the uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms”

What he said next touched a nerve: “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” Whenever issues of race and gender equality come into play, there’s always dialogue and never any real action. Why is that?

Krishna quickly shifted the conversation to the wider use of body cameras on police officers and to using data to analyze how to handle situations with less force. It’s key to note, however, that data analytics is far more essential to what IBM does than its facial recognition products. Via data analytics, IBM develops technology for predictive policing, an offering that’s also been criticized for potential bias against Black and Brown individuals. This is why what IBM’s done is nothing more than optics: another mega-brand riding the wave of ‘Let’s stop racism, but first let’s profit from and facilitate it for years.’

A 2019 National Institute of Standards and Technology (NIST) study on facial recognition found that algorithms falsely matched African-American and Asian faces 10 to 100 times more often than Caucasian faces. Another 2019 study, conducted by the Massachusetts Institute of Technology, found that none of the facial recognition tools from Microsoft, Amazon, or IBM was fully accurate when identifying men and women of color. I reached out to NIST for a statement and was told by biometric algorithm evaluator and biometric performance testing specialist Patrick Grother:

“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.”

He added that while NIST does not explore what might cause these differentials, the data will be valuable to policymakers, developers, and end-users in thinking about the limitations and appropriate use of these algorithms. It seems that even IBM is at least aware of the issue, and part of the fix could be as simple as including more Black and Brown faces, and more women’s faces, in the data used to train artificial intelligence models.

Patrick Grother
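
To make the “demographic differential” Grother describes concrete: auditors commonly compare the false match rate, the share of different-person photo pairs a matcher wrongly accepts, across demographic groups at one fixed decision threshold. The Python sketch below is purely illustrative, with made-up embeddings, identities, and threshold; it is not NIST’s methodology or data.

```python
# Hypothetical sketch: compare false match rate (FMR) across two groups.
# All data is synthetic; group B's embeddings are deliberately clustered
# to mimic a model that separates that group's faces poorly.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def false_match_rate(embeddings, identities, threshold):
    """Fraction of different-identity (impostor) pairs wrongly accepted."""
    false_matches = impostor_pairs = 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if identities[i] != identities[j]:
                impostor_pairs += 1
                if cosine(embeddings[i], embeddings[j]) >= threshold:
                    false_matches += 1
    return false_matches / impostor_pairs

# Group A: well-spread embeddings, so impostor pairs look dissimilar.
emb_a = rng.normal(0, 1, (40, 128))
# Group B: embeddings share a common direction, so impostors look alike.
emb_b = rng.normal(0, 1, 128) + rng.normal(0, 0.5, (40, 128))
ids = rng.integers(0, 20, 40)  # 20 hypothetical identities per group

THRESHOLD = 0.3  # acceptance threshold, assumed tuned elsewhere
print("group A FMR:", round(false_match_rate(emb_a, ids, THRESHOLD), 3))
print("group B FMR:", round(false_match_rate(emb_b, ids, THRESHOLD), 3))
```

A real audit differs in model, data, and threshold, but the core comparison, one error rate per group at the same threshold, is how a “10 to 100 times” gap is expressed.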

Facial recognition tools come with a lot of risk for everyday citizens and can strip them of privacy through mass surveillance. In the COVID-19 era, we’ve already seen these technologies used in countries like China to track the movements of suspected and confirmed Coronavirus cases. There is also the possibility that facial recognition could violate due process, and Amnesty International has since called for a ban on the technology. Such a move could be wise considering technology developed by U.S.-based Clearview AI, which one European privacy group argues is illegal. The software lets law enforcement in over 26 countries match photos of people’s faces against a database that the company boasts holds more than three billion images scraped from social media platforms and other public websites.
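
Clearview AI has not published its internals, but a service of this kind is generally understood to reduce every scraped photo to an embedding vector and answer queries by nearest-neighbor search over that index. The sketch below illustrates the idea only; the database, URLs, dimensions, and scoring are stand-ins, not Clearview’s actual implementation.

```python
# Illustrative sketch of a face-search index: probe embedding in,
# closest database entries (and their source URLs) out. Everything here
# is synthetic; a real system would use a trained face-encoder model
# and an approximate nearest-neighbor index over billions of vectors.
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 512

# Stand-in index of "scraped" photos as unit-length embedding vectors.
database = rng.normal(size=(10_000, EMBED_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)
source_urls = [f"https://example.com/photo/{i}" for i in range(len(database))]

def search(probe_embedding, top_k=5):
    """Return the top_k most similar entries by cosine similarity."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = database @ probe  # cosine similarity via dot product
    best = np.argsort(scores)[::-1][:top_k]
    return [(source_urls[i], float(scores[i])) for i in best]

# A probe that would, in practice, come from encoding an uploaded photo.
probe = rng.normal(size=EMBED_DIM)
for url, score in search(probe):
    print(f"{score:.3f}  {url}")
```

The privacy concern follows directly from this design: once a face is in the index, any future photo of that person becomes a usable query key.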

The European Data Protection Board said, “The use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime.” Hoan Ton-That, Clearview AI CEO, said: “Clearview’s image-search technology is not currently available in the European Union. Nevertheless, we process data-access and data-deletion requests from EU residents and search the public internet like any other search engine.” According to The New York Times, more than 600 law enforcement agencies in the U.S. use Clearview AI technology, a questionable practice, at best.

 
