Artificial intelligence

A US government study confirms most face recognition systems are racist

December 20, 2019
A U.S. Customs and Border Protection officer helps a passenger navigate a facial recognition kiosk at the airport. David J. Phillip/AP

Almost 200 face recognition algorithms—a majority in the industry—had worse performance on nonwhite faces, according to a landmark study.

What they tested: The US National Institute of Standards and Technology (NIST) tested every algorithm on two of the most common tasks for face recognition. The first, known as “one-to-one” matching, involves matching a photo of someone to another photo of the same person in a database. This is used to unlock smartphones or check passports, for example. The second, known as “one-to-many” searching, involves determining whether a photo of someone has any match in a database. This is often used by police departments to identify suspects in an investigation.
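To make the two tasks concrete, here is a minimal sketch of how verification and search are typically framed, assuming faces have already been converted to embedding vectors by some model. The function names, the cosine-similarity scoring, and the 0.6 threshold are hypothetical illustrations, not NIST's test harness or any vendor's algorithm.

```python
import numpy as np

# Illustrative sketch only: the embedding model, threshold, and helper names
# below are hypothetical, not those used in the NIST evaluation.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe: np.ndarray, reference: np.ndarray,
                      threshold: float = 0.6) -> bool:
    """One-to-one matching: is the probe photo the same person as the reference?
    Used, for example, to unlock a phone or check a passport. A false positive
    here means accepting two different people as the same person."""
    return cosine_similarity(probe, reference) >= threshold

def search_one_to_many(probe: np.ndarray, gallery: dict[str, np.ndarray],
                       threshold: float = 0.6) -> list[str]:
    """One-to-many searching: return every gallery ID whose similarity to the
    probe clears the threshold. Used, for example, to search a mug-shot
    database. A false positive here flags the wrong person as a candidate."""
    return [person_id for person_id, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Tiny usage example with random vectors standing in for real face features.
rng = np.random.default_rng(0)
alice, bob = rng.normal(size=128), rng.normal(size=128)
print(verify_one_to_one(alice, alice))                           # True
print(search_one_to_many(alice, {"alice": alice, "bob": bob}))   # ['alice']
```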

The agency studied four face data sets currently used in US government applications: mug shots of people living in the US; application photos from people applying for immigration benefits; application photos from people applying for visas; and photos of people as they crossed the border into the US. In total, the data sets included 18.27 million images of 8.49 million people.

What they found: NIST shared some high-level results from the study. The main ones:

1. For one-to-one matching, most systems had a higher rate of false positive matches for Asian and African-American faces than for Caucasian faces, sometimes by a factor of 10 or even 100. In other words, they were more likely to find a match when there wasn’t one.

2. This changed for face recognition algorithms developed in Asian countries, which produced very little difference in false positives between Asian and Caucasian faces.

3. Algorithms developed in the US were all consistently bad at matching Asian, African-American, and Native American faces. Native Americans suffered the highest false positive rates.

4. For one-to-many matching, systems had the worst false positive rates for African-American women, which puts this population at the highest risk of being falsely accused of a crime.

Why this matters: The use of face recognition systems is growing rapidly in law enforcement, border control, and other applications throughout society. While several academic studies have previously shown popular commercial systems to be biased on race and gender, NIST’s study is the most comprehensive evaluation to date and confirms these earlier results. The findings call into question whether these systems should continue to be so widely used.

Next steps: It’s now up to policymakers to figure out the best way to regulate these technologies. NIST also urges face recognition developers to conduct more research into how these biases could be mitigated.

