Policy

The AI hiring industry is under scrutiny—but it’ll be hard to fix

November 7, 2019
Illustration by Rose Wong

The Electronic Privacy Information Center (EPIC) has asked the Federal Trade Commission to investigate HireVue, an AI tool that helps companies figure out which workers to hire. 

What’s HireVue? HireVue is one of a growing number of artificial intelligence tools that companies use to assess job applicants. The algorithm analyzes video interviews, using everything from word choice to facial movements to calculate an “employability score” that is compared against those of other applicants. More than 100 companies have already used it on over a million applicants, according to the Washington Post.

What’s the problem? It’s hard to predict which workers will be successful from signals like facial expressions. Worse, critics worry that the algorithm is trained on limited data and so will be more likely to mark “traditional” applicants (white, male) as more employable. As a result, applicants who deviate from the “traditional”—including people who don’t speak English as a native language or who are disabled—are likely to get lower scores, experts say. Plus, it encourages applicants to game the system by interviewing in a way they think HireVue will like.

What’s next? AI hiring tools are not well regulated, and addressing the problem will be hard for a few reasons. 

—Most companies won’t release their data or explain how their algorithms work, so it’s very difficult to prove any bias. That’s part of the reason there have been no major lawsuits so far. The EPIC complaint, which argues that HireVue’s conduct violates the FTC’s rules against “unfair and deceptive” practices, is a start. But it’s not clear whether anything will come of it: the FTC has received the complaint but has not said whether it will pursue it.

—Other attempts to prevent bias are well-meaning but limited. Earlier this year, Illinois lawmakers passed a law that requires employers to at least tell job seekers that they’ll be using these algorithms, and to get their consent. But that offers little protection: many people are likely to consent simply because they don’t want to lose the opportunity.

—Finally, just like AI in health care or AI in the courtroom, artificial intelligence in hiring will reproduce society’s biases, and that is a complicated problem to untangle. Regulators will need to figure out how much responsibility companies should be expected to shoulder in avoiding the mistakes of a prejudiced society.

