Here are five ways deepfake-busting technology could backfire

December 16, 2019
[Image: Hitler deepfake | Credit: MS. TECH]

Deepfakes have become a symbol for the end of truth and, to some, a potential tool to swing elections. (Never mind that most deepfakes are still fake porn.) Everyone from the US government to tech giants to startups is trying to develop deepfake-busting technology. But a new report out today from Witness, a nonprofit that studies synthetic media, points out how these tools could go wrong. 

The techniques: Manipulated video is not a new issue, and there are plenty of social problems that even the best deepfake detector can’t fix. (For example, knowing that a video has been edited doesn’t automatically answer the question of whether it should be taken down. What if it’s satire?) That hasn’t prevented companies like Amber Video, Truepic, and eWitness from developing “verified-at-capture” or “controlled-capture” technologies. These use a variety of techniques to sign, geotag, and time-stamp an image or video when it’s created. In theory, this makes it easier to tell if the media has been tampered with. 
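
None of these vendors publishes its exact pipeline, but the core idea is easy to sketch. The minimal Python example below is a hypothetical illustration, not Amber, Truepic, or eWitness's actual API: using the open-source `cryptography` package, it hashes the media at capture, attaches a time-stamp and geotag, and signs the bundle, so that any later edit to the pixels or the metadata breaks verification.

```python
# Hypothetical verified-at-capture sketch. Function names and the
# manifest layout are illustrative assumptions, not any vendor's API.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_at_capture(media: bytes, key: Ed25519PrivateKey,
                    lat: float, lon: float) -> dict:
    """Hash the media, attach a time-stamp and geotag, and sign the bundle."""
    record = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "captured_at": time.time(),  # time-stamp at the moment of capture
        "location": [lat, lon],      # geotag from the device's GPS
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": key.sign(payload).hex()}

def verify_capture(media: bytes, manifest: dict,
                   pub: Ed25519PublicKey) -> bool:
    """Re-hash the media and check both the hash and the signature."""
    record = manifest["record"]
    if hashlib.sha256(media).hexdigest() != record["sha256"]:
        return False  # the pixels changed after capture
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the time-stamp or geotag was altered
```

A quick check with a stand-in photo shows the property the vendors advertise:

```python
key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."  # stand-in for a real capture
manifest = sign_at_capture(photo, key, 52.52, 13.405)
assert verify_capture(photo, manifest, key.public_key())             # untouched
assert not verify_capture(photo + b"x", manifest, key.public_key())  # edited
```

In a real product the private key would live in the phone's secure hardware, which is exactly why the report worries about jailbroken and older devices: once an attacker controls the signing key, "verified" media can be forged at will.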

What’s the problem? The Witness report lays out 14 different ways that these technologies could actually end up being harmful. Some of the key ones: 

—The tools being built could be used to surveil people
—Technical constraints could stop these tools from working in the places where they're most needed (and people using older hardware could be left behind)
—Jailbroken devices won’t be able to capture verifiable material
—Companies could delete users' captured data or refuse to give individuals control over it
—Requiring more verification for media in court could make the legal process longer and more expensive

So what can be done? There's no easy solution to these problems, says Witness program director Sam Gregory. Companies building these technologies must confront these questions and think about the people most likely to be harmed, he adds. It is also possible to build synthetic-media tools themselves more ethically. Technology expert Aviv Ovadya, for instance, has ideas for how to make responsible deepfake tools, and companies can vet which clients are allowed to use their tools and explicitly penalize those who violate their norms. Synthetic media of all kinds are going to become more common, and it will take many different tactics to keep us all safe.
