Artificial intelligence

An AI app that turns you into a movie star has risked the privacy of millions

September 4, 2019
An image of the Chinese AI app ZAO. Da Qing / AP

ZAO, a viral Chinese app that uses AI to swap users’ faces with those of famous actors, is now embroiled in a major privacy controversy.

The news: On Friday, ZAO, a new app from the Chinese social-media developer Momo, instantly went viral on Chinese social media. It lets users upload a single portrait and, within seconds, see their face superimposed onto actors in iconic movie scenes. By Sunday, it had become the most downloaded free entertainment app in Apple’s App Store in China.

AI fakery: It’s the latest—and perhaps most impressive—application of generative adversarial networks, or GANs, the AI algorithms behind deepfakes. While GANs have been used for face-editing and face-swapping before (increasingly so in Hollywood films), ZAO’s use of a single photo, coupled with the speed and seamlessness of its swap, demonstrates how far the state of the art in media fakery has advanced.
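For a concrete sense of the adversarial training behind this kind of fakery, here is a minimal GAN sketch in PyTorch. It illustrates only the general technique, in which a generator learns to fool a discriminator; the architecture, layer sizes, and hyperparameters below are placeholder assumptions for illustration, not ZAO’s actual, unpublished system.

```python
# Minimal GAN training loop in PyTorch -- an illustration of the adversarial
# setup behind deepfake-style generators, NOT ZAO's actual model.
# Network sizes and hyperparameters are arbitrary placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3

# Generator: maps random noise to a fake image (flattened to a vector here).
G = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),
)
# Discriminator: scores an image as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real faces from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = G(noise).detach()
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce images the discriminator labels "real".
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Production face-swapping systems add conditioning on the source photo and identity-preserving losses on top of this basic adversarial game, which is what lets an app like ZAO work from a single portrait.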

The controversy: Within hours of its release, ZAO began to spark privacy concerns, specifically over a clause in the user agreement that gave the developers the right to use all uploaded photos for free, in perpetuity, and to transfer that right to any third party without user permission. Legal experts in China said the clause was illegal, and by Saturday the app’s developer had caved under pressure and removed it. WeChat, China’s top social-networking app, also banned the sharing of ZAO videos and photos.

Déjà vu: The episode replayed a similar controversy over FaceApp, a photo-editing app that went viral in July. That app also used GANs to retouch people’s portraits and had amassed over 150 million photos of faces since its launch. ZAO faced a much quicker and sharper backlash, but it too had likely already been used by millions of people by the time it revised its policy. On one hand, the frequency of such incidents shows how easily a user’s personal data can be co-opted and repurposed beyond their control. On the other, it shows that people have become more sensitive to privacy and are less willing to give it up without a fight.

