
Facebook’s leaked moderation rules show why Big Tech can’t police hate speech

December 28, 2018

Society asked Big Tech to shut down hate speech online. We got exactly what we asked for.

The news: The New York Times’s Max Fisher published extracts from more than 1,400 pages of internal Facebook documents, containing rules for the company’s global army of more than 7,500 content moderators. (Motherboard had previously published some of the same material.)

What’s inside? A sprawling hodgepodge of guidelines, restrictions, and classifications. The rules on hate speech alone “run to 200 jargon-filled, head-spinning pages.” They include details on how to interpret emoji (the same emoji can apparently count as either “bullying” or “praising,” depending on context) and lists of people and political parties to monitor for possible hate speech. The documents show Facebook to be “a far more powerful arbiter of global speech” than it has admitted, Fisher writes.

The problem: The guidelines are not only byzantine; some are out of date or contain errors. They also vary widely depending on how much pressure the company is under: “Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.” Moderators, most of whom work for outsourcing companies and receive minimal training, are expected to make complex judgments in a matter of seconds, processing up to a thousand posts a day under rules that change frequently in response to political events, often relying on Google Translate to read them at all.

The takeaway: This strips away any remaining pretense that Facebook is just a neutral publishing platform. Political judgments permeate every page of these guidelines.

But what did you expect? As Facebook’s former chief security officer, Alex Stamos, told me in October, we’ve demanded that tech platforms police hate speech, and that only gives them more power. “That’s a dangerous path,” Stamos warned. “Five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online.”
