Tech Giants Are Joining Forces to Battle Terrorists’ Content

A centralized database will allow the likes of Facebook, Twitter, and YouTube to block video posted by violent extremists across multiple sites.
December 6, 2016

Facebook, Twitter, Microsoft, and YouTube are joining forces to remove extremist content from their websites more efficiently.

Savvy social media strategies have helped ISIS and other terrorist organizations disperse video and images online, helping to recruit new members and inspire attacks. As we wrote last year, ISIS in particular has been “using 21st-century technology to promote a medieval ideology involving mass killings, torture, rape, enslavement, and destruction of antiquities.”

Now the companies whose websites are used to promulgate such content are going to work together to try to block it. Using a technique known as hashing—which assigns a unique numerical fingerprint to a media file—they will share records of content that each has banned from its site.
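The scheme can be sketched in a few lines. This is an illustrative example only, not the companies' actual implementation: the names below are hypothetical, and production systems would likely use robust perceptual hashes (so re-encoded copies of a video still match), whereas the cryptographic SHA-256 hash used here matches only byte-identical files.

```python
import hashlib

# Hypothetical shared database of fingerprints of banned media files.
shared_blocklist = set()

def hash_media(data: bytes) -> str:
    """Assign a unique fingerprint to a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def ban(data: bytes) -> None:
    """One company removes a file; its hash joins the shared database."""
    shared_blocklist.add(hash_media(data))

def is_banned(data: bytes) -> bool:
    """Any participating site checks an upload against the database."""
    return hash_media(data) in shared_blocklist

video = b"...raw video bytes..."
ban(video)                        # flagged once, by one company
print(is_banned(video))           # True: now blocked everywhere
print(is_banned(b"other video"))  # False: unknown content passes
```

The point of sharing hashes rather than the videos themselves is that companies can compare fingerprints without redistributing the banned material.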

A report earlier this year claimed that Google and Facebook had started experimenting with the use of hashing to block extremist content. But the new shared database will mean that items banned by one organization will also be removed from the websites of the others. The new database will go into action in early 2017.

The blacklisted content won’t be automatically detected by algorithms. Instead, humans will perform that part of the task in order to ensure that, say, journalistic reporting of terrorism isn’t wrongly taken down.

Hany Farid, a computer scientist at Dartmouth College, warned in the Guardian that the task itself should be carefully monitored. “You want people who have expertise in extremist content making sure it’s up to date,” he explained. “Otherwise you are relying solely on the individual technology companies to do that.”

Still, the news is welcome. Historically, tech companies have resisted calls to police such content, but increasing political pressure has clearly forced them to reevaluate their approaches.

(Read more: The New York Times, The Guardian, “Fighting ISIS Online,” “Facebook and Google May Be Fighting Terrorist Videos with Algorithms,” “What Role Should Silicon Valley Play in Fighting Terrorism?”)

