Google’s Algorithms May Feed You Fake News and Opinion as Fact

As Facebook finally rolls out tools to clean up misinformation, another Internet giant must follow its lead.

The fake news baton appears to have momentarily passed from Facebook to Google.

In the wake of the U.S. presidential election, much public ire was directed at Facebook for failing to combat the rising tide of fake news. Whether or not it made any impact on the result of the election—which is incredibly difficult to ascertain—the sustained hand-wringing had an effect: the social network announced it would work with third-party fact-checkers to tag some posts as Disputed News.

This week, it appears to have made good on its promise, with some users reporting that the tool is now in use. The system runs suspicious posts past the fact-checking organizations Snopes, PolitiFact, ABC News, and FactCheck.org for analysis. If at least two of them contest facts within an article, users will see it tagged as “Disputed by Third Party Fact Checkers.” Then they can make up their own minds.

Problem solved? Not quite. Misinformation is still here—and now it seems that Google is the one that will happily allow its algorithms to serve it up to you.

Over the weekend, the Outline pointed out that Google’s Featured Snippet tool doesn’t behave quite as you might hope. The feature is designed to answer a question quickly when you use Google’s search engine or AI voice assistant. But the answers are mined from high-ranking Web pages, and they aren’t necessarily correct.

According to the Outline, snippets have variously: claimed that President Warren Harding was once a member of the Ku Klux Klan (false); suggested that Barack Obama may have planned a coup d’état (still waiting); and blurted out some rather unsavory views on whether or not women are evil. In each case, the answers are taken from websites that many people would not usually turn to for trustworthy information.

On the Web, Google does at least allow you to find out where the information came from and report it if you think it’s incorrect or inappropriate. And, according to the BBC, some of these specific slips have now been fixed.

But Google would be loath to switch off its Featured Snippet tool altogether, particularly on its voice assistant. As our own Tom Simonite has pointed out in the past, the company sees its search abilities as a big differentiating factor between its own Assistant AI and the likes of Apple’s Siri and Amazon’s Alexa. It’s simply better practiced at scouring the Web for answers. But, clearly, when the Internet contains content spanning the full spectrum of veracity, it won’t always get it right.

In fact, Google itself might become home to more of the questionable content that it serves up. YouTube has always been a source of weird and wonderful conspiracy theories about, say, the moon landing having never happened. But last week Buzzfeed pointed out that the video site is increasingly home to what it calls “right-leaning conspiracy and revisionist historical content,” such as suggestions that the Sandy Hook shooting didn’t result in deaths, or that Michelle Obama is, in fact, a man.

The arguments leveled at Facebook over fake news—namely, that if you show people enough false content over and over, they will get confused and potentially start to believe some of it—can equally be leveled at YouTube. As NPR reiterated over the weekend, algorithms decide which video to suggest you watch next on YouTube, so watching just one questionable clip can easily lead you down a rabbit hole of similar content. Next thing you know, you might actually be questioning Michelle Obama’s gender.

To be sure, filtering out content is hugely problematic. There are blurred lines here between opinion and misinformation, and censorship at the cost of free speech is clearly unacceptable. Mark Zuckerberg is all too aware of that fact, having called the issues at stake “complex, both technically and philosophically.”

At this point—and we didn’t necessarily expect to be saying this—Facebook’s solution looks like a good first step. By flagging content, the social network lets users exercise their own healthy skepticism, without having to tackle the thorny business of simply yanking it from feeds altogether.

But in some cases, especially with its featured snippets, Google displays or recites content in a way that makes it seem more like objective fact than algorithmically curated and unverified third-party content. And, as a result, we may be more likely to believe it to be true. We shouldn’t—and the company shouldn’t allow the practice to continue.

(Read more: Outline, Buzzfeed, Gizmodo, NPR, “Facebook Will Try to Outsource a Fix for Its Fake-News Problem,” “Google’s Assistant Is More Ambitious Than Siri and Alexa,” “Facebook’s Fake-News Ad Ban Is Not Enough”)
