
A Video-Game Algorithm to Solve Online Abuse

How a team of scientists and designers at Riot Games is using machine learning to curb abusive behavior in an online video game.
September 14, 2015

Like many online spaces, League of Legends, the most widely played online video game in the world today, is a breeding ground for abusive language and behavior. Fostered by anonymity and amplified within the heated crucible of a competitive team sport, this conduct has been such a problem for its maker, Riot Games, that the company now employs a dedicated team of scientists and designers to find ways to improve interactions between the game’s players.

During the past few years the team has experimented with a raft of systems and techniques, backed by machine learning, that are designed to monitor communication between players, punish negative behavior, and reward positive behavior. The results have been startling, says Jeffrey Lin, lead designer of social systems at Riot Games. The software has monitored several million cases of suspected abusive behavior. Ninety-two percent of players who have been caught using abusive language against others have not reoffended. Lin, who is a cognitive neuroscientist, believes that the team’s techniques can be applied outside the video-game context. He thinks Riot may have created something of an antidote for online toxicity, regardless of where it occurs.

The project began several years ago when the team introduced a governance system dubbed, in keeping with the game’s fantasy theme, the Tribunal. The game would identify potential cases of abusive language and create a “case file” of the interaction. These files were then presented to the game’s community of players (an estimated 67 million unique users), who were invited to review the in-game chat logs and vote on whether they considered the behavior acceptable. Overall, the system was highly accurate, Lin says. Indeed, 98 percent of the community’s verdicts matched those of the internal team at Riot.
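
Riot has not described the Tribunal’s internal mechanics beyond community voting, but in spirit it was a crowdsourced labeling system: many player votes per case file, checked against an internal review. A minimal sketch of that kind of aggregation, using hypothetical case data rather than anything Riot has published, might look like this:

from collections import Counter

# Hypothetical Tribunal case files: each holds community votes of "punish"
# or "pardon" plus the verdict Riot's internal reviewers reached.
cases = [
    {"votes": ["punish", "punish", "pardon", "punish"], "internal": "punish"},
    {"votes": ["pardon", "pardon", "pardon"], "internal": "pardon"},
    {"votes": ["punish", "pardon", "punish"], "internal": "punish"},
]

def community_verdict(votes):
    """Simple majority vote over the community's judgments."""
    return Counter(votes).most_common(1)[0][0]

matches = sum(community_verdict(c["votes"]) == c["internal"] for c in cases)
print(f"community/internal agreement: {matches / len(cases):.0%}")  # 100% on this toy data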

Several million cases were handled in this somewhat labor-intensive manner. Soon Lin and the team began to see patterns in the language toxic players used. To help optimize the process, they decided to apply machine learning techniques to the data. “It turned out to be extremely successful in segmenting negative and positive language across the 15 official languages that League supports,” says Lin.
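
The article does not say which model Riot uses, but the task it describes is a standard text-classification problem: chat lines labeled by Tribunal verdicts become training data for a classifier. A purely illustrative sketch, assuming scikit-learn and a handful of hypothetical labeled lines, might look like this:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels derived from Tribunal verdicts: 1 = judged toxic, 0 = acceptable.
chat_lines = [
    "gg wp everyone, close game",
    "nice gank, thanks for the backup",
    "uninstall the game, you are worthless",
    "report this idiot jungler, worst player ever",
]
labels = [0, 0, 1, 1]

# Character n-grams are one way to cope with misspellings and multiple
# languages; Riot's actual features are not public.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(chat_lines, labels)

print(model.predict(["you are all garbage, just quit"]))  # likely [1], i.e. flagged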

The new version of the system, run by software rather than by player votes, delivers feedback and imposes consequences for toxic behavior far more efficiently: players now hear back within five minutes, where a verdict previously could take up to a week.

Lin says the system dramatically improved what the company calls “reform rates.” A player who has previously received a penalty, such as a suspension from ranked matches, is considered reformed if he or she avoids subsequent penalties for a period of time. “When we added better feedback to the punishments and included evidence such as chat logs for the punishment, reform rates jumped from 50 percent to 65 percent,” he says. “But when the machine learning system began delivering much faster feedback with the evidence, reform rates spiked to an all-time high of 92 percent.”
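
The article defines the reform rate only loosely: a penalized player counts as reformed if he or she avoids further penalties for some period. A toy calculation under that definition, in which the 90-day window and the data are assumptions rather than Riot’s figures, could look like this:

from datetime import datetime, timedelta

# Hypothetical penalty log: player id -> timestamps of penalties received.
penalty_log = {
    "player_a": [datetime(2015, 3, 1)],                         # no repeat offense
    "player_b": [datetime(2015, 3, 1), datetime(2015, 3, 20)],  # reoffended quickly
    "player_c": [datetime(2015, 3, 15)],                        # no repeat offense
}

def reform_rate(penalty_log, window=timedelta(days=90), now=datetime(2015, 7, 1)):
    """Share of penalized players with no further penalty within `window`
    of their first penalty, counting only players whose window has elapsed."""
    eligible = reformed = 0
    for times in penalty_log.values():
        first = min(times)
        if now - first < window:
            continue  # too early to judge this player
        eligible += 1
        if not any(first < t <= first + window for t in times):
            reformed += 1
    return reformed / eligible if eligible else 0.0

print(f"reform rate: {reform_rate(penalty_log):.0%}")  # 67% on this toy data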

One challenge the system faces is discerning context. As in any team sport, players often build camaraderie through joshing or sarcasm that, in another context, could be deemed unkind or aggressive. A machine usually fails to catch the sarcasm. In fact, that is perhaps the most significant barrier to fighting online abuse with machine learning. “It is pretty fair to say that AIs that understand language perform best when minimal contextual information is necessary to compute the correct response,” explains Chris Dyer, an assistant professor at Carnegie Mellon University who works on natural language processing. “Problems that require integrating a lot of information from the context in which an utterance is made are much harder to solve, and sarcasm is extremely context dependent.”

For now, Lin and his team address the problem with additional checks and balances. Even when the system flags a player for toxic behavior, other signals are consulted to reinforce or veto the verdict. For example, the system attempts to validate every report a player files in order to establish his or her historical “report accuracy.” “Because multiple systems work in conjunction to deliver consequences to players, we’re currently seeing a healthy 1 in 5,000 false-positive rate,” says Lin.
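
The quote above suggests an architecture in which a chat classifier’s verdict only turns into a penalty when corroborating signals agree. The thresholds and the report-accuracy signal in the sketch below are illustrative assumptions, not Riot’s published design:

def should_penalize(toxicity_score, reporter_accuracies,
                    min_score=0.9, min_reporter_accuracy=0.5):
    """Act on the classifier only when it is confident AND at least one report
    comes from a player whose past reports have usually checked out."""
    if toxicity_score < min_score:
        return False  # classifier not confident enough on its own
    return any(acc >= min_reporter_accuracy for acc in reporter_accuracies)

# Confident classifier plus one historically reliable reporter: penalty stands.
print(should_penalize(0.95, [0.8, 0.2]))   # True
# Reports only from players whose reports rarely check out: verdict vetoed.
print(should_penalize(0.95, [0.1, 0.05]))  # False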

To truly curb abuse, Riot designed punishments and disincentives to persuade players to modify their behavior. For example, it may limit chat resources for players who behave abusively, or require players to complete unranked games without incident before being able to play top-ranked games. The company also rewards respectful players with positive reinforcement.

Lin firmly believes that the lessons he and his team have learned from their work have broader significance, and researchers outside the company agree. “One of the crucial insights from the research is that toxic behavior doesn’t necessarily come from terrible people; it comes from regular people having a bad day,” says Justin Reich, a research scientist at Harvard’s Berkman Center who has been studying Riot’s work. “That means that our strategies to address toxic behavior online can’t be targeted just at hardened trolls; they need to account for our collective human tendency to allow the worst of ourselves to emerge under the anonymity of the Internet.”

Nevertheless, Reich believes Lin’s work demonstrates that toxic behavior is not a fixture of the Web, but a problem that can be addressed through a combination of engineering, experimentation, and community engagement. “The challenges we’re facing in League of Legends can be seen on any online game, platform, community, or forum, which is why we believe we’re at a pivotal point in the time line of online communities and societies,” says Lin. “Because of this, we’ve been very open in sharing our data and best practices with the wider industry and hope that other studios and companies take a look at these results and realize that online toxicity isn’t an impossible problem after all.”
