If chatbots are going to get better, they might need to offend you
AIs have gotten better at holding a conversation, but tech firms are wary of rolling them out for fear of PR nightmares.
Better bots: The New York Times says recent AI advances helped Microsoft and Facebook build a “new breed” of chatbots that carefully choose how to converse. Microsoft, for instance, built one that picks the most human-sounding sentence from a bunch of contenders to create “precise and familiar” responses.
But: Like Microsoft’s disastrously racist Tay bot before them, these new bots still go wrong. Facebook says 1 in 1,000 of its chatbots’ utterances may be racist, aggressive, or generally unwelcome. That’s almost inevitable when the bots are trained on vast troves of online conversation, which are bound to contain unsavory text.
Why it matters: If the bots are going to keep improving, they must go in front of real users. But tech firms fear PR disasters if the software says the wrong thing. We may need to be more accepting of mistakes if we want the bots to get better.
Deep Dive
Artificial intelligence
Large language models can do jaw-dropping things. But nobody knows exactly why.
And that’s a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step toward controlling more powerful future models.
Google DeepMind’s new generative model makes Super Mario–like games from scratch
Genie learns how to control games by watching hours and hours of video. It could help train next-gen robots too.
What’s next for generative video
OpenAI’s Sora has raised the bar for AI moviemaking. Here are four things to bear in mind as we wrap our heads around what’s coming.
MIT Technology Review