77 Mass Ave

Mind-Reading Robots

CSAIL system lets humans correct robots’ mistakes by thinking.

Getting robots to do what we want often requires giving them explicit commands for very specific tasks. But new research suggests that we could one day control them in much more intuitive ways.

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University has developed a feedback system that lets people use their thoughts to correct robots instantly when the machines make mistakes. Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect—in the space of 10 to 30 milliseconds—if a person notices an error as a robot performs an object-sorting task.

“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,” says CSAIL director Daniela Rus, a senior author on a paper about the research being presented at the IEEE International Conference on Robotics and Automation in May. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

Past work in robotics controlled by EEG has required training humans to think in a prescribed way that computers can recognize. For example, an operator might have to look at one of two bright light displays, each of which corresponds to a different task for the robot to execute. But the training process and the act of modulating one’s thoughts can be taxing, particularly for people who supervise tasks in navigation or construction that require intense concentration.

Rus’s team wanted to make the experience more natural. To do that, they focused on brain signals called error-related potentials (ErrPs), which are generated whenever our brains notice a mistake. As the robot—in this case, a humanoid robot named Baxter from Rethink Robotics, the company led by former CSAIL director Rodney Brooks—indicates which choice it plans to make in a binary activity, the system uses ErrPs to determine whether the human agrees with the decision.
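The correction loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the CSAIL team's actual pipeline: the feature extraction (mean absolute amplitude of a post-decision EEG window) and the detection threshold are placeholder assumptions standing in for a trained ErrP classifier.

```python
def detect_errp(eeg_window, threshold=2.0):
    """Flag an error-related potential (ErrP) when the mean absolute
    amplitude of the post-decision EEG window exceeds a threshold.
    A real system would use a trained classifier, not this heuristic."""
    mean_amp = sum(abs(sample) for sample in eeg_window) / len(eeg_window)
    return mean_amp > threshold

def corrected_choice(robot_choice, eeg_window):
    """Flip the robot's binary choice (0 or 1) when an ErrP is detected,
    i.e. when the observer's brain signals disagreement."""
    if detect_errp(eeg_window):
        return 1 - robot_choice
    return robot_choice

# A large-amplitude window signals disagreement, so the choice flips;
# a quiet window leaves the robot's decision unchanged.
print(corrected_choice(0, [3.1, -2.8, 3.5, -3.0]))  # -> 1
print(corrected_choice(0, [0.2, -0.1, 0.3, -0.2]))  # -> 0
```

Because the task is binary, a single detected ErrP is enough to recover the intended choice: disagreement implies the other option.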

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way—the machine adapts to you and not the other way around.”


Illustration by Rose Wong