The Robot You Want Most Is Far from Reality

The good news: work on house-cleaning robots is underway. The bad news: it’s slow going.

What will it take to get a robot to clean your home so you don’t have to do it?

In June, Elon Musk announced that developing software to turn an off-the-shelf robot into a household cleaner is a primary goal of OpenAI, the artificial-intelligence research lab he cofounded. Such a machine would surely be popular, but building it will be a remarkably daunting robotics challenge. The machine will need to recognize the types of messes in a house, formulate and execute a plan for room-by-room cleaning, and handle unexpected events along the way.

“Cleaning is different from other tasks we’ve thought about in robotics, which [have] typically involved manipulating objects, or moving them place to place,” says Maya Cakmak, an assistant professor of computer science and engineering at the University of Washington. Last year, she earned a three-year, $400,000 grant from the National Robotics Initiative, within the National Science Foundation, to research a cleaning robot. 

She points out that getting a robot to clean would require much more than simply getting it to hold a tool against a surface. “There’s the angle, how much you’re pushing and pressure you’re applying, how fast you move it, how much you move it, and even the orientation [of the tool] relative to the dirt.” A robot would also have to adjust to the curvature of a tiled countertop versus a flat floor, and choose the right tool for each kind of mess: a sponge to absorb spilled juice, a feather duster for shelves, and a stiff brush to loosen soap scum from the shower.

Cakmak is trying to make such things possible. To train robots in her lab, she uses a technique called “programming by demonstration”: the machines learn by imitating a researcher who performs a cleaning technique in view of the robot’s vision system. Nearing the end of the first year of her three-year grant, Cakmak and her grad students are running a robot through many different training sessions with colored aquarium crystals as “test dirt,” using a variety of cleaning attachments, from a broom to a feather duster. She wants the robot to generalize the cleaning motion from the human demonstration, and also to correctly identify the “state of dirt” before and after the cleaning action.
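In its simplest form, the idea behind programming by demonstration can be sketched in a few lines of code: record the demonstrated tool trajectory, then replay that motion relative to a new situation. The sketch below is purely illustrative, with made-up function names and a trivial "generalization" (translating the path to a new start point); Cakmak's actual system learns far richer models of pressure, speed, and tool orientation.

```python
# Illustrative sketch of "programming by demonstration" (not Cakmak's system).
# A demonstrated wiping path is recorded as (x, y) tool positions, then
# generalized to a new mess location by translating the whole motion.

def record_demonstration(waypoints):
    """Store the demonstrated trajectory as a list of (x, y) positions."""
    return list(waypoints)

def generalize(demo, new_start):
    """Replay the demonstrated motion, shifted to begin at new_start."""
    x0, y0 = demo[0]
    dx, dy = new_start[0] - x0, new_start[1] - y0
    return [(x + dx, y + dy) for (x, y) in demo]

# A human demonstrates a short zigzag wipe starting at the origin...
demo = record_demonstration([(0, 0), (1, 0), (1, 1), (2, 1)])

# ...and the robot reuses the same motion on a mess elsewhere on the counter.
plan = generalize(demo, new_start=(5, 5))
print(plan)
```

The hard part, as the researchers note, is everything this sketch leaves out: sensing where the dirt actually is, modulating force along the path, and judging afterward whether the surface is clean.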

Left: A researcher demonstrates a cleaning technique for the robot. Middle and right: The robot learns on its own how to use that cleaning technique with new tools and new messes.

Ilker Yildirim, a research scientist at MIT who works on computational models of cognition and perception, says he is impressed by Cakmak’s demonstrations. But he says a robot that can plan and execute the cleaning of multiple rooms with a variety of tools will require the machine to more fully understand its environment. Creating a machine with that level of autonomous decision-making would amount to significant progress toward a genuine artificial intelligence.

Cakmak thinks household robots can’t become truly autonomous until we redesign our houses to make them more machine-friendly. For instance, long hallways might require markings that a robot can read for geolocation purposes. She also thinks that to become ubiquitous, domestic robots need to be hackable by end users, because everyone’s house is different. To this end, Cakmak is working to simplify the task of programming robots, even for people without technical backgrounds.

Illustration by Rose Wong
