
Hidden Obstacles for Google’s Self-Driving Cars

Impressive progress hides major limitations of Google’s quest for automated driving.
August 28, 2014

Would you buy a self-driving car that couldn’t drive itself in 99 percent of the country? Or that knew nearly nothing about parking, couldn’t be taken out in snow or heavy rain, and would drive straight over a gaping pothole?

If your answer is yes, then check out the Google Self-Driving Car, model year 2014.

Of course, Google isn’t yet selling its now-famous robotic vehicle and has said that its technology will be thoroughly tested before it ever does. But the car clearly isn’t ready yet, as evidenced by the list of things it can’t currently do—volunteered by Chris Urmson, director of the Google car team.

Watch out: Google’s self-driving car can “see” moving objects like other cars in real time. But only a pre-made map lets it know about the presence of certain stationary objects, like traffic lights.

Google’s cars have safely driven more than 700,000 miles. As a result, “the public seems to think that all of the technology issues are solved,” says Steven Shladover, a researcher at the University of California, Berkeley’s Institute of Transportation Studies. “But that is simply not the case.”

No one knows that better than Urmson. But he says he is optimistic about tackling outstanding challenges and that it’s “going to happen more quickly than many people think.”

Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.

Google’s cars are better at handling some mapping omissions than others. If a new stop light appeared overnight, for example, the car wouldn’t know to obey it. However, the car would slow down or stop if its on-board sensors detected any traffic or obstacles in its path.

Google’s cars can detect and respond to stop signs that aren’t on their map, a feature introduced to deal with temporary signs used at construction sites. But in a complex situation, such as an unmapped four-way stop, the car might fall back to slow, extra-cautious driving to avoid making a mistake. Google says its cars can identify almost all unmapped stop signs and would remain safe if they missed one, because the vehicles are always watching for traffic, pedestrians, and other obstacles.
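The behavior described above, obeying detected stop signs while falling back to extra-cautious driving in complex unmapped situations, can be pictured as simple decision logic. The sketch below is purely illustrative: the names (`DriveMode`, `Detection`, `choose_mode`) and the rules are assumptions of mine, not Google's actual software.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DriveMode(Enum):
    NORMAL = auto()
    CAUTIOUS = auto()  # slow, extra-careful fallback the article describes
    STOP = auto()


@dataclass
class Detection:
    kind: str     # e.g. "stop_sign", "vehicle", "pedestrian"
    mapped: bool  # was this object already in the pre-made map?


def choose_mode(detections: list[Detection],
                at_unmapped_intersection: bool) -> DriveMode:
    """Hypothetical policy: combine map knowledge with live detections.

    Obey any detected stop sign, mapped or not; at a complex unmapped
    intersection, or with traffic nearby, drive extra cautiously.
    """
    if any(d.kind == "stop_sign" for d in detections):
        return DriveMode.STOP
    if at_unmapped_intersection:
        return DriveMode.CAUTIOUS
    if any(d.kind in ("vehicle", "pedestrian") for d in detections):
        return DriveMode.CAUTIOUS
    return DriveMode.NORMAL
```

The point of the fallback is that the planner never has to be *right* about an unmapped intersection, only conservative about it.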

Alberto Broggi, a professor studying autonomous driving at Italy’s Università di Parma, says he worries about how a map-dependent system like Google’s will respond if a route has changed since it was mapped.

Michael Wagner, a Carnegie Mellon robotics researcher studying the transition to autonomous driving, says it is important for Google to be open about what its cars can and cannot do. “This is a very early-stage technology, which makes asking these kinds of questions all the more justified.”

Maps have so far been prepared for only a few thousand miles of roadway, but achieving Google’s vision will require maintaining a constantly updating map of the nation’s millions of miles of roads and driveways. Urmson says Google’s researchers “don’t see any particular roadblocks” to accomplishing that. When a Google car sees a new permanent structure that it wasn’t expecting, such as a light pole or sign, it sends an alert and some data to a team at Google in charge of maintaining the map.

In May, Google announced that all its future cars would be totally driver-free, without even a steering wheel. It cited the difficulties in assuring that a standby human driver would always be ready to take over. The company says it will initially test the new cars with the added controls now required by states that allow testing. But winning approval to test, much less market, a totally robotic car “would be a tremendous leap,” says David Fierro, spokesman for the DMV in Nevada, where Google now runs tests.

Among other unsolved problems, Google’s cars have yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor have they tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson says his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.
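One way to read that behavior is as a deliberately conservative policy: anything the sensors do detect is treated as solid and avoided, while hazards the sensors miss entirely, like potholes and open manholes, never reach the planner at all. The tiny sketch below is a hypothetical illustration of that idea; the function name and return values are my own, not Google's.

```python
from typing import Optional


def action_for(detected_object: Optional[str]) -> str:
    """Hypothetical conservative obstacle policy.

    The sensors can't tell a rock from a crumpled piece of paper, so
    every detected road object is steered around. A pothole or an
    unconed open manhole produces no detection at all, so the car
    simply proceeds, which is exactly the limitation the article notes.
    """
    if detected_object is None:
        # Nothing detected: includes hazards the sensors can't see.
        return "proceed"
    # Rock, paper bag, unknown debris: all handled the same way.
    return "steer_around"
```

The cost of such conservatism is false positives (swerving around harmless litter); the cost of the sensing gap is false negatives the policy can do nothing about.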

Urmson says these sorts of questions might be unresolved simply because engineers haven’t yet gotten to them.

But researchers say the unsolved problems will become increasingly difficult. For example, John Leonard, an MIT expert on autonomous driving, says he wonders about scenarios that may be beyond the capabilities of current sensors, such as making a left turn into a high-speed stream of oncoming traffic.

Challenges notwithstanding, Urmson wants his cars to be ready by the time his 11-year-old son is 16, the legal driving age in California. “It’s my personal deadline,” he says.

This story has been updated after Google provided clarifying information.
