
Fist-Sized Laser Scanner to Make Autonomous Cars Less Ugly

A chip that steers lasers could be crucial to the future of transport.
September 29, 2016

They may be smart, but self-driving cars are not sleek. Alphabet’s bubble car design sports a prominent black dome on its roof that looks like an oversized flashing light. Uber prototypes offering rides in Pittsburgh are crowned with a spinning silver cylinder about the size of a coffee can.

Those carbuncles are lidars, which use lasers to map a vehicle's surroundings in 3-D, and they provide the detail needed to handle a wide variety of driving conditions.

A production line being set up in Attleboro, Massachusetts, is intended to make those crucial components compact enough to fit inside the outlines of a conventional vehicle. Early next year it will start producing fist-sized lidars that should also see farther and more clearly than the spinning eyesore kind.

“You’ll never know that they’re even in the vehicle,” says Louay Eldada, CEO of startup Quanergy, which invented the new design and turned to sensor company Sensata to manufacture it. Sensors can be hidden in places such as behind a car’s grille, or inside the rearview or side mirrors, says Eldada.

Quanergy plans to price its compact lidar at $250. You’ll need three of them, about $750 per car, to match the 360-degree view of the bulky sensors atop Alphabet and Uber vehicles, but sensors of that type cost thousands or tens of thousands of dollars.

Eldada says his smaller sensors also offer greater range—200 meters compared to 120 meters—and resolution. “I can see what you’re doing with your fingers at 100 meters,” he says.

Daimler, the German auto giant, and Delphi, a leading auto parts supplier, are experimenting with adding four of the compact sensors to a vehicle to provide a robust all-around view of its surroundings. Both companies have invested in Quanergy, which has received more than $150 million in funding since it was founded in 2012.

Existing lidars are bulky because they use spinning mirrors to direct the laser beams they bounce off the world. Quanergy’s lidars don’t have any moving parts—instead they steer their lasers using a chip with an array of a million tiny antennas.
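As a rough illustration of the general principle of phased-array beam steering, and not of Quanergy’s actual chip design, the sketch below shows how a beam can be aimed with no moving parts: each antenna element is driven with a slightly different phase, and the linear phase gradient across the array sets the beam’s direction. The wavelength, element spacing, and array size are assumed values chosen for clarity.

```python
import numpy as np

# Illustrative parameters (assumed, not Quanergy's published specs):
# 905 nm is a common automotive-lidar wavelength; spacing is half a wavelength.
WAVELENGTH_M = 905e-9
ELEMENT_SPACING_M = WAVELENGTH_M / 2
NUM_ELEMENTS = 64  # a small 1-D slice of an array, for clarity

def steering_phases(theta_deg: float) -> np.ndarray:
    """Per-element phase shifts (radians) that tilt the emitted wavefront
    so the combined beam points theta_deg off the array's normal."""
    theta = np.radians(theta_deg)
    n = np.arange(NUM_ELEMENTS)
    # Each element is delayed a little more than its neighbor; the constant
    # phase step across the array determines the beam direction.
    return 2 * np.pi * n * ELEMENT_SPACING_M * np.sin(theta) / WAVELENGTH_M

# Sweep the beam across a field of view purely by reprogramming phases;
# no mirror or motor ever moves.
for angle in (-20.0, 0.0, 20.0):
    phases = steering_phases(angle)
    print(f"{angle:+.0f} deg -> phase step per element: {phases[1]:.3f} rad")
```

Because the beam direction is set electronically, repointing it is as fast as updating the phase values, which is what makes the adaptive scanning described below possible.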

Conventional lidar designs have become significantly cheaper and smaller in recent years, says Sravan Puttagunta, CEO of Civil Maps, a startup working on mapping for autonomous vehicles. But solid-state designs like Quanergy’s have advantages beyond just size and cost.

For example, they can direct their laser to track or scan particular objects or areas in more detail. “This is important because you can focus on areas of interest, the way we do when driving,” says Puttagunta. It could also help autonomous cars check and update the maps they use to locate themselves, he says.
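To make that idea concrete, here is a minimal sketch of region-of-interest scanning, assuming a hypothetical sensor interface; the function name and parameters are illustrative, not a real lidar API. A mechanical unit sweeps every frame with the same uniform pattern, whereas a solid-state design can re-aim point by point, so a chosen fraction of a frame’s measurements can be packed into a narrow angular window (say, around a pedestrian) while the rest still cover the full field of view.

```python
import numpy as np

def scan_angles(total_points: int,
                fov_deg: tuple[float, float] = (-60.0, 60.0),
                roi_deg: tuple[float, float] | None = None,
                roi_fraction: float = 0.5) -> np.ndarray:
    """Return the azimuth angles (degrees) to measure in one frame.

    With no region of interest, points are spread uniformly across the
    field of view. With one, roi_fraction of the points is concentrated
    inside the roi_deg window and the remainder covers the full sweep.
    """
    if roi_deg is None:
        return np.linspace(*fov_deg, total_points)
    n_roi = int(total_points * roi_fraction)
    coarse = np.linspace(*fov_deg, total_points - n_roi)  # full-width coverage
    fine = np.linspace(*roi_deg, n_roi)                   # dense patch on the ROI
    return np.sort(np.concatenate([coarse, fine]))

# Example: half of a 1,000-point frame concentrated on a 10-degree window.
angles = scan_angles(total_points=1000, roi_deg=(12.0, 22.0))
```

The same mechanism could revisit a patch of the scene more often than the rest, which is how denser detail on objects of interest, or checks against a stored map, could be obtained without changing the hardware.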

Not having moving parts should also make lidars less likely to break down, something important for production vehicles expected to last for many years. Eldada says he is in talks with many carmakers besides Daimler, but declines to name them, citing nondisclosure agreements. He expects to see his sensors on many prototype vehicles once they become available next year, and in a first commercial vehicle in 2018.

Quanergy will also produce an even smaller lidar next year, approximately the size of two stacked matchboxes and priced at $100, aimed at drones and home security systems.
