
Laser Lunar Landing System

NASA is developing optical sensors for safer touchdowns.
November 12, 2008

Spacecraft landing on the Moon and Mars cannot yet choose their landing sites: they touch down wherever their trajectories take them. But the most scientifically rich terrain also tends to be the most hazardous. Now NASA is developing an optical sensor that will, for the first time, allow spacecraft to identify safe landing locations and navigate toward them.

Seeing triple: NASA is developing a new optical-sensor system that sends out three continuous beams of laser light and measures the properties of the return beams to determine a spacecraft’s velocity and position relative to the surface of a celestial body. The top three lenses transmit and receive the beams, allowing for more-accurate and safer lunar landings.

The technology is a light detection and ranging (LIDAR) system that sends three continuous beams of laser light to the surface. It measures the properties of the light that bounces back to determine the velocity and position of the spacecraft relative to the surface, in three dimensions. “It is much more accurate than any other available, similar technology in terms of determining your coordinates relative to the surface, and it is going to be revolutionary in that area,” says Bob Reisse, the project manager for the system at NASA’s Langley Research Center, in Hampton, VA.

Traditional LIDAR uses short pulses of laser light and measures the time it takes them to return to the emitter. Instead, the new system measures the Doppler shift–the change in frequency and wavelength–of the return beam. “The beam has to be more or less continuous for long enough to make the measurement,” says Reisse. “We also need [the beam] to be stable, and continuous lasers are much more stable than continuous bursts.” In addition, whereas traditional LIDAR uses one beam, the new system uses three. “Essentially, it is an entirely different technology,” Reisse says. The combination of the Doppler shift measurement and the added beams allows the spacecraft to calculate its velocity down to the order of centimeters per second, and its position to the order of centimeters, at a range of one to two kilometers from the planet’s surface, says Reisse.
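To see roughly how three line-of-sight Doppler measurements combine into a full velocity estimate, consider the small calculation sketched below. The beam geometry, laser wavelength, and frequency shifts are assumed for illustration only; they are not the ALHAT hardware's actual configuration or processing.

```python
import numpy as np

# Hypothetical beam geometry: three unit vectors pointing from the sensor
# toward the surface, tilted away from the vertical axis (illustrative values).
beam_dirs = np.array([
    [ 0.259,  0.000, -0.966],
    [-0.129,  0.224, -0.966],
    [-0.129, -0.224, -0.966],
])
beam_dirs /= np.linalg.norm(beam_dirs, axis=1, keepdims=True)

wavelength = 1.55e-6  # meters; an assumed near-infrared laser wavelength

# Hypothetical Doppler shifts measured on the three return beams, in Hz.
doppler_shifts = np.array([2.45e6, 2.61e6, 2.58e6])

# For a round trip, the shift along each beam is 2 * v_los / wavelength,
# so each beam yields one line-of-sight speed component.
v_los = doppler_shifts * wavelength / 2.0

# The three line-of-sight speeds are the projections of the velocity vector
# onto the beam directions; solving the 3x3 linear system recovers the full
# 3-D velocity relative to the surface.
velocity = np.linalg.solve(beam_dirs, v_los)

print("velocity vector (m/s):", velocity)
print("descent rate (m/s):", -velocity[2])
```

With three non-coplanar beams the system of projections is always solvable, which is why the extra beams turn a single closing speed into a full three-dimensional velocity.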

Testing, testing: NASA tested its new LIDAR system onboard this helicopter at Dryden Flight Research Center, in the Mojave Desert, north of Los Angeles.

“This will make the system so much faster,” says Donald Figer, a professor at the Rochester Institute of Technology and the director of the Rochester Imaging Detector Laboratory (RIDL). “They will get more data, quicker, and in higher resolution.” Figer’s team is currently working with researchers at MIT’s Lincoln Laboratory to build a new LIDAR system for mapping the planets.

The NASA LIDAR system is part of NASA’s Autonomous Landing and Hazard Avoidance Technology (ALHAT) project, which is developing technologies that will allow spacecraft to land safely on the Moon. Until now, the only precaution against hazardous landings has been to put spacecraft inside big balloons and drop them onto the surface, where they just bounce around until they settle, says Figer. “But if you have astronauts onboard, you might not want to land that way.” Also, future spacecraft will need to land near specific resources that may be located amid hazardous terrain–areas where rocks, boulders, and craters can significantly damage robotics. To determine the safety of a site, NASA will pair the new optical sensor with a flash LIDAR sensor that uses commercial technology. NASA’s Jet Propulsion Laboratory, in Pasadena, CA, will use data from the flash LIDAR to create an image of the terrain. “You want to land in a good place, and this technology will help you do that,” says Figer.
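The terrain image from the flash LIDAR is what lets a lander pick a safe spot out of rocky ground. A minimal sketch of that kind of screening, assuming the sensor delivers a regular grid of surface heights, might flag cells that are too steep or too rough; the thresholds, grid spacing, and helper function here are illustrative, not ALHAT's actual criteria.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hazard_map(elevation, cell_size_m=0.5, max_slope_deg=10.0, max_roughness_m=0.3):
    """Flag grid cells that look too steep or too rough to land on."""
    # Local slope from finite differences of the height grid.
    dz_dy, dz_dx = np.gradient(elevation, cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Roughness as deviation from the mean height of a 3x3 neighborhood.
    roughness = np.abs(elevation - uniform_filter(elevation, size=3, mode="nearest"))

    return (slope_deg > max_slope_deg) | (roughness > max_roughness_m)

# Example: a flat patch with a half-meter, boulder-like bump in the middle.
terrain = np.zeros((50, 50))
terrain[24:27, 24:27] = 0.5
print(hazard_map(terrain).sum(), "of", terrain.size, "cells flagged as hazardous")
```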

The technology was recently tested in a series of flights at NASA’s Dryden Flight Research Center, in the Mojave Desert, north of Los Angeles, and according to Reisse, it performed better than the advanced GPS receiver onboard. But the researchers will continue to test the technology until it is safe enough to guide manned lunar landers.
