77 Mass Ave

Wireless from water to air

New system lets airborne receivers pick up underwater sonar signals from the water’s surface.
October 23, 2018
Illustration of submarine transmitting waves to ocean surface. A jet is flying overhead. (Illustration: George Wylesol)

MIT Media Lab researchers have taken a step toward solving a long-standing challenge with wireless communication: direct data transmission between underwater and airborne devices.

Today, underwater sensors cannot share data with those on land, because both use wireless signals that work only in their respective mediums. Radio signals traveling through air die rapidly in water; acoustic signals, or sonar, sent by underwater devices mostly reflect off the surface without breaking through. Buoys have been designed to pick up sonar waves, process the data, and shoot radio signals to airborne receivers. But they can drift away and get lost, and many are needed to cover large areas.

Assistant professor Fadel Adib and his graduate student Francesco Tonolini are tackling this issue with a direct underwater-to-air transmission system, called “translational acoustic-RF communication” (TARF). “Trying to cross the air-water boundary with wireless signals has been an obstacle. Our idea is to transform the obstacle itself into a medium through which to communicate,” Adib says.

In the system, a standard acoustic speaker transmits sonar signals toward the water’s surface as pressure waves of different frequencies corresponding to different data bits. To send a 0, the transmitter can send, say, a 100-hertz wave; for a 1, a 200-Hz wave. When the signal hits the surface, it causes tiny ripples in the water, only a few micrometers high, corresponding to those frequencies.
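The encoding described above amounts to frequency-shift keying: each bit becomes a short tone burst at one of two frequencies. A minimal sketch in Python, using the 100 Hz / 200 Hz frequencies from the article’s example; the sample rate and symbol duration are illustrative assumptions, not the researchers’ actual parameters:

```python
import numpy as np

# Frequency-shift keying sketch: one sine burst per data bit.
# 100 Hz encodes a 0 and 200 Hz encodes a 1, as in the article's example.
FREQ_0, FREQ_1 = 100.0, 200.0   # Hz (from the article's example)
SAMPLE_RATE = 8000              # samples per second (assumed)
SYMBOL_SEC = 0.05               # tone-burst length per bit, 50 ms (assumed)

def modulate(bits):
    """Concatenate one sine burst per bit: FREQ_0 for 0, FREQ_1 for 1."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SEC)) / SAMPLE_RATE
    bursts = [np.sin(2 * np.pi * (FREQ_1 if b else FREQ_0) * t) for b in bits]
    return np.concatenate(bursts)

signal = modulate([0, 1, 1, 0])  # 4 bits -> 4 consecutive tone bursts
```

In the real system these pressure waves travel through the water and arrive at the surface as micrometer-scale ripples at the same frequencies.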

Positioned in the air above the transmitter is an extremely high-frequency radar that processes signals in the millimeter-wave spectrum of wireless transmission, between 30 and 300 gigahertz. The radar—which looks like a pair of cones—sends a radio signal that reflects off the vibrating surface and rebounds back with a slightly modulated angle corresponding to the sent data. A vibration on the water surface representing a 0 bit, for instance, will cause the reflected signal’s angle to vibrate at 100 Hz. That bit is then decoded.
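On the receive side, recovering each bit reduces to finding the dominant vibration frequency in each bit’s time window. A self-contained sketch of that idea, using an FFT peak pick on a synthetic vibration signal standing in for the radar measurement; sample rate and window length are again assumptions:

```python
import numpy as np

FREQ_0, FREQ_1 = 100.0, 200.0   # Hz (from the article's example)
SAMPLE_RATE = 8000              # samples per second (assumed)
SYMBOL_SEC = 0.05               # one bit per 50 ms window (assumed)
N = int(SAMPLE_RATE * SYMBOL_SEC)

def demodulate(vibration):
    """Split the measured vibration into per-bit windows and pick the
    dominant frequency in each: nearer 200 Hz means 1, nearer 100 Hz means 0."""
    freqs = np.fft.rfftfreq(N, d=1.0 / SAMPLE_RATE)
    bits = []
    for start in range(0, len(vibration) - N + 1, N):
        spectrum = np.abs(np.fft.rfft(vibration[start:start + N]))
        peak = freqs[np.argmax(spectrum)]
        bits.append(1 if abs(peak - FREQ_1) < abs(peak - FREQ_0) else 0)
    return bits

# Synthetic stand-in for the radar's measurement of the surface vibration:
t = np.arange(N) / SAMPLE_RATE
tone = lambda b: np.sin(2 * np.pi * (FREQ_1 if b else FREQ_0) * t)
measured = np.concatenate([tone(b) for b in (1, 0, 1, 1, 0)])
```

The actual receiver must also cope with natural waves, which are far larger than the signal ripples; the researchers’ decoder separates them because ocean waves oscillate at much lower frequencies than the data tones.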

The researchers tested TARF in swimming pools at MIT, with swimmers disturbing the surrounding water. The transmitter was submerged up to 3.5 meters below the surface, and the receiver was positioned up to 30 centimeters above. TARF accurately decoded messages—such as “Hello! from underwater”—at hundreds of bits per second, similar to standard data rates for underwater communications. A paper describing the system and the results was presented at this year’s SIGCOMM conference.

The system could open new capabilities for water-air communication, Adib says. Military submarines could communicate with airplanes without surfacing and compromising their locations. Underwater drones that monitor marine life wouldn’t need to constantly resurface from deep dives to send data to researchers, which is inefficient and costly. The system could also help find planes lost underwater. “Acoustic transmitting beacons could be implemented in, say, a plane’s black box,” Adib says. Then they could send signals periodically for search planes to decode.
