
Wearable Self-Tracking Tool Listens for Yawns, Coughs, and Munches

A new wearable sensor listens for sounds that betray your activity and mood.
June 19, 2014

Speech recognition has gotten sophisticated, but spoken words aren’t the only revealing noises people make. We also cough, laugh, grunt, grind teeth, breathe hard, and make other sounds that can provide clues to mood and health.

Good listener: This prototype piezoelectric detector attaches to a person’s head, behind the ear, and picks up body noises like chewing and laughing.

Now researchers at Cornell have built a system designed to detect body noises other than speech. The system consists of a microphone that attaches behind the user’s ear and could someday be built into the frame of a device like Google Glass. By picking up sound waves transmitted through the skull, it can detect subtle clues about the activity or emotional state of the person wearing it—when he or she is eating, for example, or has a cold—and could make devices that track fitness or health much more accurate.
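The article doesn’t describe the classification pipeline in detail, but conceptually the device turns bone-conducted audio into acoustic features and maps them to activity labels such as chewing or coughing. The sketch below is a minimal, illustrative Python example of that general idea only—it is not the Cornell group’s actual method, and the sample rate, frequency bands, stand-in signals, and labels are all assumptions chosen for clarity.

```python
# Illustrative sketch only: a toy pipeline for labeling short frames of
# body-sound audio (e.g., chewing vs. coughing). NOT the Cornell system's
# actual algorithm; all parameters below are assumptions.
import numpy as np
from scipy.signal import spectrogram

SAMPLE_RATE = 16_000      # assumed microphone sample rate
FRAME_SECONDS = 1.0       # classify one-second windows

def frame_features(audio: np.ndarray, rate: int = SAMPLE_RATE) -> np.ndarray:
    """Summarize one audio frame as a small feature vector:
    log-energy in a few coarse frequency bands plus zero-crossing rate."""
    freqs, _, spec = spectrogram(audio, fs=rate, nperseg=512)
    power = spec.mean(axis=1)  # average power per frequency bin
    bands = [(0, 250), (250, 1000), (1000, 4000), (4000, 8000)]
    band_energy = [np.log1p(power[(freqs >= lo) & (freqs < hi)].sum())
                   for lo, hi in bands]
    zcr = np.mean(np.abs(np.diff(np.sign(audio)))) / 2.0
    return np.array(band_energy + [zcr])

class NearestCentroid:
    """Tiny classifier: label a frame by the closest class centroid."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array([X[np.array(y) == c].mean(axis=0)
                                    for c in self.labels_])
        return self

    def predict(self, X):
        dists = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :],
                               axis=2)
        return [self.labels_[i] for i in dists.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = int(SAMPLE_RATE * FRAME_SECONDS)
    # Stand-in signals: a low-frequency rumble for "chewing", a sharp burst
    # of noise for "coughing". Real data would come from the ear-mounted mic.
    chew = (0.5 * np.sin(2 * np.pi * 80 * np.arange(n) / SAMPLE_RATE)
            + 0.05 * rng.standard_normal(n))
    cough = np.concatenate([np.zeros(n // 2),
                            0.8 * rng.standard_normal(n // 2)])
    X = np.stack([frame_features(chew), frame_features(cough)])
    clf = NearestCentroid().fit(X, ["chewing", "coughing"])
    print(clf.predict(np.stack([frame_features(chew)])))  # -> ['chewing']
```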

“We see ‘quantified self’ and health tracking taking off, but one unsolved problem is how to track food consumption in an automated way,” says Tanzeem Choudhury, who led the research. “This can reliably detect the onset of eating and how frequently you are eating.”

If used in enough smartphones, Choudhury’s technology might measure the health of a city. “This could be a bridge between tracking pollution and coughing and other respiratory sounds to get a better measure of how pollution is affecting the population,” she says.

Such technology also could be combined with other methods of ambient sensing in smartphones. Motorola’s latest handset, the Moto X, includes a chip that constantly listens for certain keywords (see “The Era of Ubiquitous Listening Dawns”) to determine what the phone’s owner is doing.

Rana el Kaliouby, cofounder of Affectiva, a Waltham, Massachusetts-based company that makes software that can read people’s faces to detect their emotions, says the Cornell technology could help with both mood-sensing and health. “I like their focus on nonspeech body sounds. We know from our work that these are very important and telling,” she says. 
