77 Mass Ave

Look Here

Eye tracking could soon come to cell phones.
August 23, 2016

Eye-tracking technology—which determines where in a visual scene people are directing their gaze—is widely used in psychology and marketing research but requires pricey hardware that has kept it from finding consumer applications.

New software from MIT and the University of Georgia, however, promises to turn any smartphone into an eye-tracking device.

“The field is kind of stuck in this chicken-and-egg loop,” says Aditya Khosla, the electrical engineering and computer science grad student who led the software’s development. “Since few people have the external devices, there’s no big incentive to develop applications for them. Since there are no applications, there’s no incentive for people to buy the devices.”

Illustration by Christine Daniloff | MIT

The researchers built their eye tracker using machine learning, a technique in which computers learn to perform tasks by identifying patterns in large sets of training examples.
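As a rough illustration of that idea only, the sketch below trains a tiny convolutional network to regress an on-screen gaze point (x, y) from a face image. The architecture, input size, and synthetic stand-in data are assumptions for illustration, not the researchers' actual model or training setup.

```python
# Minimal sketch: learn to map face images to gaze coordinates by
# fitting a model to many (image, known dot location) training pairs.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # predicted (x, y) screen position

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = GazeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: random "face images" paired with random gaze targets.
images = torch.randn(64, 3, 64, 64)
targets = torch.rand(64, 2) * 10  # target positions in arbitrary units

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: mean squared error {loss.item():.3f}")
```

In the real system, each training example would pair a front-camera frame with the screen location the user was known to be looking at, which is exactly what the data-collection app described below is designed to produce.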

To collect their training data, they developed a simple mobile application that flashes a small dot somewhere on a device’s screen, attracting the user’s attention, then briefly replaces it with either an “R” or an “L.” Tapping the matching side of the screen, right or left, confirms that the user has shifted his or her gaze to the intended location. Meanwhile, the device camera continuously captures images of the user’s face.
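A simplified sketch of that collection logic follows, with hypothetical helper names standing in for the real app's camera and touch handling; only the dot-and-letter validation step is taken from the description above.

```python
# Sketch of the data-collection check: keep a sample only when the user's
# tap matches the letter, confirming attention was on the dot's location.
import random

def simulate_user_tap(screen_width, letter):
    # Placeholder: an attentive user taps the correct half of the screen.
    return screen_width * (0.25 if letter == "L" else 0.75)

def capture_camera_frame():
    # Placeholder for grabbing a front-camera image on the device.
    return "frame_bytes"

def collect_sample(screen_width, screen_height):
    # Flash a dot at a random location to attract the user's gaze.
    dot = (random.uniform(0, screen_width), random.uniform(0, screen_height))
    # Briefly replace it with "R" or "L"; the user must tap that side.
    letter = random.choice(["L", "R"])
    tap_x = simulate_user_tap(screen_width, letter)
    tapped_side = "L" if tap_x < screen_width / 2 else "R"
    if tapped_side == letter:
        # Attention confirmed: label the camera frame with the dot position.
        return {"gaze_target": dot, "frame": capture_camera_frame()}
    return None  # discard samples where attention can't be confirmed

print(collect_sample(1080, 1920))
```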

Initial experiments, using training data drawn from 800 mobile-device users, got the system’s margin of error down to 1.5 centimeters; data on another 700 people reduced it to about a centimeter. Khosla estimates that training examples from 10,000 users will lower it to a half-centimeter, which should make the system commercially viable.
