
EmTech: Qualcomm Working to Build Artificial Intelligence Into Smartphones

Future smartphones could have specialized hardware that uses simulated neurons to do things like recognize objects or faces.
September 30, 2014

Future smartphones will be able to understand what you’re taking photos of and recognize faces, says mobile chip maker Qualcomm. Researchers at the company are working to make a powerful new approach to artificial intelligence known as deep learning a standard feature of mobile devices.

Smartphone camera apps often have “scene” modes to get the best shots of landscapes, sports, or sunsets. Qualcomm has created a camera app able to identify different types of scenes on its own, based on their visual characteristics. That could lead to phones that can choose their own settings without having to send or receive data over the Internet.

Charles Bergan, who leads software research at Qualcomm, demonstrated that software in a sponsored talk at MIT Technology Review’s EmTech conference last week in Cambridge, Massachusetts. He said that it should be possible to use the same approach to create software that could decide the best moment to take a photo. “Maybe it will detect that it’s a soccer game and look for that moment when the ball is just lifting off,” he said.

Bergan also demonstrated a facial-recognition app. It correctly identified him even though it had been trained on nothing more than a short, shaky, poorly lit video of his face.

Those demonstrations were based on deep learning, a technique that trains software by processing data through networks of simulated neurons (see “10 Breakthrough Technologies 2013: Deep Learning”). In the case of the scene-classifying app, for example, the simulated neurons were exposed to thousands of photos of different types of scenes.
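Systems like these are not programmed with explicit rules; they adjust the connections among simulated neurons as they churn through labeled examples. The sketch below is purely illustrative and is not Qualcomm’s software: the NumPy-only implementation, the tiny two-layer network, and the synthetic stand-in “photos” are all assumptions made for brevity. It shows the basic idea of training simulated neurons to sort inputs into scene categories.

```python
# Illustrative only (not Qualcomm's code): a tiny two-layer network of
# simulated neurons trained to classify synthetic "images" into scene
# categories. Random feature vectors stand in for real photographs.
import numpy as np

rng = np.random.default_rng(0)

n_classes = 4     # e.g. landscape, sports, sunset, portrait
n_features = 64   # stand-in for per-image visual features
hidden = 32

# Synthetic "photos": each scene class clusters around its own mean vector.
means = rng.normal(size=(n_classes, n_features))
X = np.vstack([means[c] + 0.5 * rng.normal(size=(500, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 500)

# Two layers of simulated neurons with small random initial weights.
W1 = 0.1 * rng.normal(size=(n_features, hidden))
W2 = 0.1 * rng.normal(size=(hidden, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for step in range(300):
    # Forward pass: features -> hidden neurons -> class probabilities.
    h = np.maximum(0, X @ W1)          # ReLU activations
    p = softmax(h @ W2)
    # Backward pass: nudge the weights to reduce classification error.
    grad_scores = p.copy()
    grad_scores[np.arange(len(y)), y] -= 1
    grad_scores /= len(y)
    grad_h = (grad_scores @ W2.T) * (h > 0)
    W2 -= lr * h.T @ grad_scores
    W1 -= lr * X.T @ grad_h

# Evaluate on the training data after the final update.
h = np.maximum(0, X @ W1)
pred = softmax(h @ W2).argmax(axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

A production system would use a much deeper network trained on real photographs, and, in the scenario Bergan describes, the trained network would run on the phone itself rather than sending images over the Internet.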

Bergan said that one reason Qualcomm is working on enabling phones to run deep learning software is that major mobile device manufacturers requested ways to make their devices smarter about images. When exactly the features might make it into phones is unclear.

Qualcomm has previously experimented with chips considered “neuromorphic” because their circuits are laid out like networks of neurons (see “Qualcomm to Build Neuro-Inspired Chips”). However, such designs are still very much research projects (see “10 Breakthrough Technologies 2014: Neuromorphic Chips”). Bergan says a more practical approach would be to add small deep-learning “accelerators” to existing chip designs.
