
For Brain-Computer Interfaces to Be Useful, They’ll Need to Be Wireless

A leading researcher says he’s working on such a model but can’t get funding.
November 8, 2017
Justin Saglio

For decades, brain-computer interfaces have been imagined as a way for people who are paralyzed or who have lost arms to do everyday tasks, like brushing their hair or clicking a TV remote, just by thinking about it.

Such robotic devices exist today: so far, a handful of patients in research labs around the world have tried them, gaining a limited range of motion. But researchers are still years away from making these devices practical for use in people’s homes, says Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh.

Speaking at MIT Technology Review’s annual EmTech MIT conference in Cambridge, Massachusetts, on Tuesday, Schwartz said these interfaces will need a number of modifications in order for that to happen. He said he’s working on such a model with Draper Laboratory, based in Cambridge, but hasn’t been able to get funding to move the project along.

“This is very much on the outskirts of science,” said Schwartz, an early pioneer of these interfaces.

Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.
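For readers curious what that "translation" step looks like in software, here is a minimal, purely illustrative sketch of the decode loop, assuming a simple linear decoder. It is not the code used in Schwartz's lab, and the hardware functions it names are hypothetical placeholders.

```python
# Purely illustrative sketch of the decode loop described above, assuming a
# simple linear decoder. This is not the software used in Schwartz's lab, and
# read_spike_counts()/send_arm_velocity() are hypothetical stand-ins for the
# drivers that talk to the implanted electrodes and the robotic arm.
import numpy as np

N_CHANNELS = 96   # electrode count on a typical implanted array (assumption)
DT = 0.02         # update interval in seconds, i.e. 50 commands per second (assumption)

# Each electrode's firing rate "votes" on the intended hand velocity.
# The weight matrix W and the per-channel baselines are fit during calibration.
W = np.zeros((3, N_CHANNELS))       # maps firing rates to (vx, vy, vz)
baseline = np.zeros(N_CHANNELS)     # resting firing rate for each channel

def decode_velocity(spike_counts: np.ndarray) -> np.ndarray:
    """Translate one time bin of neural activity into an arm velocity command."""
    rates = spike_counts / DT            # convert counts to spikes per second
    return W @ (rates - baseline)        # linear read-out of intended movement

# The real-time loop: brain signals in, motor commands out.
# while True:
#     counts = read_spike_counts(DT)      # collect signals from the electrodes
#     velocity = decode_velocity(counts)  # the software "translation" step
#     send_arm_velocity(velocity)         # relay the command to the robotic arm
```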

The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky set-up means patients can’t yet use these interfaces in their homes.

In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person's head.

Schwartz said he hopes paralyzed patients will someday be able to use these interfaces to control all sorts of objects beyond just a robotic arm.

“Just imagine someone using telemetry going into a smart home and being able to operate all these devices merely by thinking about them,” he said.

The big hurdle is that the science behind the technology is so complex. The interface relies on translating the “neural code”—that is, the pattern of activity of neurons in the brain—into specific commands that produce the intended movements. Currently, the gestures people can perform with these interfaces are limited, because scientists still know little about the many different patterns in which neurons fire.

For example, Schwartz and his team have been able to get monkeys, as well as a few human participants, to grasp objects using a brain-computer interface and a robotic arm. But applying force to objects, such as by pushing or pulling, is more complicated and requires a different set of neural codes that the computer algorithms need to learn. 
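To give a rough sense of what it means for the algorithms to "learn" a neural code, here is one simple way the weights of the illustrative decoder above might be fit from a calibration session, assuming firing rates have been recorded alongside observed or intended movements; this is a sketch, not the method used by Schwartz's team.

```python
# Illustrative only: fitting the linear decoder sketched earlier from a
# calibration session, assuming recorded firing rates can be paired with
# observed or intended hand velocities. Decoding force would require a
# different mapping fit to different data, which is the open problem
# Schwartz describes.
import numpy as np

def fit_decoder(rates: np.ndarray, movements: np.ndarray) -> np.ndarray:
    """Least-squares fit of a linear map from firing rates to movements.

    rates:     (T, N_CHANNELS) firing rates over T calibration time bins
    movements: (T, 3) hand velocities observed in those same bins
    returns:   (3, N_CHANNELS) weight matrix W such that movements ~ rates @ W.T
    """
    W_transposed, *_ = np.linalg.lstsq(rates, movements, rcond=None)
    return W_transposed.T
```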

“We don’t have a good understanding yet of how motion and force are mixed together to allow us to interact with objects,” Schwartz said. Scientists will need to study the brain more to figure out what these signals look like. 
