For the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds specifically to music, but not to speech or other environmental sounds.
Whether such a population of neurons exists has been the subject of widespread speculation, says Josh McDermott, an assistant professor of neuroscience at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions,” he says.
Using functional magnetic resonance imaging (fMRI), McDermott and colleagues scanned the brains of 10 human subjects listening to 165 sounds, including different types of speech and music as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.
Mapping the auditory system has proved difficult because fMRI, which measures blood flow as an index of neural activity, lacks fine spatial resolution: a single voxel, the smallest unit of measurement, can reflect the response of millions of neurons.
To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. This revealed six populations of neurons—the music-selective population, a set of neurons that respond selectively to speech, and four sets that respond to other acoustic properties such as pitch and frequency.
Those four acoustically responsive populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical sound processing. The speech- and music-selective neural populations lie beyond this primary region.
“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” says postdoc Sam Norman-Haignere, PhD ’15, lead author of the study, published in Neuron.
Nancy Kanwisher ’80, PhD ’86, a professor of cognitive neuroscience and an author of the study, says that even though music-selective responses exist in the brain, that doesn’t mean they reflect an innate brain system. “An important question for the future will be how this system arises in development: how early it is found in infancy or childhood, and how dependent it is on experience,” she says.