UCI researchers have identified and mapped a new acoustic dimension in the human auditory cortex that sheds light on the link between how sounds are analyzed and how higher-order processes such as language develop. Published in the November 27 issue of the Proceedings of the National Academy of Sciences, the findings have wide implications for future hearing research and for studies on specialized therapies for those with hearing and language deficits.
“Measuring a cortical field map of a sensory space allows us to pinpoint where and how a sense like vision or hearing is processed in the human brain,” says Alyssa A. Brewer, M.D., Ph.D., UCI cognitive neuroscientist and senior co-author of the study. She is a vision scientist with extensive experience using functional MRI techniques to measure visual field maps in humans and non-human primates.
“The portions of the brain devoted to visual processing are littered with visual field maps which define the boundaries for where different vision processes occur and which are localized by measuring two dimensions of visual space,” she says. The representations of these two dimensions as perpendicular gradients in the brain underlie the fundamental organization of visual cortex.
Until now, hearing researchers interested in similarly measuring the fundamental organization of the parts of the brain responsible for analyzing sound have had to rely on imprecise measurements of only one acoustic dimension – sound frequency, or tonotopy. Without a second documented dimension, says Brewer, it has been impossible to know where one tonotopic representation stopped in auditory cortex and the next began.
“It is important to accurately define these divisions in order to understand how auditory cortex is organized and thus how the information available to higher processes like language is coded,” she says.
Previous studies in cats and macaque monkeys have identified a potential second acoustic dimension – temporal receptive fields, or periodotopy – but none have been able to measure periodotopic gradients in auditory cortex.
Teaming with UCI cognitive neuroscientists and hearing experts Gregory Hickok and Kourosh Saberi, Brewer and graduate students Brian Barton and Jonathan Venezia developed a new variation of the traveling wave method, a tried-and-tested vision technique that had never before been adapted to auditory research. Together, they discovered that these two acoustic dimensions in combination define auditory field maps and represent the basic coordinates of the human auditory system.
Human subjects in the 3T fMRI scanner at the UCI Research Imaging Center were presented with tones that repeatedly swept the auditory coordinates from low to high frequency. Because the researchers knew when each stimulus was scheduled to occur, they could track the finely organized areas of the brain that activated as each particular sound was played. Narrowband noise – which sounds tone-like – was used to chart the tonotopic gradients, while broadband noise – which sounds rhythm-like – charted the periodotopic gradients.
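The logic of this traveling-wave (phase-encoded) design can be illustrated with a toy simulation: a voxel tuned to a particular frequency responds at a fixed point within each low-to-high sweep, so the phase of its response at the sweep frequency reveals its preferred position along the gradient. The Python sketch below is a hypothetical illustration of that principle, not the authors' analysis pipeline; all parameter values and function names are assumptions.

```python
import numpy as np

# Toy phase-encoded ("traveling wave") analysis: the stimulus sweeps
# low -> high frequency once per cycle, so a voxel tuned to a given
# frequency peaks at a fixed phase within every cycle. Parameter
# values below are illustrative assumptions.
n_cycles = 8        # stimulus sweeps per scan
tr_per_cycle = 24   # fMRI volumes (TRs) per sweep
n_tr = n_cycles * tr_per_cycle
t = np.arange(n_tr)

def simulate_voxel(preferred_phase, noise=0.3, seed=0):
    """Voxel time series that peaks once per sweep at its preferred phase."""
    rng = np.random.default_rng(seed)
    signal = np.cos(2 * np.pi * n_cycles * t / n_tr - preferred_phase)
    return signal + noise * rng.standard_normal(n_tr)

def estimate_phase(ts):
    """Phase of the Fourier component at the stimulus frequency (n_cycles)."""
    spectrum = np.fft.rfft(ts)
    # cos(theta - phi) has Fourier angle -phi at bin n_cycles, so negate.
    return -np.angle(spectrum[n_cycles]) % (2 * np.pi)

true_phase = 2 * np.pi * 0.35   # voxel "prefers" 35% of the way up the sweep
est = estimate_phase(simulate_voxel(true_phase))
```

Repeating this estimate across all voxels yields a map of preferred sweep positions, which is how perpendicular tonotopic and periodotopic gradients can be charted with two different stimulus types.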
“We discovered that the acoustic dimensions we measured are represented in human auditory cortex in perpendicular gradients, which allowed us for the first time to have a very clear picture of the organization of this region,” says Brewer.
Combining the plotted points of each gradient, they were able to define the boundaries of 11 auditory field maps that explain the location and organization of the human auditory core and belt regions.
They also found a common organizational pattern between the human visual and auditory cortex.
“Field maps in both sensory cortices fall into a macro-structural pattern of a ‘clover leaf’ cluster, suggesting there is a basic framework for processing across sensory systems,” Barton says.
“These findings fundamentally change the field of auditory research,” Hickok says.
“We can now begin to explore how the cortex changes in those with peripheral auditory diseases like deafness and how cortical implants can be better designed. If we know what this map looks like, we can possibly stimulate sound in areas where it can’t be heard,” adds Barton.
Further research will focus on these higher-order processes, aiming to determine which computations are performed in each mapped area.
Funding for this study was provided by the National Institutes of Health, the University of California, Irvine Center for Hearing Research, and UC Irvine startup funds.
-Heather Ashbach, Social Sciences Communications