Wired for Words: The Neural Architecture of Language

- July 8, 2025
- New book by UC Irvine cognitive and language scientist Gregory Hickok explores 150 years of research on how the brain enables language
-----
In his new book, Wired for Words: The Neural Architecture of Language (MIT Press), Gregory Hickok, UC Irvine Distinguished Professor of language science and cognitive sciences and chair of the Department of Language Science, provides an in-depth review and synthesis of existing research on the brain’s networks that enable communication through language. Spanning a century and a half of discoveries that pinpoint both progress and problems in the study of this uniquely human cognitive ability, Hickok’s work offers a new understanding of how language works in the mind and brain. Below, he shares what motivated his inquiry and its potential impacts on clinical work – including neurosurgery and the development of neural prostheses – as well as on speech therapy approaches, artificial intelligence and more.
NOTE: Wired for Words will be available in November. Barnes & Noble is running a preorder sale July 8-11, offering members 25% off the purchase price.
Q: What sparked your interest in revisiting and reexamining the neural architecture of language—and what key questions were you hoping to answer through this book?
A: Humans are elite linguistic athletes. We control and coordinate a hundred or so muscles in our vocal tract to spew out words at a rate of about two per second when speaking, and, when listening, we easily decode the rapid-fire mixture of sound pressure waves back into words and thoughts. This ability seems simple to us, yet we still don’t understand how our brain accomplishes this feat. As a language scientist, I have been studying this problem my whole career. My particular approach is to uncover the brain’s circuit diagram for language, its neural architecture. If we can map the components of the language network and their connections, this should help us understand how the system works, how it can break down in development and disease, and how it might be rehabilitated. My goal in this book was to synthesize the last 150 years of research on the topic, including much of my own, to develop the most advanced model possible for the neural architecture of language.
Q: You synthesize more than 150 years of research across disciplines—from neurology to linguistics to engineering. What major discoveries or shifts in understanding emerged for you through this process?
A: One of the major insights is that the architecture of language networks exhibits a surprising similarity to nonlinguistic networks in our brain, such as those involved in the motor control of reaching and grasping. While linguistic and nonlinguistic systems are quite distinct in the brain and involve their own specializations, the neural architecture is similar enough to indicate that there is an evolutionary connection. This allows for more synergy and transfer of knowledge between linguistics and other fields of study in human cognition, potentially accelerating the pace of progress. Another major insight is that we have been overestimating the degree of asymmetry between the brain’s two hemispheres. Yes, there are differences, but the two hemispheres are vastly more similar in function than they are different, including for language. Moreover, different aspects of language ability show different degrees of hemispheric asymmetry, with some being quite bilateral despite decades-old dogma pointing to strong left dominance.
Q: In challenging some long-standing assumptions—such as the strong localization of all aspects of language in the left hemisphere—what do you see as the most pressing myths that your research helps dismantle?
A: The language-equals-left-hemisphere myth is probably the most strongly ingrained and in need of reevaluation. Even the idea of hemispheric dominance in general could use some rethinking. Using language as a test case, I’ve changed my own views of left-right asymmetries in the brain. For example, if you find 100 people who end up with language deficits (aphasia) following a stroke, you might find that 95% of them have left hemisphere damage. You might then infer that 95% of humans are strongly left dominant for language, which is the typical view. But research from my own lab suggests an alternative. It could be that by sampling only people with aphasia to make our asymmetry estimates, we are ignoring an entire population of people who are more bilaterally organized and so don’t end up with significant language problems following left or right hemisphere damage. My own evaluation of this possibility involved reanalyzing data on word comprehension ability – assumed by many to be strongly left dominant – following left or right hemisphere disruption. I found that more than 50% of our sample were symmetric in their ability despite a preponderance of left hemisphere involvement among those with significant deficits.
Q: How might this revised understanding of how the brain processes language influence future directions in fields like education, clinical practice, or even artificial intelligence?
A: A range of finer-grained details that I discuss in the book could have significant clinical impacts, particularly for neurosurgery, where a detailed understanding of language circuitry is important, and for the development of neural prostheses, a technology that is advancing quickly. It’s hard to navigate these clinical landscapes without a detailed map of the system. Several of the new advances discussed in the book could also impact speech therapy approaches. For example, one new discovery is the foundational role of prosody, the song-like quality of language, in speech production planning. Prosody is relatively neglected as a target for speech disorder assessment and therapy, but this new research suggests that it is a critical new direction to explore. In terms of artificial intelligence, engineers may find it fruitful to consider more human-like neural architectures for language models, particularly if they move beyond text-based models into actual speech.
Q: What do you hope readers—especially those working in related fields—take away from Wired for Words, and how might it shape their future thinking or research?
A: Language is often considered to be a completely unique ability in the realm of human cognition. It is certainly unique—we are the only species that has it—but a closer look reveals important homologies to the rest of the mind and brain. My own understanding of how language works has benefited from looking at how other systems in the brain work; I’ve found important insights in these other fields for language function. I’m certain that the flow of knowledge can be bidirectional. So, my hope is that by learning more about the neural architecture of language, researchers in and connoisseurs of other domains of the mind and brain can gain new insights. In fact, I believe that if we can figure out how the brain enables language, we will have pretty much solved the broader problem of how the brain enables the mind.
-----