New research challenges traditional models of language processing

Multiple regions of the brain engage in fast-moving conversations to understand language, UTHealth Houston researchers have discovered, dispelling a prior school of thought that only one region of the brain was responsible for language processing. 

The research, led by Nitin Tandon, MD, professor of neurosurgery at McGovern Medical School at UTHealth Houston and director of the Texas Institute for Restorative Neurotechnologies at UTHealth Houston, was published in PLOS Biology. 

The research has important clinical implications for patients with aphasia, an impairment of the ability to produce or comprehend language, as well as for those with dementia and brain injuries.

"An understanding of the neurobiology of language revealed by this study provides a critical framework to understand how humans produce words to represent our feelings and intangible concepts, such as 'justice,' relative to more concrete ideas that are linked to objects," said Tandon, who is the director of the Texas Comprehensive Epilepsy Program, as well as the Nancy, Clive and Pierce Runnells Distinguished Chair in Neuroscience of the Vivian L. Smith Center for Neurologic Research and BCMS Distinguished Professor in Neurological Disorders and Neurosurgery at McGovern Medical School.

"This understanding could help us better understand the loss of language in aphasia and could also help in the reconstruction of language with the rapid growth of neural prosthetics."


Humans are distinct from most other animals in that they conceive and communicate abstract concepts to relate their feelings, their environment, and their beliefs to each other. To study words spanning different levels of linguistic complexity, researchers directly recorded brain activity using electrodes implanted in 19 epilepsy patients, who were asked to classify words as either concrete or abstract.

Concrete words are those that represent physical objects, like the words "sandwich" or "laptop," while abstract words include "yesterday" or "time." The team also looked at words that fell somewhere in between, like "magic" or "profit." 

The team found that concrete words activated regions of the brain that process sensory experiences and regions responsible for language, while abstract words relied more heavily on language-related areas of the brain. For words that fell in between, the team found that the patients' brain responses were stable regardless of individual, subjective ratings. 

"The way that the brain represents conceptual meaning is not as strictly tied to the physical features or the subjective way that you perceive the word as may have been thought previously," said Elliot Murphy, PhD, co-first author of the study. "Even if a person thinks of the word 'magic' in purely physical terms, their brain seems to still activate some of the abstract features associated with the word 'magic.'" 

Additionally, researchers found that whether the participants were reading purely abstract or purely concrete words, multiple regions of the brain communicated with each other to process them. 

The findings suggest there is more to the story than traditional "hub-and-spoke" models, which assume there is one conceptual hub that communicates with multiple "spokes," or smaller brain regions, to decode language. 

"It's not just one hub that seems to be involved in representing the meaning of these words. There's multiple hubs," Murphy said. 

In a separate part of the study, researchers asked participants to classify ambiguous words while stimulating different parts of the brain with small electrical pulses to temporarily disrupt processing in those regions. When different regions were stimulated, participants had a harder time deciding how to classify the words, reaffirming that multiple areas are responsible for decoding language, Murphy said.

Other researchers from UTHealth Houston include co-first author Oscar Woolnough, PhD; Cale Morse, former research assistant; and Xavier Scherschligt, former research assistant.

Journal reference:

Murphy, E., et al. (2026) Frontotemporal network interactions causally support rapid concreteness judgments during reading. PLOS Biology. DOI: 10.1371/journal.pbio.3003723. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3003723.
