Researchers investigate cognitive brain mechanism devoted to reading


Letters, syllables, words and sentences: spatially arranged sets of symbols that acquire meaning when we read them. But is there an area and cognitive mechanism in our brain specifically devoted to reading? Probably not; written language is too recent an invention for the brain to have developed structures specifically dedicated to it.

According to a new paper published in Current Biology, reading rests on an evolutionarily ancient function that is more generally used to process many other visual stimuli. To test this, SISSA researchers subjected volunteers to a series of experiments in which they were shown different symbols and images.

Some were very similar to words; others were very much unlike reading material, such as nonsensical three-dimensional tripods or entirely abstract visual gratings. The results showed no difference in the way participants learned to recognize novel stimuli across these three domains.

According to the scholars, these data suggest that we process letters and words similarly to how we process any visual stimulus as we navigate the world through our visual experiences: we recognize the basic features of a stimulus (shape, size, structure and, yes, even letters and words) and we capture their statistics: how many times they occur, how often they present themselves together, and how well one predicts the presence of the other.

Thanks to this system, based on the statistical frequency of specific symbols (or combinations thereof), we can recognize orthography, understand it and therefore immerse ourselves in the pleasure of reading.
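To make this concrete, here is a minimal sketch of those three statistics (occurrence, co-occurrence and prediction). It illustrates the general idea only, not the authors' analysis; the symbol stream and function names are invented for the example.

```python
from collections import Counter

# Toy stream of symbols, standing in for a visual experience.
stream = list("ABABCDCDABCD")

# How many times each symbol occurs.
unigram_counts = Counter(stream)

# How often adjacent symbols present themselves together.
bigram_counts = Counter(zip(stream, stream[1:]))

def transitional_probability(a, b):
    """How well symbol a predicts the presence of symbol b,
    estimated as P(b follows | a occurs)."""
    return bigram_counts[(a, b)] / unigram_counts[a]

print(unigram_counts["A"])                 # occurrence count of A
print(bigram_counts[("A", "B")])           # co-occurrence count of the pair A,B
print(transitional_probability("A", "B"))  # 1.0: here, A is always followed by B
```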

Reading is a cultural invention, not an evolutionary acquisition

"Written language was invented about 5000 years ago, there was no enough time in evolutionary terms to develop an ad hoc system", explain Yamil Vidal and Davide Crepaldi, lead author and coordinator of the research, respectively, which was also carried out by Eva Viviani, a PhD graduate from SISSA and now post-doc at the university of Oxford, and Davide Zoccolan, coordinator of the Visual Neuroscience Lab, at SISSA, too.

"And yet, a part of our cortex appears to be specialised for reading in adults: when we have a text in front of us, a specific part of the cortex, the left fusiform gyrus, is activated to carry out this specific task. This same area is implicated in the visual recognition of objects, and faces in particular."

Yamil Vidal & Davide Crepaldi, Study Lead Author and Coordinator of the Research, Scuola Internazionale Superiore di Studi Avanzati

On the other hand, explain the scientists, "there are animals such as baboons that can learn to visually recognize words, which suggests that behind this process there is a processing system that is not specific to language, and that gets "recycled" for reading as we humans become literate".

Pseudocharacters, 3D objects and abstract shapes to prove the theory

How can this question be put to the test? "We started from an assumption: if this theory is true, some effects that occur when we are confronted with orthographic signs should also be found when we are presented with non-orthographic stimuli. And this is exactly what this study shows".

In the research, volunteers completed four different tests. In the first two, they were shown short "words" composed of a few pseudocharacters, similar to numbers or letters but with no real meaning. The scholars explain that this was done to prevent the participants, all adults, from being influenced in their performance by their prior knowledge.

"We found that the participants learned to recognise groups of letters - words, in this invented language -- on the basis of the frequency of co-occurrence between their parts: words that were made up of more frequent pairs of pseudocharacters were identified more easily".

In the third experiment, they were shown 3D objects characterised by triplets of terminal shapes, much as the invented words were characterised by triplets of letters. In the fourth experiment, the images were even more abstract and dissimilar from letters. In all the experiments, the response was the same, giving full support to the theory.

From human beings to artificial intelligence: unsupervised learning

"What emerged from this investigation", explain the authors, "not only supports our hypothesis but also tells us something more about the way we learn. It suggests that a fundamental part of it is the appreciation of statistical regularities in the visual stimuli that surround us".

We observe what is around us and, without any awareness, we decompose it into elements and register their statistics; by so doing, we give everything an identity. In jargon, this is called "unsupervised learning". The more often these elements arrange themselves in a precise organisation, the better we become at giving that structure a meaning, be it a group of letters, an animal, a plant or an object.
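As a toy illustration of that idea (not the study's model; the stream and the threshold are assumptions made for the example), a short script can carve a continuous symbol stream into recurring units wherever one element stops predicting the next, with no labels or teacher involved:

```python
from collections import Counter

# A continuous stream in which two "objects" recur: the chunks abc and de.
stream = "abcdeabcabcdedeabc"

unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))

def tp(a, b):
    """Transitional probability: how well a predicts the next symbol b."""
    return bigrams[(a, b)] / unigrams[a]

# Segment the stream wherever prediction breaks down (threshold assumed).
THRESHOLD = 0.7
chunks, current = [], stream[0]
for a, b in zip(stream, stream[1:]):
    if tp(a, b) < THRESHOLD:
        chunks.append(current)   # a weak transition marks a boundary
        current = b
    else:
        current += b             # a strong transition extends the unit
chunks.append(current)

print(chunks)  # ['abc', 'de', 'abc', 'abc', 'de', 'de', 'abc']
```

The recurring chunks emerge from the statistics alone, which is the sense in which this kind of learning is "unsupervised".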

And this, say the scientists, occurs not only in children, but also in adults. "There is, in short, an adaptive attunement to stimuli that occur regularly. And this is important not only to understand how our brain functions, but also to enhance artificial intelligence systems that base their "learning" on these same statistical principles".

Journal reference:

Vidal, Y., et al. (2021). A general-purpose mechanism of visual feature association in visual word identification and beyond. Current Biology. https://doi.org/10.1016/j.cub.2020.12.017
