New research into stimulus-related ear differences from birth may provide improved methods for newborn hearing tests


Researchers have known since the 1950s that humans process speech and tone sounds in different sides of the brain.

Generally speaking, the left side of the brain processes speech and performs sophisticated language functions. It excels at dealing with rapid, repetitive sounds. And, generally speaking, the right side of the brain is the primary processor of tonal sounds. It excels at hearing pitch, or sound frequency, and interpreting music.

There's an elegant crossed pathway of auditory neurons, or nerve cells, that links the ears to the sound-processing centers of the brain. The right ear's auditory nerve pathway leads to the left hemisphere's auditory cortex, so the right ear reacts faster and more accurately to speech-type stimuli than the left ear does. Conversely, the left ear's auditory nerve pathway connects to the right hemisphere's auditory cortex, making it the preferred ear for hearing music.

Babies aren't born with these neural pathways connecting the ear to the cortex, however. These pathways become apparent only after infants are at least four months old.

So two scientists who screened thousands of newborns as part of a project to optimize infant hearing tests were startled when they realized their data showed that infants do have stimulus-related ear differences from birth.

"We don't think that these differences are anatomical," said Barbara Cone-Wesson, an associate professor in the University of Arizona's speech and hearing sciences department. "There are really no physical differences between right and left ears, although no one's really looked at anatomical differences between right and left brain stems. We think a neural pathway much lower in the brain could be causing the stimulus-related ear differences in newborns."

Yvonne S. Sininger of the UCLA School of Medicine and Cone-Wesson report the discovery in the Sept. 10 issue of Science.

Both were involved in a major, multi-center research project that tested hearing in more than 7,000 infants across the country in the 1990s. The project goal was to find the best tool for screening newborns for hearing impairment. Cone-Wesson, who directed the project in Los Angeles County, tested more than 3,500 infants at her hospital.

And, after it all, Cone-Wesson said, "We had a bucketload of data" on newborn babies' inner ear and brain wave responses to different kinds of stimuli.

She and Sininger analyzed the data on infants' responses by examining two different kinds of "otoacoustic emissions." Otoacoustic emissions are sounds generated by the inner ear itself in the process of hearing. They can occur even in an ear that is completely cut off from the rest of the brain, as long as the inner-ear hair cell system is intact.

Sininger and Cone-Wesson found that healthy, non-hearing-impaired infants responded with larger otoacoustic emissions in their right ears when presented with clicks, and with larger otoacoustic emissions in their left ears when presented with tones. That is, their right inner ears were more responsive to timed, speech-like noises while their left inner ears were more responsive to continuous tones, or musical, pitch-related sounds.

So why do newborns respond this way, having not yet developed a strong connection from ear to cortex?

"We think that their otoacoustic emissions are modulated by a feedback loop called the 'olivo-cochlear bundle,' a neural pathway in the lower part of the brain," Cone-Wesson said. "These stimulus-related differences that are tied to the different sides of the brain have never been found at such a low level of the auditory system before."

Their findings suggest that at early stages of auditory system development, sound is processed by the cochlea and the brainstem. The cochlea, a spiral tube of the inner ear containing the nerve endings essential for hearing, together with the brainstem, may play an important role in developing the brain's specialized right- and left-side auditory processing centers.

"What I find refreshing about this is, we were doing clinical research," Cone-Wesson said, "trying to optimize stimuli and recording parameters to find the best possible methods for newborn hearing screenings. This discovery really came about in terms of trying to optimize a clinical procedure."

