New study could reveal the complex interaction between languages and the human beings who use them

New research is helping scientists around the world understand what drives language change, especially when languages are in their infancy. The results will shed light on how the limitations of the human brain change language and provide an understanding of the complex interaction between languages and the human beings who use them.

The project is funded by a $344,000 National Science Foundation grant and is led by principal investigator Matthew Dye, an assistant professor and director of the Deaf x Laboratory at Rochester Institute of Technology's National Technical Institute for the Deaf.

Dye and his research team are examining Nicaraguan Sign Language, which was "born" in the 1970s. Using machine learning and computer vision techniques, the team is analyzing old video recordings of the language and measuring how it has changed over the past 40 years. The recent birth and rapid evolution of Nicaraguan Sign Language have allowed them to study language change from the beginning, on a compressed time scale. They are asking whether languages change to become easier to produce, or whether they change in ways that make them easier for others to understand. Initial results challenge the long-held notion that signs move toward the face in order to be easier to understand.
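
One way to picture this kind of measurement, though not necessarily the team's actual pipeline, is as a comparison of hand-to-face distances across generations of signers. The Python sketch below assumes 3D pose landmarks have already been extracted from the videos (see the extraction sketch further down); the file layout and array names are hypothetical.

```python
import numpy as np

def mean_hand_to_face_distance(wrist_xyz: np.ndarray, nose_xyz: np.ndarray) -> float:
    """Mean Euclidean wrist-to-nose distance (in metres) across the
    frames of one recording. Both arrays have shape (n_frames, 3) and
    hold 3D landmark positions from a pose estimator."""
    return float(np.linalg.norm(wrist_xyz - nose_xyz, axis=1).mean())

# Hypothetical corpus: one landmark file per signing cohort. If signs
# really drifted toward the face, the mean distance should shrink from
# the first cohort of signers to the fourth.
for cohort in ("cohort_1", "cohort_2", "cohort_3", "cohort_4"):
    data = np.load(f"landmarks/{cohort}.npz")  # hypothetical file layout
    d = mean_hand_to_face_distance(data["right_wrist"], data["nose"])
    print(f"{cohort}: mean hand-to-face distance = {d:.3f} m")
```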

"Languages change over time, such that the way we speak English now is very different than the speech patterns of elder generations and our distant ancestors," said Dye. "While it is well documented that languages change over time, we're hoping to answer some fundamental theoretical questions about language change that cannot be addressed by simply analyzing historical samples of spoken languages."

Dye explains that by using an existing database of Nicaraguan Sign Language, composed of 2D videos of four generations of Nicaraguan signers, his research team will be able to assess the extent to which linguistic changes occur and why. The team will also create computational tools that allow 3D human body poses to be extracted from the 2D videos.
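
The project's own tools are not described in detail, but open-source pose estimators already recover approximate 3D body landmarks from ordinary 2D video. Below is a minimal sketch of that step using Google's MediaPipe Pose library, chosen here as an illustrative assumption rather than the project's actual toolchain.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def extract_3d_poses(video_path: str) -> np.ndarray:
    """Run MediaPipe Pose over a 2D video and return an array of shape
    (n_frames, 33, 3): per-frame 3D world landmarks (x, y, z in metres,
    origin roughly between the hips). Frames with no detection are skipped."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    with mp_pose.Pose(static_image_mode=False, model_complexity=2) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_world_landmarks:
                frames.append([(lm.x, lm.y, lm.z)
                               for lm in result.pose_world_landmarks.landmark])
    cap.release()
    return np.array(frames)

# Example: pull out the nose and right-wrist tracks used in the
# distance comparison above (filename is hypothetical).
poses = extract_3d_poses("signer_cohort_1.mp4")
nose = poses[:, mp_pose.PoseLandmark.NOSE, :]
right_wrist = poses[:, mp_pose.PoseLandmark.RIGHT_WRIST, :]
```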

Ultimately, these tools could aid the development of automated sign-language recognition, promoting accessibility for deaf and hard-of-hearing people, and of automated systems for recognizing and classifying human gestures. In addition, Dye says that deaf and hard-of-hearing students will participate in the research, helping to increase the diversity of the nation's scientific workforce.

"We are fortunate that our study enables us to utilize the visual nature of sign language to gain a greater understanding of how all languages may evolve," adds Dye.
