Algorithms are as good as human evaluators at identifying signs of mental health issues in texts


UW Medicine researchers have found that algorithms are as good as trained human evaluators at identifying red-flag language in text messages from people with serious mental illness. The finding opens a promising area of study that could help address gaps in psychiatric training and the scarcity of care.

The findings were published in late September in the journal Psychiatric Services.

Text messages are increasingly part of mental health care and evaluation, but these remote psychiatric interventions can lack the emotional reference points that therapists use to navigate in-person conversations with patients.  

The research team, based in the Department of Psychiatry and Behavioral Sciences, used natural language processing for the first time to help detect and identify text messages that reflect "cognitive distortions" that might slip past an undertrained or overworked clinician. The research also could eventually help more patients find care.

"When we're meeting with people in person, we have all these different contexts. We have visual cues, we have auditory cues, things that don't come out in a text message. Those are things we're trained to lean on. The hope here is that technology can provide an extra tool for clinicians to expand the information they lean on to make clinical decisions."

Justin Tauscher, Paper's Lead Author and Acting Assistant Professor, University of Washington School of Medicine

The study examined thousands of unique and unprompted text messages exchanged between 39 people with serious mental illness and a history of hospitalization and their mental health providers. Human evaluators graded the texts for several cognitive distortions as they typically would in the setting of patient care, looking for subtle or overt language that suggests the patient is overgeneralizing, catastrophizing or jumping to conclusions, all of which can be clues to problems.

The researchers also programmed computers to do the same task of grading texts, and found that humans and AI graded similarly in most of the categories studied.  
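For readers curious about what such a comparison might look like in practice, here is a minimal, hypothetical sketch in Python. The messages, labels, model choice (TF-IDF features with logistic regression) and the Cohen's kappa agreement check are all illustrative assumptions for this article, not the models or data used in the study.

```python
# Illustrative sketch only: a toy classifier for one cognitive-distortion label
# ("catastrophizing"), plus an agreement check against human ratings.
# All messages and labels below are invented for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import cohen_kappa_score

# Hypothetical training messages labeled by human evaluators (1 = catastrophizing).
messages = [
    "If I miss this appointment my whole life will fall apart",
    "I had a rough morning but the afternoon went okay",
    "Nothing ever works out for me, it is always a disaster",
    "I am going to try the breathing exercise we talked about",
    "Everyone is going to abandon me after what happened",
    "Thanks for checking in, I am feeling a little better today",
]
human_labels = [1, 0, 1, 0, 1, 0]

# One possible baseline: TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, human_labels)

# Score new (also invented) messages, then compare the machine's labels with a
# human rater's labels using Cohen's kappa, a chance-corrected agreement statistic.
new_messages = [
    "One bad grade and now the entire semester is ruined",
    "The group session went fine, I even spoke up once",
]
machine_labels = model.predict(new_messages)
human_rater_labels = [1, 0]
print("machine labels:", list(machine_labels))
print("agreement (Cohen's kappa):", cohen_kappa_score(human_rater_labels, machine_labels))
```

In a real evaluation, agreement would be measured across thousands of messages and several distortion categories, but the principle is the same: the closer the automated labels track trained human raters, the more useful the tool becomes as a second set of eyes.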

"Being able to have systems that can help support clinical decision-making I think is hugely relevant and potentially impactful for those out in the field who sometimes lack access to training, sometimes lack access to supervision or sometimes also are just tired, overworked and burned out and have a hard time staying present in all the interactions they have," said Tauscher, who came to research after a decade in a clinical setting.

Backstopping clinicians would be an immediate benefit, but researchers also see future applications that work in parallel to a wearable fitness band or phone-based monitoring system. Dror Ben-Zeev, director of the UW Behavioral Research in Technology and Engineering Center and a co-author on the paper, said the technology could eventually provide real-time feedback that would cue a therapist to looming trouble.

"In the same way that you're getting a blood-oxygen level and a heart rate and other inputs," Ben-Zeev said, "we might get a note that indicates the patient is jumping to conclusions and catastrophizing. Just the capacity to draw awareness to a pattern of thinking is something that we envision in the future. People will have these feedback loops with their technology where they're gaining insight about themselves."

This work was supported by the Garvey Institute for Brain Health Solutions at the University of Washington School of Medicine, the National Institute of Mental Health (R56-MH-109554) and National Library of Medicine (T15-LM-007442).

Journal reference:

Tauscher, J.S., et al. (2022). Automated Detection of Cognitive Distortions in Text Exchanges Between Clinicians and People With Serious Mental Illness. Psychiatric Services. https://doi.org/10.1176/appi.ps.202100692


