AI-based conversational agents show promise in young people's mental health interventions


In a recent review published in npj Digital Medicine, researchers examined the current state of research into fully automated conversational agent (CA)-mediated interventions for the emotional component of mental health among young individuals.

Study: Use of automated conversational agents in improving young population mental health: a scoping review. Image Credit: SewCreamStudio/Shutterstock.com

Background

Mental health issues are a significant concern for young people and can lead to psychosocial difficulties in adulthood.

Technology has emerged as an alternative to face-to-face approaches, with CAs being digital systems that simulate human interaction using text, speech, gestures, facial expressions, or other sensory cues.

However, research on fully automated CAs has limitations: most studies rely primarily on adult populations and do not distinguish between young and older users, and most existing reviews focus only on a subcategory of conversational agents defined by their level of embodiment.

About the review

In the present review, researchers explored the potential of automated conversational agents to enhance the mental well-being of young people.

The researchers searched PubMed, Web of Science, PsycINFO, Scopus, the Association for Computing Machinery (ACM) Digital Library, and IEEE Xplore in March 2023.

They included primary research studies reporting on the development, usability/feasibility, or evaluation of fully autonomous conversational agents to enhance the mental wellness of individuals aged ≤25 years. All included studies were published in peer-reviewed, English-language journals.

The team excluded secondary research, dissertations, conference proceedings, and commentaries, as well as studies that only described the general characteristics of human-CA interactions or exclusively tested general features of human-technology interaction using CAs.

They also excluded research on CA applications aimed at improving cognitive, social, physical, or educational health, and studies emphasizing CA usage solely for monitoring or assessment purposes. In addition, they excluded studies using semi- or non-automated CAs and those targeting individuals aged >25 years.

Two independent researchers screened the records, and a third researcher resolved disagreements. Extracted data included general, technological, interventional, and research characteristics.

General aspects included publication year, country, and authors, whereas technological aspects included the conversational system type, name, communication modality, availability, and embodiment type.

Interventional characteristics assessed included the targeted mental wellness outcome, scope, frequency, duration, theoretical framework, and whether the CA was a standalone intervention.

Research characteristics included participants' information, study methodology and design, stage of research, and main results.

Results

Of the 9,905 initially identified records, 6,874 underwent title-abstract screening and 152 underwent full-text screening. Ultimately, 25 eligible records, comprising 1,707 participants, were analyzed.

In total, 21 agents were identified; most were disembodied chatbots, with the remainder being robots and virtual representations. The most frequently studied agents were Paro, Nao, and Woebot.

The dialog systems used by the CAs predominantly relied on machine learning and natural language processing (n=12), with some using predetermined dialog systems in which interactions were matched and assembled dynamically to user input.

Most CAs targeted anxiety (n=12), followed by depression, psychiatric well-being, general distress, and mood. Most records labeled the conversational agent applications as interventions, focusing on preventive measures for the general public and at-risk individuals.

Nineteen studies reported the duration of interventions, with most lasting two to four weeks (eight studies). Seventeen studies reported theoretical frameworks for the interventions: cognitive behavioral therapy (CBT) was applied in most interventions, and 14 automated CA applications also mentioned positive psychology as a framework.

Other theories included interpersonal theory, person-centered theory, the metacognitive intervention of narrative imagery, motivational interview, transtheoretical approach, dialectical behavioral theory, and emotion-focused theory.

Sample sizes ranged from eight to 234 participants, who were primarily recruited from educational, community, and healthcare settings; the mean age was 17 years, and 58% were female.

Fifteen studies reported feasibility outcomes, including engagement, retention/adherence rate, acceptability, user satisfaction, system usability, safety, and functionality.

Two studies reported safety issues, with >50% of individuals reporting at least one adverse effect despite high feasibility. Fifteen studies reported anxiety outcomes, with five reporting a significant positive difference compared to controls.

A randomized controlled trial found an improvement in medical procedure-related anxiety for participants undergoing more invasive procedures and with more frequent exposure to medical procedures.

Nine studies reported depression outcomes, with five showing a significant difference favoring automated CAs over controls.

In uncontrolled trials, one showed a minimal change in depression scores, and two studies showed a significant improvement in psychological well-being but no significant effect on subjective happiness.

Conclusion

Based on the review findings, automated CAs may improve mental health outcomes, especially anxiety and depression; however, further research is needed to clarify their effectiveness and potential limitations.

The field is rapidly expanding, with advanced technical capabilities, especially in high-income countries.

Future research should include safety evaluations, address a broader range of clinical problems, use larger sample sizes, and conduct cost-effectiveness studies to inform affordability in low- and middle-income countries.


Written by

Pooja Toshniwal Paharia

Dr. Toshniwal Paharia focuses on evidence-based clinical-radiological diagnosis and management of oral lesions and conditions and associated maxillofacial disorders.

