Chatbots for mental health pose new challenges for US regulatory framework

NewsGuard 100/100 Score

In a recent review published in Nature Medicine, a group of authors examined the regulatory gaps and potential health risks of artificial intelligence (AI)-driven wellness apps, especially in handling mental health crises without sufficient oversight.

Study: The health risks of generative AI-based wellness apps. Image Credit: NicoElNino/Shutterstock.comStudy: The health risks of generative AI-based wellness apps. Image Credit: NicoElNino/Shutterstock.com

Background 

The rapid advancement of AI chatbots such as Chat Generative Pre-trained Transformer (ChatGPT), Claude, and Character AI is transforming human-computer interaction by enabling fluid, open-ended conversations.

Projected to grow into a $1.3 trillion market by 2032, these chatbots provide personalized advice, entertainment, and emotional support. In healthcare, particularly mental health, they offer cost-effective, stigma-free assistance, helping bridge accessibility and awareness gaps.

Advances in natural language processing allow these 'generative' chatbots to deliver complex responses, enhancing mental health support.

Their popularity is evident in the millions using AI 'companion' apps for various social interactions. Further research is essential to evaluate their risks, ethics, and effectiveness.

Regulation of generative AI-based wellness apps in the United States (U.S.)

Generative AI-based applications, such as companion AI, occupy a regulatory gray area in the U.S. because they are not explicitly designed as mental health tools but are often used for such purposes.

These apps are governed under the Food and Drug Administration’s (FDA)’s distinctions between 'medical devices' and 'general wellness devices.' Medical devices require strict FDA oversight and are intended for diagnosing, treating, or preventing disease.

In contrast, general wellness devices promote a healthy lifestyle without directly addressing medical conditions and thus do not fall under stringent FDA regulation.

Most generative AI apps are classified as general wellness products and make broad health-related claims without promising specific disease mitigation, positioning them outside the stringent regulatory requirements for medical devices.

Consequently, many apps using generative AI for mental health purposes are marketed without FDA oversight, highlighting a significant area in regulatory frameworks that may require reevaluation as technology progresses.

Health risks of general wellness apps utilizing generative AI

The FDA’s current regulatory framework distinguishes general wellness products from medical devices, a distinction not fully equipped for the complexities of generative AI.

This technology, featuring machine learning and natural language processing, operates autonomously and intelligently, making it hard to predict its behavior in unanticipated scenarios or edge cases.

Such unpredictability, coupled with the opaque nature of AI systems, raises concerns about potential misuse or unexpected outcomes in wellness apps marketed for mental health benefits, highlighting a need for updated regulatory approaches.

The need for empirical evidence in AI chatbot research

Empirical studies on mental health chatbots are still nascent, mostly focusing on rule-based systems within medical devices rather than conversational AI in wellness apps.

Research highlights that while scripted chatbots are safe and somewhat effective, they lack the personalized adaptability of human therapists.

Additionally, most studies examine the technological constraints of generative AI, like incorrect outputs and the opacity of "black box" models, rather than user interactions.

There is a crucial lack of understanding regarding how users engage with AI chatbots in wellness contexts. Researchers propose analyzing real user interactions with chatbots to identify risky behaviors and testing how these apps respond to simulated crisis scenarios.

This dual-step approach involves direct analysis of user data and "app audits," but is often hindered by data access restrictions imposed by app companies.

Studies show that AI chatbots frequently mishandle mental health crises, underscoring the need for improved response mechanisms. 

Regulatory challenges of generative AI in non-healthcare uses

Generative AI applications not intended for mental health can still pose risks, necessitating broader regulatory scrutiny beyond current FDA frameworks focused on intended use.

Regulators might need to enforce proactive risk assessments by developers, especially in general wellness AI applications.

Additionally, the potential health risks associated with AI apps call for clearer oversight and guidance. An alternative approach could include tort liability for failing to manage health-relevant scenarios, such as detecting and addressing suicidal ideation in users.

These regulatory measures are crucial to balance innovation with consumer safety in the evolving landscape of AI technology.

Strategic risk management in generative AI wellness applications

App managers in the wellness industry utilizing generative AI must proactively manage safety risks to avoid potential liabilities and prevent brand damage and loss of user trust.

Managers must assess whether the full capabilities of advanced generative AI are necessary or if more constrained, scripted AI solutions would suffice.

Scripted solutions provide more control and are suited for sectors requiring strict oversight like health and education, offering built-in guardrails but possibly limiting user engagement and future growth.

Conversely, more autonomous generative AI can enhance user engagement through dynamic and human-like interactions but increases the risk of unforeseen issues. 

Enhancing safety in generative AI wellness apps

Managers of AI-based wellness applications should prioritize user safety by informing users they are interacting with AI, not humans, equipping them with self-help tools, and optimizing the app's safety profile.

While basic steps include informing and equipping users, the ideal approach involves all three actions to enhance user welfare and mitigate risks proactively, safeguarding both consumers and the brand.

Journal reference:
Vijay Kumar Malesu

Written by

Vijay Kumar Malesu

Vijay holds a Ph.D. in Biotechnology and possesses a deep passion for microbiology. His academic journey has allowed him to delve deeper into understanding the intricate world of microorganisms. Through his research and studies, he has gained expertise in various aspects of microbiology, which includes microbial genetics, microbial physiology, and microbial ecology. Vijay has six years of scientific research experience at renowned research institutes such as the Indian Council for Agricultural Research and KIIT University. He has worked on diverse projects in microbiology, biopolymers, and drug delivery. His contributions to these areas have provided him with a comprehensive understanding of the subject matter and the ability to tackle complex research challenges.    

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Kumar Malesu, Vijay. (2024, May 01). Chatbots for mental health pose new challenges for US regulatory framework. News-Medical. Retrieved on May 15, 2024 from https://www.news-medical.net/news/20240501/Chatbots-for-mental-health-pose-new-challenges-for-US-regulatory-framework.aspx.

  • MLA

    Kumar Malesu, Vijay. "Chatbots for mental health pose new challenges for US regulatory framework". News-Medical. 15 May 2024. <https://www.news-medical.net/news/20240501/Chatbots-for-mental-health-pose-new-challenges-for-US-regulatory-framework.aspx>.

  • Chicago

    Kumar Malesu, Vijay. "Chatbots for mental health pose new challenges for US regulatory framework". News-Medical. https://www.news-medical.net/news/20240501/Chatbots-for-mental-health-pose-new-challenges-for-US-regulatory-framework.aspx. (accessed May 15, 2024).

  • Harvard

    Kumar Malesu, Vijay. 2024. Chatbots for mental health pose new challenges for US regulatory framework. News-Medical, viewed 15 May 2024, https://www.news-medical.net/news/20240501/Chatbots-for-mental-health-pose-new-challenges-for-US-regulatory-framework.aspx.

Comments

The opinions expressed here are the views of the writer and do not necessarily reflect the views and opinions of News Medical.
Post a new comment
Post

While we only use edited and approved content for Azthena answers, it may on occasions provide incorrect responses. Please confirm any data provided with the related suppliers or authors. We do not provide medical advice, if you search for medical information you must always consult a medical professional before acting on any information provided.

Your questions, but not your email details will be shared with OpenAI and retained for 30 days in accordance with their privacy principles.

Please do not ask questions that use sensitive or confidential information.

Read the full Terms & Conditions.

You might also like...
Pressure to be "perfect" leads to unhealthy impacts on both parents and children, study finds