AI models for mental health hit generalizability snag

Scientists from Yale and the University of Cologne have shown that statistical models created with artificial intelligence (AI) can predict very accurately whether people with schizophrenia will respond to a medication. However, the models are highly context-dependent and cannot be generalized. The results were published in Science.

In a recent study, scientists investigated the accuracy of AI models that predict whether people with schizophrenia will respond to antipsychotic medication. Statistical models from the field of artificial intelligence (AI) have great potential to improve decision-making in medical treatment. However, data from medical treatment that can be used to train these models are not only rare but also expensive to obtain. As a result, the predictive accuracy of such statistical models has so far been demonstrated only in a few data sets of limited size.

In the current work, the scientists examined the potential of AI models by testing how accurately they predict treatment response to antipsychotic medication for schizophrenia across several independent clinical trials. The results of the new study, which involved researchers from the Faculty of Medicine of the University of Cologne and Yale, show that the models were able to predict patient outcomes with high accuracy within the trial in which they were developed. However, when applied outside the original trial, they performed no better than random predictions. Pooling data across trials did not improve predictions either. The study, 'Illusory generalizability of clinical prediction models', was published in Science.
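The failure mode described above can be illustrated with a small simulation (this is an illustrative sketch, not the study's actual code or data): a model is trained on one simulated "trial" and evaluated both on held-out patients from the same trial and on a second trial whose predictor-outcome relationships differ, as might happen across sites, populations, or protocols.

```python
# Illustrative sketch: within-trial accuracy vs. cross-trial generalization.
# All data here are simulated; the trial structure and effect sizes are
# assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_trial(n, weights, rng):
    """Simulate one trial: patient features X and binary treatment response y."""
    X = rng.normal(size=(n, len(weights)))
    p = 1 / (1 + np.exp(-(X @ weights)))  # logistic outcome model
    y = rng.binomial(1, p)
    return X, y

# Trial A and trial B have partly opposite feature-outcome relationships,
# mimicking context-dependent predictors across study centres.
X_a, y_a = simulate_trial(1000, np.array([2.0, -1.5, 0.5]), rng)
X_b, y_b = simulate_trial(1000, np.array([-2.0, 1.5, 0.5]), rng)

# Train on part of trial A; evaluate within-trial on held-out trial A data.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_a, y_a, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_tr, y_tr)

auc_within = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_cross = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])

print(f"within-trial AUC: {auc_within:.2f}")  # well above chance (0.5)
print(f"cross-trial  AUC: {auc_cross:.2f}")   # at or below chance
```

The point of the sketch is that high within-trial accuracy says nothing about performance in a new context: the same fitted model scores well on held-out patients from its own trial yet fails on a trial where the predictors behave differently.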

The study was led by leading scientists in the field of precision psychiatry, an area of psychiatry that uses data-driven models to determine targeted therapies and suitable medications for individual patients or patient groups.

"Our goal is to use novel models from the field of AI to treat patients with mental health problems in a more targeted manner."

Dr Joseph Kambeitz, Professor of Biological Psychiatry at the Faculty of Medicine of the University of Cologne and the University Hospital Cologne

"Although numerous initial studies have demonstrated the success of such AI models, their robustness has not yet been shown. And this reliability is of great importance for everyday clinical use. We have strict quality requirements for clinical models, and we also have to ensure that models provide good predictions in different contexts," says Kambeitz. The models should provide equally good predictions whether they are used in a hospital in the USA, Germany or Chile.

The results of the study show that the generalizability of AI model predictions across different study centres cannot be ensured at the moment. This is an important signal for clinical practice and shows that further research is needed to actually improve psychiatric care. In ongoing studies, the researchers hope to overcome these obstacles. In cooperation with partners from the USA, England and Australia, they are working both to examine larger patient groups and data sets in order to improve the accuracy of AI models, and to incorporate other data modalities such as biological samples and new digital markers such as language, motion profiles and smartphone usage.

Journal reference:

Chekroud, A. M., et al. (2024) Illusory generalizability of clinical prediction models. Science. doi.org/10.1126/science.adg8538.
