AI and human scientists collaborate to discover new cancer drug combinations

An 'AI scientist', working in collaboration with human scientists, has found that combinations of cheap and safe drugs – used to treat conditions such as high cholesterol and alcohol dependence – could also be effective at treating cancer, pointing to a promising new approach to drug discovery.

The research team, led by the University of Cambridge, used the GPT-4 large language model (LLM) to uncover hidden patterns buried in the mountains of scientific literature and identify potential new cancer drugs.

To test their approach, the researchers prompted GPT-4 to identify potential new drug combinations that could have a significant impact on a breast cancer cell line commonly used in medical research. They instructed it to avoid standard cancer drugs, identify drugs that would attack cancer cells while not harming healthy cells, and prioritise drugs that were affordable and approved by regulators.
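As a rough illustration of how such constraints might be expressed in a prompt – an assumption for clarity, not the authors' actual prompt – a sketch could look like the following, where the cell line placeholder and the query_llm helper are hypothetical:

```python
# Hypothetical sketch of a constrained prompt of the kind described above.
# `query_llm` is a stand-in for whatever GPT-4 interface the team used;
# the cell line is left as a placeholder rather than a specific name.

CELL_LINE = "<breast cancer cell line used in the study>"

PROMPT = (
    f"Suggest drug combinations likely to be effective against the {CELL_LINE}. "
    "Constraints: "
    "(1) avoid standard cancer drugs; "
    "(2) prefer drugs expected to attack cancer cells while sparing healthy cells; "
    "(3) prioritise affordable, regulator-approved drugs. "
    "For each pair, give a brief mechanistic rationale."
)

def suggest_combinations(query_llm):
    """Return the model's raw suggestions for human review."""
    return query_llm(PROMPT)
```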

The drug combinations suggested by GPT-4 were then tested by human scientists, both in combination and individually, to measure their effectiveness against breast cancer cells.

In the first lab-based test, three of the 12 drug combinations suggested by GPT-4 worked better than current breast cancer drugs. The LLM then learned from these tests and suggested a further four combinations, three of which also showed promising results.

The results, reported in the Journal of the Royal Society Interface, represent the first instance of a closed-loop system where experimental results guided an LLM, and LLM outputs – interpreted by human scientists – guided further experiments. The researchers say that tools such as LLMs are not a replacement for scientists, but could instead act as supervised AI researchers, with the ability to originate, adapt and accelerate discovery in areas like cancer research.
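A minimal sketch of what such a closed loop could look like in code is given below. This is an assumed structure for illustration only, with query_llm, human_review and run_lab_assay as hypothetical stand-ins for the GPT-4 call, the scientists' vetting step and the wet-lab experiments described in the study:

```python
# Illustrative closed-loop sketch (assumed structure, not the authors' pipeline).
# query_llm     - stand-in for the GPT-4 call
# human_review  - stand-in for scientists interpreting/filtering suggestions
# run_lab_assay - stand-in for the lab experiments whose results feed back

def closed_loop(base_prompt, query_llm, human_review, run_lab_assay, rounds=2):
    history = []  # accumulated (combination, lab result) pairs
    for _ in range(rounds):
        prompt = base_prompt
        if history:
            prompt += "\nPrevious experimental results:\n" + "\n".join(
                f"- {combo}: {result}" for combo, result in history
            )
        suggestions = query_llm(prompt)       # LLM proposes drug combinations
        vetted = human_review(suggestions)    # human scientists select/interpret
        for combo in vetted:
            result = run_lab_assay(combo)     # effectiveness measured in the lab
            history.append((combo, result))   # results inform the next round
    return history
```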

Often, LLMs such as GPT-4 return results that aren't true, known as hallucinations. But in scientific research, hallucinations can sometimes be a benefit, if they lead to new ideas that are worth testing.

"Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn't thought of before," said Professor Ross King from Cambridge's Department of Chemical Engineering and Biotechnology, who led the research. "This can be useful in areas such as drug discovery, where there are many thousands of compounds to search through."

Based on the prompts provided by the human scientists, GPT-4 selected drugs based on the interplay between biological reasoning and hidden patterns in the scientific literature.

"This is not automation replacing scientists, but a new kind of collaboration. Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner, rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach."

Dr. Hector Zenil, co-author from King's College London

The hallucinations – normally viewed as flaws – became a feature, generating unconventional combinations worth testing and validating in the lab. The human scientists inspected the mechanistic reasons the LLM gave for suggesting these combinations in the first place, feeding results back into the system over multiple iterations.

By exploring subtle synergies and overlooked pathways, GPT-4 helped identify six promising drug pairs, all tested through lab experiments. Among the combinations, simvastatin (commonly used to lower cholesterol) and disulfiram (used in alcohol dependence) stood out against breast cancer cells. Some of these combinations show potential for further research in therapeutic repurposing.

These drugs, while not traditionally associated with cancer care, could be potential cancer treatments, although they would first have to go through extensive clinical trials.

"This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time," said Zenil.

"The capacity of supervised LLMs to propose hypotheses across disciplines, incorporate prior results, and collaborate across iterations marks a new frontier in scientific research," said King. "An AI scientist is no longer a metaphor without experimental validation: it can now be a collaborator in the scientific process."

The research was supported in part by the Knut and Alice Wallenberg Foundation and the UK Engineering and Physical Sciences Research Council (EPSRC).

Journal reference:

Abdel-Rehim, A., et al. (2025) Scientific Hypothesis Generation by Large Language Models: Laboratory Validation in Breast Cancer Treatment. Journal of The Royal Society Interface. https://doi.org/10.1098/rsif.2024.0674
