AI in Healthcare: How generative tools are transforming clinical practice

Thought Leaders: Dr Rahul Goyal, Clinical Executive for the EMEALAAP region, Elsevier

In this interview, News-Medical speaks with Dr Rahul Goyal, Clinical Executive for the EMEALAAP region at Elsevier, about how AI is reshaping clinical decision-making, patient care, and workflow in healthcare. Drawing on real cases and clinical experience, Dr Goyal discusses the opportunities, challenges, and best practices for integrating generative AI into everyday medicine.

Can you please introduce yourself and your role in medicine?

I originally trained as an eye physician, a field where technology is used consistently in everyday workflows. Later, as a general physician, I realized how much I missed that technology support in decision-making. Over time, I gravitated toward AI tools embedded in clinician workflows, bridging the gap between clinical judgment and digital augmentation.

How has your perspective on technology in medicine evolved since you first began practising?

When I started, we didn't have access to much technology. In fact, as one of the early adopters, the only technology I had was a Palm Pilot [a handheld personal digital assistant] with a formulary on it. We didn't have online equivalents of the British National Formulary or NICE guidelines back then. So things have moved quite a bit, from patchy support to being integral to decision-making.

I feel healthcare has always been a laggard, but for good reasons like safety, regulatory concerns about patient data privacy, and ethics. The advances in technology have enabled those barriers to be removed to a large extent because of safer data migration and regulatory compliance. Change management has taken quite a while, but I think we are now at the cusp of the next revolution in clinical workflows.

That said, not every technology is a good idea; it has to be used correctly. Clinicians will always ask, “What’s in it for me?” not monetarily but in terms of time saved, reducing cognitive burden, and better patient outcomes. The sweet spot is when clinicians co-design solutions, and are not just passive users.

Can you share any specific “practice-changing moments” where AI directly improved patient care or changed your clinical decision-making?

Yes. One patient was an elderly lady who came to urgent care with a leg infection and pre-existing renal impairment, along with a long medication list that included blood thinners and diabetes medication. I had to prioritize treating her leg infection while ensuring that whatever I prescribed would not worsen her kidney function or interact with any of her current medications. Working this out manually for every medication she took would have taken me at least 45 minutes. I have the privilege of using ClinicalKey AI, a conversational generative AI tool that draws its answers from evidence-based, internationally published and accepted guidelines. I could input her case scenario, just as you would when talking to a colleague, and got a referenced, evidence-based answer in seconds: when to monitor kidney function, potential interactions, and warning signs to caution the patient about, all in one plan. This saved time, spared both of us an unnecessary follow-up appointment, gave her clarity immediately, and improved safety.

Another case that is really close to my heart involved a 7-year-old boy with a recurrent, intermittent limp. MRI scans, blood tests, and other investigations all returned normal, yet we couldn't explain why he still limped every three months. What could we be missing? When we fed the patient's history (Indian ethnicity, recent travel to visit grandparents in India, familial TB exposure) into ClinicalKey AI, it suggested considering tuberculosis. Further tests were ordered, and two weeks later we received a confirmed TB diagnosis. Treatment began, and the child recovered fully. In that instance, AI surfaced a possibility we humans had overlooked through bias or omission, and gave us a valuable perspective.

Have you noticed changes in the amount or quality of time you can spend with patients since integrating AI into your workflow?

Absolutely. To build trust and rapport as humans, we must have eye contact. Tools that reduce the burden of lengthy note-taking or quickly pull together fragmented data, like we often see in urgent care, free up time for genuine face-to-face interactions.

For example, documentation tools (speech recognition and note drafting) can transcribe history, examination findings, and initial impressions, so I only need to review and correct the notes. On average, I save about four minutes per patient, which is a lot in a clinical setting, considering I only get around 10 minutes per consultation. I can now use that time to reassure, explain, and build rapport.

Patients typically forget around 80% of what’s discussed because of information overload. Freeing up this time means being able to spend time giving patients credible resources to take away, which also improves compliance and trust in the service we provide. With the current workforce pressures and demands, we don’t have the luxury of ignoring these time gains.

[Image: A medical professional and patient reading digital information in a clinic. Image credit: PeopleImages/Shutterstock.com]

Were there any early challenges or resistance points among colleagues, and how did you address them?

Yes: skepticism, fear of “black box” AI, and suspicion that AI might replace clinicians.

Many clinicians want to look “under the hood” to see why an AI arrived at a conclusion. The key is transparency: Design tools as a “glass box,” not opaque black boxes, so clinicians can understand the context, sources, logic, and limitations of the tools they use.

Also, we must emphasize that AI is assistive, not a replacement. AI is very dependent on what prompts it is provided with. An inexperienced person may not know what is clinically relevant, and so the AI-generated output will be completely different. This is why AI must have human intervention. The more we share successful examples and our practice-changing moments, and allow clinicians to become co-creators of tools, the more adoption spreads. It becomes part of the clinical conversation: “Why was this suggestion made? How does it integrate with my judgement?”

How do you explain AI-supported decisions to patients in a way that builds trust and maintains transparency?

I treat AI as another reference tool, just one that can provide evidence immediately. When discussing options, I sometimes query the AI in real time and show the patient what the evidence says.

For example, when we were weighing how to manage a patient's gallstone, we used AI to compare the risks of immediate intervention versus extended observation, based on the stone's size and the potential complications of each option. I explained to the patient that I did this to bring the best evidence to the decision. The referral letter I then wrote to the consultant drew on the same evidence, which made the case for referral clear, reduced the time to a decision, and avoided letters going back and forth or the referral being rejected. So time to care is reduced, and you're reinforcing the value of transparent AI to both a colleague and a patient.

It helps to frame it this way: the clinician remains in control, and the AI is there to assist. The patient can see we are not guessing; we are referencing curated, validated suggestions, and I can explain why I trust or doubt the AI’s suggestions.

How do you see AI reshaping teamwork and communication within multidisciplinary care teams?

Diverse specialities converge in a multidisciplinary team (MDT). For a single patient, we might have a cardiologist, a rehab nurse, and an oncologist working together. This is where AI can “speak all their languages” by summarizing relevant evidence for each speciality and integrating perspectives.

A trial in the Middle East is comparing human MDT consensus to ClinicalKey AI’s suggestions in ICU cases. The AI provides referenced reasoning faster, helps justify insurance approvals or diagnostic steps, and lets the team move more efficiently. Thus, AI can reduce friction, speed decision-making, and democratize insight across specialties.

[Image: A diverse medical team discussing an x-ray report over a digital tablet in a hospital. Image credit: wavebreakmedia/Shutterstock.com]

The UCL study pointed out delays and governance hurdles. What policy or structural changes would accelerate safe AI rollout across the NHS?

I think the NHS is in a good position to move fast because it has a unified structure. The study highlights that AI tools should meet an agreed set of standards, validated and rigorously assessed for regulatory compliance. This ensures that deployed tools meet high data-quality standards.

The other important factor was how to streamline governance and interoperability between NHS trusts, enabling data use while upholding privacy regulations and addressing ethical concerns. Strategy gets eaten by culture. Collaborative ecosystems need to exist where the community, clinicians, and commissioners come together, building a solid foundation of trust that removes any doubt about what information is and is not shared, and why. I think the NHS is in a strong position to reach that point and set an example for the rest of the world.

What misconceptions about AI in healthcare do you encounter most often among your peers, and how do you address them?

One big misconception is that all AI is the same, that any tool is interchangeable. But there’s a spectrum: some AI is generic, like helping to draft emails, while other tools are responsibly built for clinical workflows and provide evidence-based decision support. The same tool can also have different uses, which needs to be impressed upon people. For example, you use a mobile phone to find information, but also to play a game. It depends on your intended use.

Also, hype leads to overpromising: tools that don’t connect to real clinical workflows tend to fail. Many tools have been built, but the ones that have survived are validated and use evidence-based guidelines and peer-reviewed materials.

Another fear is that AI will replace clinicians. I emphasize that AI is assistive; it supports, not replaces. I also stress that clinicians should take co-ownership of the built tools, ensuring that educators, leaders, and users are in the loop to evaluate and guide AI evolution.

Finally, if you could give one piece of advice to NHS hospitals just beginning their AI journey, what would it be?

Start by identifying actual ‘pain points’ in your workflow. Every trust will have some commonalities, and everyone will have particular quirks depending on their service. Pick one or two high-impact tasks (e.g., documentation or evidence lookup) and test AI solutions.

Ensure you think broadly about including staff at all levels (doctors, nurses, admin). Aim for one tool, one login, and make it easy for people to use. Consider how standardization can reduce wastage and administrative time, and increase the time available for patient care. Don’t just adopt a technology because another trust is doing it. The best implementations come when technology is tightly aligned with your real problems.

But if I had to pick one thing, I would identify what works for you and scale from there.

Where can readers find more information?

  • To learn more about ClinicalKey AI, visit here.
  • To explore how clinicians can apply ClinicalKey AI across various clinical scenarios, visit here.
  • If you’d like to build your knowledge and skills to effectively leverage generative AI, check out Elsevier’s Gen AI Academy for Health – CME-accredited course – here.

About the Researcher

Headshot: Dr Rahul Goyal

In 2023, Dr Goyal joined Elsevier as the Clinical Executive for the EMEALAAP region. Prior to joining Elsevier, he served as the Senior Vice President of Clinical Engagement & Adoption at Malaffi, the Abu Dhabi Health Information Exchange. His earlier experience includes serving as Chief Medical Information Officer at Mediclinic, where he led physician-focused digital transformation initiatives. These included the implementation and adoption of electronic health records, single sign-on systems, voice-to-text technologies, and clinical decision support tools.

In addition to his leadership roles, Dr Goyal continues to practice as a Family Physician in the UK. With nearly two decades of clinical experience, he has worked as a partner in multiple UK-based practices. Before completing his training in Family Medicine, he trained as a higher specialist in ophthalmology, gaining expertise as an eye surgeon.

Dr Goyal was recognised as one of the “Future 50 Clinical Leaders in Healthcare IT” by HIMSS in 2019. He is a member of both the Royal College of General Practitioners and the Royal College of Ophthalmologists in the UK. In 2020, he was appointed Adjunct Clinical Assistant Professor at the Mohammed Bin Rashid University in Dubai, UAE.


Written by

Lauren Hardaker

Lauren holds a master’s degree in Medical Microbiology from the University of Manchester, where she also worked as a research assistant with the Manchester Fungal Infection Group. Following her passion for science communication, she trained to become a high school science teacher, focusing on curriculum development of disciplinary knowledge.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Hardaker, Lauren. (2025, November 03). AI in Healthcare: How generative tools are transforming clinical practice. News-Medical. Retrieved on November 03, 2025 from https://www.news-medical.net/news/20251103/AI-in-Healthcare-How-generative-tools-are-transforming-clinical-practice.aspx.

  • MLA

    Hardaker, Lauren. "AI in Healthcare: How generative tools are transforming clinical practice". News-Medical. 03 November 2025. <https://www.news-medical.net/news/20251103/AI-in-Healthcare-How-generative-tools-are-transforming-clinical-practice.aspx>.

  • Chicago

    Hardaker, Lauren. "AI in Healthcare: How generative tools are transforming clinical practice". News-Medical. https://www.news-medical.net/news/20251103/AI-in-Healthcare-How-generative-tools-are-transforming-clinical-practice.aspx. (accessed November 03, 2025).

  • Harvard

    Hardaker, Lauren. 2025. AI in Healthcare: How generative tools are transforming clinical practice. News-Medical, viewed 03 November 2025, https://www.news-medical.net/news/20251103/AI-in-Healthcare-How-generative-tools-are-transforming-clinical-practice.aspx.
