New tool identifies harmful nutrition misinformation and evaluates potential risk

A new tool that not only identifies diet and nutrition misinformation online but also evaluates the content's potential to cause harm has been developed by a team of UCL researchers. 

Unlike existing tools, which offer binary judgements of whether content is 'true' or 'false', this first-of-its-kind tool addresses misinformation that is not overtly false but still has the potential to dangerously mislead, particularly vulnerable groups. 

The tool's developers identified that 'true' or 'false' assessments fail to capture the cumulative and contextual ways in which misleading health information can influence behaviour and decision-making. 

Health misinformation spread online presents a major public health threat, according to the WHO. From restrictive diets and extreme fasting to the unsafe use of dietary supplements (estimated to account for 20% of drug-induced liver injuries in the US alone), misinformation can have disastrous, sometimes fatal, consequences. 

"When it comes to diet and nutrition, misinformation often operates through selective framing that masks potential health risks. Harmful misleading content tends to fly under fact-checkers' radars and escape meaningful oversight until high-profile cases make the headlines." 

Alex Ruani, lead author and developer, UCL Institute of Education 

The tool, called the Diet-Nutrition Misinformation Risk Assessment Tool, or Diet-MisRAT, is a rule-based content analysis model that adapts the World Health Organisation's (WHO) approach to assessing hazardous exposures in physical settings to digital information environments. It treats online content as the 'medium' and its misleading traits as 'risk agents' known to increase recipient susceptibility. It ranks material as green, amber or red according to a weighted misinformation risk score. 

Within this framework, the potential for harm depends on the content, its context and how likely the recipient is to be misled. By broadening the definition of misinformation beyond the factually false, this tool will help policymakers, digital platforms and regulators implement safeguards, prioritize their responses and take proportional action when faced with harmful misleading content. 

Diet-MisRAT's results were tested and calibrated through five rounds of verification, including against the judgements of nearly 60 specialists in dietetics, nutrition and public health. Testing showed that the tool delivers highly reliable assessments. The process also identified the core traits of misinformation (inaccuracy, hazardous omissions and manipulative framing) and the indicators that increase the risk potential (the method and conditions in which the content is consumed, and its prominence). 
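As a rough illustration of the rule-based, weighted scoring described above: the actual indicators, weights and tier thresholds used by Diet-MisRAT are not given in this article, so every name and value below is invented for the sketch. The core idea is that each flagged trait or exposure indicator contributes a weight to a total score, which is then mapped to a green, amber or red tier.

```python
# Illustrative sketch only. Diet-MisRAT's real indicators, weights and
# thresholds are not published in this article; all values here are invented.

# Hypothetical weights for the traits and exposure indicators named in the study.
RISK_AGENT_WEIGHTS = {
    "inaccuracy": 3.0,
    "hazardous_omission": 2.5,
    "manipulative_framing": 2.0,
    "high_prominence": 1.5,  # exposure indicator, e.g. algorithmic amplification
}

# Score thresholds for the traffic-light tiers, checked highest first.
TIER_THRESHOLDS = [(6.0, "red"), (3.0, "amber"), (0.0, "green")]

def risk_tier(flags: set[str]) -> tuple[float, str]:
    """Sum the weights of flagged risk agents and map the total to a tier."""
    score = sum(RISK_AGENT_WEIGHTS.get(flag, 0.0) for flag in flags)
    for threshold, tier in TIER_THRESHOLDS:
        if score >= threshold:
            return score, tier
    return score, "green"

# e.g. content with a false claim that also omits dosing risks
print(risk_tier({"inaccuracy", "hazardous_omission"}))  # → (5.5, 'amber')
```

In a real system the weights and thresholds would be the part calibrated against expert judgement, which is what the five rounds of verification with specialists appear to have done.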

For example, when assessing content containing claims such as 'it is safer to give your child high-dose vitamin A than the MMR vaccine', the tool classifies this into the red, critical-risk tier, as it presents false safety framing, omits the risks of excessive vitamin A dosing and undermines public health guidance, increasing the likelihood of harmful real-world decisions. 

Co-author Professor Anastasia Kalea (UCL Division of Medicine) said: "It is essential to include specialist expertise when assessing misinformation risk. Our tool was calibrated and validated with feedback from nearly 60 subject-matter experts. This helps ensure that assessments of potential harm reflect appropriate professional judgement." 

By isolating the features that make content misleading and linking them to potential recipient outcomes, the researchers were able to paint a picture of what makes content high-risk and what traits determine the scale of the impact. 

Examples of harm associated with the online spread of health misinformation include the 2025 case of cholesterol-induced skin lesions diagnosed in a man who had adopted a carnivore diet, a trend disproportionately amplified by social media algorithms, particularly within 'manosphere' communities. 

Another example was the reported case of a person being hospitalised weeks after following incorrect AI-generated advice to replace sodium chloride (salt) with sodium bromide, a substance with no dietary role and which is toxic if regularly ingested over time. Online misinformation has also been linked to decisions to abandon life-saving cancer treatment in favour of unproven dietary alternatives. 

This study contributes to ongoing discussions about how digital platforms, public health authorities and policymakers should respond to the growing influence of misleading health advice online, especially across social media, search summaries and generative AI. 

Ruani said: "In public health we assess exposure to risk factors. We believe misleading health information should be treated in the same way. Some misinformation can lead to serious harm, so mitigation strategies should be proportionate to the level of risk. The more severe the potential harm, the stronger the response should be. 

"When AI chatbots speak confidently, users may assume their advice is safe. If we can properly measure how misleading a piece of advice is and how much harm it may pose, we can build stronger safeguards into models and AI agents before deployment rather than reacting after harm occurs." 

Co-author Professor Michael Reiss (UCL Institute of Education) said: "By spelling out the typical patterns that distort diet, nutrition or supplement information, the tool's risk assessment criteria can be taught and applied in education and professional training. This will help learners understand not just whether something is wrong, but how and why it can skew judgement, equipping them to recognize and challenge it." 

Journal reference:

Ruani, A., et al. (2026). Development and validation of a tool for detecting misinformation risk in diet, nutrition, and health content (Diet-MisRAT). Scientific Reports. DOI: 10.1038/s41598-026-40534-2. https://www.nature.com/articles/s41598-026-40534-2
