Addressing inaccurate race and ethnicity data in medical AI

Inaccurate race and ethnicity data in electronic health records (EHRs) can negatively affect patient care as artificial intelligence (AI) is increasingly integrated into healthcare. Because hospitals and providers collect such data inconsistently and often struggle to classify individual patients accurately, AI systems trained on these datasets can inherit and perpetuate racial bias.

In a new publication in PLOS Digital Health, experts in bioethics and law call for immediate standardization of methods for collecting race and ethnicity data, and for developers to warrant the quality of race and ethnicity data in medical AI systems. The research synthesizes concerns about why patient race data in EHRs may not be accurate, identifies best practices for healthcare systems and medical AI researchers to improve data accuracy, and provides a new template for medical AI developers to transparently warrant the quality of their race and ethnicity data.

Lead author Alexandra Tsalidis, MBE, notes: "If AI developers heed our recommendation to disclose how their race and ethnicity data were collected, they will not only advance transparency in medical AI but also help patients and regulators critically assess the safety of the resulting medical devices. Just as nutrition labels inform consumers about what they're putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based healthcare tools."

"Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare. This article provides a concrete method that can be implemented to help address these concerns."

Francis Shen, JD, PhD, senior author

While more work needs to be done, the article offers a starting point, suggests co-author Lakshmi Bharadwaj, MBE. "An open dialogue regarding best practices is a vital step, and the approaches we suggest could generate significant improvements."

The research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program, and by an NIH BRAIN Neuroethics grant (R01MH134144).

Journal reference:

Tsalidis, A., Bharadwaj, L., & Shen, F. X. (2025). Standardization and accuracy of race and ethnicity data: Equity implications for medical AI. PLOS Digital Health. https://doi.org/10.1371/journal.pdig.0000807
