Clinicians more likely to express doubt in medical records of Black patients

Clinicians are more likely to indicate doubt or disbelief in the medical records of Black patients than in those of White patients, a pattern that could contribute to ongoing racial disparities in healthcare. That is the conclusion of a new study analyzing more than 13 million clinical notes, published August 13, 2025 in the open-access journal PLOS One by Mary Catherine Beach of Johns Hopkins University, U.S.

There is mounting evidence that electronic health records (EHR) contain language reflecting the unconscious biases of clinicians, and that this language may undermine the quality of care that patients receive.

In the new study, researchers analyzed 13,065,081 EHR notes written between 2016 and 2023 about 1,537,587 patients by 12,027 clinicians at a large health system in the mid-Atlantic United States. They used artificial intelligence (AI) tools to find which notes had language suggesting the clinician doubted the sincerity or narrative competence of the patient, for example, stating that the patient "claims," "insists," or is "adamant about" their symptoms, or is a "poor historian."
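To give a rough sense of the kind of detection the study automated, here is a minimal keyword sketch in Python. It is illustrative only: the authors used trained NLP models, not a fixed phrase list, and the category labels below are our own shorthand based on the example phrases quoted in the article.

```python
import re

# Example phrases quoted in the article; the real models went far beyond
# simple keyword matching. Category labels are ours, for illustration.
SINCERITY_PHRASES = ["claims", "insists", "adamant about"]  # doubts sincerity
COMPETENCE_PHRASES = ["poor historian"]                     # doubts competence

def flag_credibility_language(note: str) -> dict:
    """Flag whether a note contains credibility-undermining phrases,
    split into sincerity-related and competence-related categories."""
    text = note.lower()

    def contains_any(phrases):
        # Word boundaries avoid false hits inside longer words.
        return any(re.search(r"\b" + re.escape(p) + r"\b", text)
                   for p in phrases)

    return {
        "sincerity": contains_any(SINCERITY_PHRASES),
        "competence": contains_any(COMPETENCE_PHRASES),
    }

example = "Patient claims chest pain began last week; poor historian."
print(flag_credibility_language(example))
```

A real system would also need to handle negation, quoted speech, and context, which is why the authors relied on NLP models rather than keyword lists, and why the article notes those models were accurate but not perfect.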

Overall, fewer than 1% (n=106,523; 0.82%) of the medical notes contained language undermining patient credibility – about half of which undermined sincerity (n=62,480; 0.48%) and half undermined competence (n=52,243; 0.40%). However, notes written about non-Hispanic Black patients, compared to those written about White patients, had higher odds of containing terms undermining the patients' credibility (aOR 1.29; 95% CI 1.27–1.32), sincerity (aOR 1.16; 95% CI 1.14–1.19), or competence (aOR 1.50; 95% CI 1.47–1.54). Moreover, notes written about Black patients were less likely to have language supporting credibility (aOR 0.82; 95% CI 0.79–0.85) than those written about White or Asian patients.

The study was limited in that it drew on a single health system and did not examine the influence of clinician characteristics such as race, age, or gender. Additionally, because the NLP models used had high but not perfect accuracy in detecting credibility-related language, they may have misclassified some notes and thereby under- or overestimated its prevalence.

Still, the authors conclude that clinician documentation undermining patient credibility may disproportionately stigmatize Black individuals, and that the findings likely represent "the tip of an iceberg." They say that medical training should help future clinicians become more aware of unconscious biases, and that AI tools used to help write medical notes should be programmed to avoid biased language. 

The authors add: "For years, many patients – particularly Black patients – have felt their concerns were dismissed by health professionals. By isolating words and phrases suggesting that a patient may not be believed or taken seriously, we hope to raise awareness of this type of credibility bias with the ultimate goal of eliminating it."

Journal reference:

Beach, M. C., et al. (2025) Racial bias in clinician assessment of patient credibility: Evidence from electronic health records. PLOS One. doi.org/10.1371/journal.pone.0328134
