Racial and other biases in AI algorithms for healthcare can be tackled with public support

Members of the public are being asked to help remove biases against racial and other disadvantaged groups in artificial intelligence algorithms used in healthcare.

Health researchers are calling for support to address the risk that 'minoritized' groups, who are actively disadvantaged by social constructs, will not see the future benefits of AI in healthcare. The team, led by the University of Birmingham and University Hospitals Birmingham, writes in Nature Medicine today on the launch of a consultation on a set of standards intended to reduce biases that are known to exist in AI algorithms.

There is growing evidence that some AI algorithms work less well for certain groups of people, particularly those in minoritized racial and ethnic groups. Some of this is caused by biases in the datasets used to develop AI algorithms. This means patients from Black and minoritized ethnic groups may receive inaccurate predictions, leading to misdiagnosis and the wrong treatments.

STANDING Together is an international collaboration that will develop best-practice standards for healthcare datasets used in artificial intelligence, ensuring they are diverse and inclusive and do not leave underrepresented or minoritized groups behind. The project is funded by the NHS AI Lab and The Health Foundation, with funding administered by the National Institute for Health and Care Research, the research partner of the NHS, public health and social care, as part of the NHS AI Lab's AI Ethics Initiative.

Dr Xiaoxuan Liu, from the Institute of Inflammation and Ageing at the University of Birmingham and co-lead of the STANDING Together project, said:

"By getting the data foundation right, STANDING Together ensures that 'no-one is left behind' as we seek to unlock the benefits of data-driven technologies like AI. We have opened our Delphi study to the public so we can maximize our reach to communities and individuals. This will help us ensure the recommendations made by STANDING Together truly represent what matters to our diverse community."

Professor Alastair Denniston, Consultant Ophthalmologist at University Hospitals Birmingham and Professor in the Institute of Inflammation and Ageing at the University of Birmingham, is co-lead of the project. Professor Denniston said:

"As a doctor in the NHS, I welcome the arrival of AI technologies that can help us improve the healthcare we offer: diagnosis that is faster and more accurate, treatment that is increasingly personalized, and health interfaces that give greater control to the patient. But we also need to ensure that these technologies are inclusive. We need to make sure that they work effectively and safely for everybody who needs them."

"This is one of the most rewarding projects I have worked on, because it incorporates not only my great interest in the use of accurate, validated data and in good documentation to assist discovery, but also the pressing need to involve minority and underserved groups in research that benefits them. In the latter group, of course, are women."

Jacqui Gath, Patient Partner, STANDING Together Project

The STANDING Together project is now open for public consultation as part of a Delphi consensus study. The researchers are inviting members of the public, medical professionals, researchers, AI developers, data scientists, policy makers and regulators to help review the standards and ensure they work for everyone.

Journal reference:

Ganapathi, S., et al. (2022) Tackling bias in AI datasets through the STANDING Together initiative. Nature Medicine. doi.org/10.1038/s41591-022-01987-w.
