New ethical framework developed to tackle ethical issues in clinical AI data sharing


Data protection is an extremely topical issue in an increasingly connected digital world, and while medicine adopts and develops more beneficial digital tools for research and development, more questions surrounding the safety of patient data arise.

Clinical data. Image Credit: PopTika / Shutterstock.com

A special report published in Radiology argues that clinical data should be made available for research, development, and other secondary purposes, such as the development of artificial intelligence algorithms.

Artificial intelligence has the potential to significantly accelerate medical imaging analysis, but to learn to identify the conditions it is designed to detect, an algorithm must be trained on huge amounts of data from medical examinations and images, such as mammograms and CT scans, among many others. This raises important questions about the ethical framework that will safeguard patient data when it is shared.

Dr. David B. Larson, MD, MBA, of the Stanford University School of Medicine in Stanford, California, led the report and explained that “clinical data should be made available to researchers and developers after it has been aggregated and all patient identifiers have been removed,” but that “all who interact with such data should be held to high ethical standards, including protecting patient privacy and not selling clinical data.”

Previous debate over the sharing of clinical data has focused on ownership: either the patient owns their medical data, or the institution in which the data was generated owns it. Dr. Larson and his colleagues, however, have devised a third option that assigns no ownership of the data at all when it is used for secondary purposes.

Dr. Larson and the research team at Stanford University developed a framework specifically for the sharing and use of clinical data in the development of AI technology. Larson acknowledges that access to digital clinical data and processing tools can “dramatically accelerate our ability to gain understanding and develop new applications that can benefit patients and populations,” but notes that ethical questions around how that data is used “often preclude the sharing of that information.”

He continues:

“Medical data, which are simply recorded observations, are acquired for the purposes of providing patient care. When that care is provided, that purpose is fulfilled, so we need to find another way to think about how these recorded observations should be used for other purposes.

“We believe that patients, provider organizations, and algorithm developers all have ethical obligations to help ensure that these observations are used to benefit future patients, recognizing that protecting patient privacy is paramount.”

Larson’s framework would support the release of de-identified, aggregated data for research and development. However, those using the data would have to identify themselves and adhere to strict ethical practices. The framework would not require patient consent, meaning patients would not always be able to opt out of having their data used for AI development, but their privacy would have to be protected.
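To make the idea of “de-identified and aggregated” data more concrete, the sketch below shows, in Python, one hypothetical way a provider might strip identifiers and release only population-level counts. The record structure, field names, and functions are invented purely for illustration and are not part of the published framework.

```python
# Illustrative sketch only: a hypothetical de-identification and aggregation
# step of the kind the framework describes. All field names and the record
# structure are invented for demonstration.
from dataclasses import dataclass
from collections import Counter

# Hypothetical identifiers that would be stripped before sharing.
IDENTIFYING_FIELDS = {"patient_name", "medical_record_number", "date_of_birth", "address"}

@dataclass
class ImagingRecord:
    patient_name: str
    medical_record_number: str
    date_of_birth: str
    address: str
    modality: str      # e.g. "mammogram", "CT"
    finding: str       # e.g. "benign", "suspicious"

def deidentify(record: ImagingRecord) -> dict:
    """Return a copy of the record with all identifying fields removed."""
    return {
        field: value
        for field, value in vars(record).items()
        if field not in IDENTIFYING_FIELDS
    }

def aggregate(records: list[ImagingRecord]) -> Counter:
    """Count findings per modality across de-identified records, so only
    population-level information leaves the provider organization."""
    return Counter(
        (r["modality"], r["finding"]) for r in map(deidentify, records)
    )

if __name__ == "__main__":
    cohort = [
        ImagingRecord("Jane Doe", "MRN001", "1970-01-01", "1 Main St", "mammogram", "benign"),
        ImagingRecord("John Roe", "MRN002", "1980-02-02", "2 Oak Ave", "mammogram", "suspicious"),
    ]
    print(aggregate(cohort))
```

In practice, de-identifying imaging data is far more involved (removing burned-in text or facial features from scans, for example), but the principle is the same: identifiers are stripped at the provider before anything is shared.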

The article states that when data is used in this way, it is not the data itself, but the “underlying physical properties, phenomena and behaviors that they represent,” that are of primary interest.

The authors believe that it is in patients’ interest for researchers to be able to examine their clinical data in order to gain deeper insight into anatomy, physiology, and disease progression, but only if they are unable to identify any patients while doing so.

Under Larson’s framework, provider organizations would not be permitted to sell clinical data, but corporate organizations could profit from AI algorithms built on that data, as long as the profit comes not from the sale of the data but from the technology or activities developed as a result of it. Provider organizations would also be able to share clinical data with partners that provide financial support for their research, provided that support funds the research itself rather than buying access to the data.

Larson said, “We strongly emphasize that protection of patient privacy is paramount. The data must be de-identified. In fact, those who receive the data must not make any attempts to re-identify patients through identifying technology.”

Patient privacy would be achieved by eliminating all identifying information from the data. If identifying features remained visible in imaging scans, anyone using those images would need to notify the organization that shared them and effectively discard the data. This, as Larson stated, would “extend the ethical obligations of provider organizations to all who interact with the data.”

“We hope this framework will contribute to more productive dialogue, both in the field of medicine and computer science, as well as with policymakers, as we work to thoughtfully translate ethical considerations into regulatory and legal requirements.”

Dr. David B. Larson, Stanford University School of Medicine

The framework developed by Larson and his colleagues will be put into the public domain to allow other organizations and individuals to consider its potential as they work to answer some of the pressing questions around patient privacy and data protection in clinical AI technology and data sharing.

Source:

Researchers unveil framework for sharing clinical data in AI era. Eurekalert. Available from: https://www.eurekalert.org/emb_releases/2020-03/rson-ruf031720.php

Journal references:

Larson, D.B. et al. (2020). Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework. Radiology. DOI: https://doi.org/10.1148/radiol.2020192536

Langlotz, C.P. et al. (2019). A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology. DOI: https://doi.org/10.1148/radiol.2019190613


Written by

Lois Zoppi

Lois is a freelance copywriter based in the UK. She graduated from the University of Sussex with a BA in Media Practice, having specialized in screenwriting. She maintains a focus on anxiety disorders and depression and aims to explore other areas of mental health including dissociative disorders such as maladaptive daydreaming.

