Many people perceive the use of facial analytics in healthcare settings as a potential privacy threat


Uses of facial images and facial recognition technologies – to unlock a phone or to pass through airport security – are becoming increasingly common in everyday life. But how do people feel about using such data in healthcare and biomedical research?

In a survey of more than 4,000 US adults, researchers found that a significant proportion of respondents (15-25 percent) considered the use of facial image data in healthcare unacceptable across eight different scenarios. Combined with those who were unsure whether such uses were acceptable, roughly 30-50 percent of respondents indicated some degree of concern about the use of facial recognition technologies in healthcare.

While the majority found it acceptable to use facial image data in some cases – such as to avoid medical errors, for diagnosis and screening, or for security – more than half of respondents did not accept, or were uncertain about, healthcare providers using this data to monitor patients' emotions or symptoms, or for health research.

In the biomedical research setting, most respondents were equally worried about the use of medical records, DNA data and facial image data in a study.

While respondents were a diverse group in terms of age, geographic region, gender, racial and ethnic background, educational attainment, household income, and political views, their perspectives on these issues did not differ by demographics. Findings were published in the journal PLOS ONE.

"Our results show that a large segment of the public perceives a potential privacy threat when it comes to using facial image data in healthcare. To ensure public trust, we need to consider greater protections for personal information in healthcare settings, whether it relates to medical records, DNA data, or facial images. As facial recognition technologies become more common, we need to be prepared to explain how patient and participant data will be kept confidential and secure."

Sara Katsanis, Study Lead Author and Head of Genetics and Justice Laboratory, Ann & Robert H. Lurie Children's Hospital of Chicago

Katsanis is also a research assistant professor of pediatrics at Northwestern University Feinberg School of Medicine.

Senior author Jennifer K. Wagner, Assistant Professor of Law, Policy and Engineering in Penn State's School of Engineering Design, Technology, and Professional Programs, adds: "Our study offers an important opportunity for those pursuing possible use of facial analytics in healthcare settings and biomedical research to think about human-centeredness in a more meaningful way. The research that we are doing hopefully will help decision-makers find ways to facilitate biomedical innovation in a thoughtful, responsible way that does not undermine public trust."

The research team, which includes co-authors with expertise in bioethics, law, genomics, facial analytics, and bioinformatics, hopes to conduct further research to understand the nuances where public trust is lacking.

Journal reference:

Katsanis, S. H., et al. (2021). A survey of U.S. public perspectives on facial recognition technology and facial imaging data practices in health and research contexts. PLOS ONE. https://doi.org/10.1371/journal.pone.0257923

