The Bioethics of AI in the Healthcare Industry

Thought Leaders: Hugh Whittall, Director, The Nuffield Council on Bioethics
An interview with Hugh Whittall, Director of the Nuffield Council on Bioethics, conducted by Kate Anderton, BSc

What is artificial intelligence (AI)?

There is no universally agreed definition of AI. Broadly speaking, AI tends to refer to computing technologies that replicate or resemble processes and tasks associated with human intelligence, such as reasoning, sensory understanding, and interaction.


AI technologies work in different ways, but most use large quantities of data to produce an output. For example, machine learning, a type of AI that has been particularly successful in recent years, works by learning and deriving its own rules from data and experience.
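The idea of deriving a rule from data rather than hand-coding it can be sketched in a few lines. This is a toy illustration only (the values, labels, and the midpoint-of-means rule are invented for the example; real clinical machine learning uses far richer models and far more data):

```python
def learn_threshold(samples):
    """samples: list of (reading, label) pairs labelled 'normal'/'elevated'.
    'Learns' a decision rule from data: the midpoint of the two class means."""
    normal = [v for v, lab in samples if lab == "normal"]
    elevated = [v for v, lab in samples if lab == "elevated"]
    return (sum(normal) / len(normal) + sum(elevated) / len(elevated)) / 2

def classify(value, threshold):
    """Apply the learned rule to a new reading."""
    return "elevated" if value >= threshold else "normal"

# Hypothetical temperature readings with labels supplied by clinicians.
training = [(36.5, "normal"), (36.8, "normal"), (37.0, "normal"),
            (38.2, "elevated"), (38.6, "elevated"), (39.1, "elevated")]

t = learn_threshold(training)  # about 37.7, derived from the data, not hard-coded
print(classify(38.0, t))       # -> elevated
print(classify(36.9, t))       # -> normal
```

The point of the sketch is that no one wrote the threshold 37.7 into the program; it emerged from the examples, which is the essential contrast with conventional rule-based software.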

How is AI being used to advance the healthcare industry?

At the moment, most health-related applications of AI are at the research or early trial stage, and it is not yet clear how successful they will be in wider healthcare systems. In a recent briefing note, we highlighted several areas of clinical care where AI is thought to have strong potential, such as the analysis of medical images and scans for early signs of disease, or monitoring of patients’ vital signs for indications of deterioration.

Some healthcare providers are also testing AI systems to assist with administrative tasks such as scheduling, and as a first point of contact for health information and triage. There is hope that AI could help address challenges associated with the ‘care gap’ and ageing populations, and could assist people with chronic disease, disability, and frailty in the home.

However, there are both practical and ethical questions about how these technologies can and will work, including how to ensure privacy for users, and how to mitigate the potential loss of dignity and human contact if technologies are used to replace carers.

What are the limitations of AI and how might this affect the healthcare industry?

Most AI depends on large quantities of good quality digital data. Therefore, it will be essential to have an open and public discussion about what uses of data, especially data relating to health, are acceptable and trustworthy to people.

In the UK healthcare system, medical records are not yet fully digitised, and different systems and standards are used for data entry and storage, so this is also a potential obstacle for AI. A related challenge is that biases in the data used to ‘train’ AI can be reflected in its outputs, and many have voiced concerns about the possibility of error, discrimination, and inequality of access to the benefits of AI in healthcare.
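How bias in training data propagates to outputs can be shown with a deliberately simple sketch. Everything here is invented for illustration (the groups, baselines, and the average-plus-tolerance rule are not drawn from any real system or dataset):

```python
def learn_baseline(values):
    """'Trains' by averaging: readings far from the learned mean are flagged."""
    return sum(values) / len(values)

def flag_abnormal(value, baseline, tolerance=1.0):
    """Flag a reading as abnormal if it sits outside the learned tolerance band."""
    return abs(value - baseline) > tolerance

# Training data drawn almost entirely from one (hypothetical) group,
# whose typical readings cluster around 37.0.
training = [37.0, 36.9, 37.1, 37.0, 36.8]
baseline = learn_baseline(training)  # about 36.96

# A member of an under-represented group whose typical reading differs
# (say, around 38.3) is wrongly flagged, purely because the training
# data did not represent them.
print(flag_abnormal(38.3, baseline))  # -> True  (false alarm from skewed data)
print(flag_abnormal(37.1, baseline))  # -> False (well-represented readings pass)
```

The model is not malicious; it simply reproduces the composition of the data it was given, which is why the representativeness of training data is an ethical as well as a technical question.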


Clinical practice and care often involve complex judgments and abilities that AI is currently unable to replicate, such as contextual knowledge and the ability to read social cues, as well as genuine human compassion. These limitations have led many to conclude that, at least in the short term, AI should assist and complement, rather than replace, human roles and decision-making in healthcare.

Please describe the recent briefing note produced by the Nuffield Council on Bioethics.

Artificial intelligence (AI) in healthcare and research is the third in a new series of bioethics briefing notes by the Nuffield Council on Bioethics. Our briefing notes provide short, accessible summaries of particular medical or scientific developments, and the ethical and social issues arising from them.

Previous briefing notes have considered the search for a treatment for ageing, and whole genome sequencing of babies.

Which areas of development are the biggest cause for concern for the Bioethics Council and why?

The remit of the Nuffield Council on Bioethics is to identify and define ethical questions raised by recent developments in biological and medical research that concern, or are likely to concern, the public interest.

The possibilities of AI in healthcare and research have generated much hope and excitement, but also significant concerns and questions. Some of these are not new; the Council has a long-standing interest in the use of data that individuals might consider sensitive and private, and in the use of assistive technologies in healthcare.

But there are also issues unique to AI which have provoked much ethical and philosophical, as well as legal, debate. For example, the possibility that AI can assist with or make decisions that have significant consequences for individuals raises questions about the distribution of responsibility and authority, and about whose moral values and principles should guide decision-making.

What challenges do you think the UK government will face when trying to regulate AI in healthcare?

AI has applications in fields that are usually subject to regulation and guidelines, such as personal data, research, and healthcare. However, these established frameworks could be challenged by the fast-moving and entrepreneurial way AI is being developed and taken up.

Questions for governments include whether AI should be regulated as a distinct area, or whether existing regulations should be reviewed with the possible impact of AI in mind. There are tensions around the use of data that have been collected in a healthcare context, particularly when partnerships are struck between health providers and private companies.

Finding a way to encourage innovation while maintaining people’s trust in healthcare systems will be important to many governments, as will the need to ensure that the wider uses of AI are transparent, accountable, and compatible with public interests and expectations.

What can doctors and researchers do to ensure they remain ethical in the advent of AI in healthcare?

At this stage, it will be important for those working in areas where AI is being developed to be sensitive to its implications, by being attentive to and participating in wider discussions about how AI can support, rather than undermine, professional standards and public values.

What is the future for AI in healthcare?

As is often the case with new technologies, it is uncertain how AI will develop and be taken up in the future. Both the technology and the social context in which it emerges can change, interact and be subject to other influences.

While some claim that AI will revolutionize healthcare, others predict that it will soon fizzle out or be surpassed by other technologies. While some think AI will be able to develop ‘general’ intelligence similar to that of humans, applications currently being trialed in healthcare focus on narrowly defined tasks including diagnostics, health administration, and supporting patients with personalized health information.

Given the wide public interest, the level of public and private investment, and the potential of these technologies, it is important that we have the debate now around the ethical issues they raise.

Where can readers find more information?

About Hugh Whittall

Hugh is the Director of the Nuffield Council on Bioethics. In this role, he oversees all areas of the Council’s work and contributes to its long-term strategy. Before accepting this position in 2007, he held senior positions at the Department of Health, the Human Fertilisation and Embryology Authority, and the European Commission.

 
