AI involvement in medical decision-making elicits different reactions in people than the same decisions made by human doctors. A new study used stories describing medical cases to investigate in which situations this acceptance differs, and why.
People accept euthanasia decisions made by robots and AI less readily than those made by human doctors, finds a new study. The international study, led by the University of Turku in Finland, investigated people's moral judgements of end-of-life care decisions concerning patients in a coma, made by AI and robots as well as by humans. The research team conducted the study in Finland, Czechia, and Great Britain by presenting the research subjects with stories that described medical cases.
The project's Principal Investigator, University Lecturer Michael Laakasuo from the University of Turku, explains that the phenomenon where people hold some of the decisions made by AI and robots to a higher standard than similar decisions made by humans is called the Human-Robot moral judgement asymmetry effect.
"However, it is still a scientific mystery in which decisions and situations the moral judgement asymmetry effect emerges. Our team studied various situational factors related to the emergence of this phenomenon and the acceptance of moral decisions."
Michael Laakasuo, University of Turku
Humans are perceived as more competent decision-makers
According to the research findings, people were less likely to accept euthanasia decisions made by AI or a robot than by a human doctor, regardless of whether the machine acted in an advisory role or as the actual decision-maker. If the decision was to keep the life-support system on, there was no judgement asymmetry between the decisions made by humans and AI. In general, however, the research subjects preferred decisions where life support was turned off rather than kept on.
The difference in acceptance between human and AI decision-makers disappeared in situations where the patient in the story was awake and requested euthanasia themselves, for example by lethal injection.
The research team also found that the moral judgement asymmetry is at least partly caused by people regarding AI as a less competent decision-maker than humans.
"AI's ability to explain and justify its decisions was seen as limited, which may help explain why people accept AI into clinical roles less."
Experiences with AI play an important role
According to Laakasuo, the findings suggest that patient autonomy is key when it comes to the application of AI in healthcare.
"Our research highlights the complex nature of moral judgements when considering AI decision-making in medical care. People perceive AI's involvement in decision-making very differently compared to when a human is in charge," he says.
"The implications of this research are significant as the role of AI in our society and medical care expands every day. It is important to understand the experiences and reactions of ordinary people so that future systems can be perceived as morally acceptable."
Journal reference:
Laakasuo, M., et al. (2025). Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions. Cognition. https://doi.org/10.1016/j.cognition.2025.106177