Scientists use facial recognition technology to develop tool for monitoring ICU patients' safety


A team of Japanese scientists has used facial recognition technology to develop an automated system that can predict when patients in the intensive care unit (ICU) are at high risk of unsafe behavior such as accidentally removing their breathing tube, with moderate (75%) accuracy.

The new research, being presented at this year's Euroanaesthesia congress (the annual meeting of the European Society of Anaesthesiology) in Vienna, Austria (1-3 June), suggests that the automated risk detection tool has potential as a continuous monitor of patient safety and could ease the staffing constraints that make it difficult to continuously observe critically ill patients at the bedside.

"Using images we had taken of a patient's face and eyes, we were able to train computer systems to recognize high-risk arm movement."

Dr Akane Sato from Yokohama City University Hospital, Japan, who led the research

"We were surprised about the high degree of accuracy that we achieved, which shows that this new technology has the potential to be a useful tool for improving patient safety, and is the first step for a smart ICU which is planned in our hospital."

Critically ill patients are routinely sedated in the ICU to prevent pain and anxiety, permit invasive procedures, and improve patient safety. Nevertheless, providing patients with an optimal level of sedation is challenging. Patients who are inadequately sedated are more likely to display high-risk behavior such as accidentally removing invasive devices.

The study included 24 postoperative patients (average age 67 years) who were admitted to the ICU at Yokohama City University Hospital between June and October 2018.

The proof-of-concept model was created using pictures taken by a camera mounted on the ceiling above patients' beds. Around 300 hours of data were analyzed to find daytime images of patients facing the camera in a good body position that showed their face and eyes clearly.

In total, 99 images were used for machine learning, in which an algorithm learns to analyze specific images from input data, in a process that resembles the way a human brain learns new information. Ultimately, the model was able to flag high-risk behavior, particularly movement around the subject's face, with high accuracy.
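The published summary does not specify the model architecture or training procedure. Purely as an illustration of how a small labeled image set of this kind might be used, the sketch below fine-tunes a pretrained image classifier to separate two hypothetical classes, "high_risk" and "safe". The directory layout, class names, and hyperparameters are assumptions for the example, not details from the study.

```python
# Hypothetical sketch: fine-tuning a pretrained CNN to classify bedside
# images as "high_risk" vs "safe". Folder names, classes, and settings
# are illustrative assumptions, not details reported by the researchers.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Resize and normalize images to match the pretrained backbone's input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes images are organized as images/high_risk/*.jpg and images/safe/*.jpg.
dataset = datasets.ImageFolder("images", transform=preprocess)
loader = DataLoader(dataset, batch_size=8, shuffle=True)

# With only ~100 labeled images, freeze the pretrained backbone and train
# just a new final layer for the two classes (transfer learning).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, a reported accuracy figure such as the team's 75% would be estimated on held-out images, for example via cross-validation, which is especially important with a dataset this small.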

"Various situations can put patients at risk, so our next step is to include additional high-risk situations in our analysis, and to develop an alert function to warn healthcare professionals of risky behavior. Our end goal is to combine various sensing data, such as vital signs, with our images to develop a fully automated risk prediction system."

Dr Akane Sato

The authors note several limitations, including that more images of patients in different positions are needed to improve the generalizability of the tool in real life. They also note that monitoring patients' level of consciousness may improve accuracy in distinguishing between high-risk behavior and voluntary movement.
