AI system spots Parkinson’s signs in voice, walking and drawings

By merging voice instability, gait asymmetry, and tremor-driven handwriting changes into a single explainable AI framework, researchers show how digital biomarkers can move Parkinson’s detection closer to reliable real-world screening.

Study: Explainable multimodal feature fusion networks for Parkinson's disease prediction. Image credit: goodluz/Shutterstock.com

Recent advances in computing, especially the use of artificial intelligence, hold promise for increased accuracy and efficiency of medical diagnosis. A recent study published in the journal Frontiers in Digital Health presents a deep learning approach that uses multiple modalities of input data to improve the detection of Parkinson’s disease.

Digital biomarkers aim to catch early Parkinson's disease

Parkinson’s disease (PD) is a progressive neurodegenerative disorder. It manifests as motor impairments, including tremor, rigidity, gait abnormalities, handwriting difficulties, and slowed movement. It also presents with impaired cognition, problems with speech, and sleep issues. PD diagnosis is primarily clinical, based on neurological examination. The subjective nature of this process may increase the risk of misdiagnosis or of missed diagnosis, especially in early disease.

Artificial intelligence (AI) can help overcome these limitations by analyzing handwriting, gait, and speech for telltale signs of early dysfunction. These objectively measured digital biomarkers can help detect PD at early stages. AI-driven speech analysis has achieved up to 99% accuracy in controlled datasets. Similarly, gait-based analytics can discriminate between PD patients and healthy controls with up to 97% accuracy. Handwriting analysis has also achieved nearly 98% accuracy.

Despite this, each of these has significant problems when applied to the clinical context. For instance, speech analysis may be confounded by differences in accent, language, or background noise. Similar quality issues plague gait-based and handwriting-based detection systems. The former relies heavily on the proper use of high-quality sensors, whereas handwriting analysis is often based on experiments performed in controlled rather than real-world conditions. Thus, these unimodal systems are poorly generalizable and cannot be easily scaled.

AI models are also often poorly interpretable; they offer predictions without explaining how or why a decision was reached. This has led to the introduction of explainability mechanisms, exemplified in this case by SHapley Additive exPlanations (SHAP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Integrated Gradients (IG). When integrated into PD models, these allow clinicians to understand which attributes influenced the decision-making process. To date, however, the limited adoption of such tools has slowed clinical acceptance of AI-based detection systems.
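SHAP's attributions are Shapley values from cooperative game theory. As a conceptual illustration only, the sketch below computes exact Shapley values for a toy three-feature linear model by enumerating feature subsets, filling "missing" features with a background value; this is the idea the SHAP library approximates efficiently, not the study's implementation.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(predict, x, background, n_features):
    """Exact Shapley attributions for a single sample x.
    'Missing' features are filled with background values, the same
    convention interventional SHAP uses."""
    def value(subset):
        z = background.copy()
        idx = list(subset)
        z[idx] = x[idx]
        return predict(z)

    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for k in range(len(others) + 1):
            weight = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
            for S in combinations(others, k):
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy linear model: for f(z) = w @ z, the Shapley value of feature i
# is exactly w[i] * (x[i] - background[i])
w = np.array([2.0, -1.0, 0.5])
predict = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 1.0])
background = np.zeros(3)
phi = shapley_values(predict, x, background, 3)
```

By construction, the attributions sum to the difference between the model's output at x and at the background, which is what makes SHAP values additive and clinically readable.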

The current study sought to overcome these obstacles by using a multimodal deep learning framework that incorporates three modalities: gait, handwriting, and speech. This approach integrates complementary findings from multiple modalities, representative of the wide range of PD clinical features, into a single prediction. If one modality is unreliable or noisy, the other two may help strengthen overall classification performance.

Even so, explainability has lagged behind in multimodal frameworks, making them unpopular in clinical practice. In view of this gap, the researchers present a static early-feature fusion system. The model combines modality-specific features via feature concatenation, followed by XGBoost classification, thereby optimizing overall prediction performance. In addition, the model includes SHAP, Grad-CAM, and Integrated Gradients to ensure interpretability.
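The early-fusion step itself is straightforward to sketch. The snippet below concatenates hypothetical per-modality feature vectors into a single input and fits a gradient-boosted classifier, using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; all dimensions and data are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

rng = np.random.default_rng(0)
n = 200
# Hypothetical modality-specific embeddings; dimensions are illustrative only
speech_feats = rng.normal(size=(n, 16))    # e.g. pooled speech-network output
gait_feats = rng.normal(size=(n, 8))       # e.g. gait autoencoder bottleneck
writing_feats = rng.normal(size=(n, 12))   # e.g. pooled handwriting-network output
y = rng.integers(0, 2, size=n)             # toy PD (1) / control (0) labels

# Static early fusion: concatenate per-modality features into one vector
fused = np.concatenate([speech_feats, gait_feats, writing_feats], axis=1)

clf = GradientBoostingClassifier(random_state=0).fit(fused, y)
preds = clf.predict(fused)
```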

Inside the trimodal early fusion architecture

In this model, deep neural networks were used to process individual modalities via dedicated feature-extraction pipelines. For speech, log-Mel spectrograms were analyzed using EfficientNet-B0; for gait, temporal convolutional networks and autoencoders were used to extract vertical ground reaction force features; and for handwriting, spiral drawings were processed using ResNet-50. This was followed by static feature concatenation and classification with an XGBoost model. Explainable AI techniques were employed to make the model interpretable at both modality and feature levels.

For speech analysis, log-Mel spectrogram representations were used to capture vocal instability, pitch variation, and spectral features associated with PD. Using multiple vocal parameters improved prediction performance. Similarly, wearable sensor–derived gait signals, specifically vertical ground reaction force data from a public PhysioNet dataset, were analyzed to capture stride irregularities, asymmetry, and temporal instability.
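As a rough illustration of the speech front end, the sketch below computes a log-Mel spectrogram from a raw waveform using only NumPy; the sample rate, frame size, hop length, and Mel-band count are illustrative defaults, not the study's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(x, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Window the signal and take the power spectrum of each frame
    frames = np.stack([x[i:i + n_fft] * np.hanning(n_fft)
                       for i in range(0, len(x) - n_fft + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (frames, n_fft//2 + 1)
    # Build a triangular Mel filterbank, equally spaced on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return np.log(power @ fb.T + 1e-10)  # (frames, n_mels)

# One second of a 440 Hz tone at 16 kHz as a stand-in for a voice recording
t = np.arange(16000) / 16000.0
S = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t))
```

The resulting two-dimensional log-Mel image is what a convolutional network such as EfficientNet-B0 can then consume like any other image.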

For handwriting analysis, digitized spiral drawings were used to detect tremor-induced deviations, curvature changes, and micrographia. Grad-CAM visualizations highlighted regions of the spiral most influential in classification decisions.

Importantly, unlike several studies cited in the literature review, this framework did not incorporate cerebrospinal fluid biomarkers, neuroimaging, olfactory testing, sleep data, facial movement analysis, or finger-tapping assessments. The proposed system relied exclusively on speech, gait, and handwriting datasets.

Benchmark datasets validate multimodal performance

The system was evaluated using publicly available benchmark datasets: a spiral handwriting dataset (3,264 samples), the MDVR-KCL speech dataset (approximately 73 subjects), and the GAITPDB gait dataset (approximately 168 subjects). Fivefold stratified cross-validation was employed to ensure robust evaluation.
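The evaluation protocol can be sketched in a few lines with scikit-learn. The features and labels below are random placeholders standing in for the fused feature vectors; stratification ensures each fold preserves the PD/control class ratio.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))      # placeholder fused feature vectors
y = rng.integers(0, 2, size=120)    # placeholder PD / control labels

# Stratified 5-fold CV keeps the class ratio similar in every fold
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    fold_scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
```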

The trimodal fusion model achieved an accuracy of 92%, outperforming unimodal handwriting (91%), gait (90%), and speech (74%) models. It achieved a macro F1-score of 0.89, an area under the ROC curve (AUC) of 0.95, and an average precision of 0.96, with balanced sensitivity and specificity of approximately 90% and 89%, respectively.

In simpler terms, the combined model correctly classified roughly nine out of ten cases while maintaining a good balance between identifying people with Parkinson’s disease and avoiding false alarms.
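All four reported metrics can be computed from a set of predictions with scikit-learn; the labels and probabilities below are toy values chosen for illustration, not the study's outputs.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score,
                             f1_score, roc_auc_score)

# Toy ground-truth labels and predicted probabilities (not the study's data)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.3])
y_pred = (y_prob >= 0.5).astype(int)  # 0.5 decision threshold

acc = accuracy_score(y_true, y_pred)              # fraction classified correctly
macro_f1 = f1_score(y_true, y_pred, average="macro")
auc = roc_auc_score(y_true, y_prob)               # threshold-free ranking quality
avg_prec = average_precision_score(y_true, y_prob)
```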

Bootstrapped confidence intervals further supported the statistical robustness of these results. External validation experiments demonstrated similar classification patterns, although with slight performance variation attributable to dataset differences.
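A percentile bootstrap of this kind can be sketched as follows: resample the test set with replacement many times, recompute the metric on each resample, and read off the interval from the quantiles. The data below are synthetic, and the resample count and confidence level are illustrative defaults.

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a classification metric."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [metric(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Toy predictions: 90% of 100 synthetic labels classified correctly
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=100)
y_pred = y_true.copy()
flip = rng.choice(100, size=10, replace=False)
y_pred[flip] = 1 - y_pred[flip]

accuracy = lambda a, b: float(np.mean(a == b))
lo, hi = bootstrap_ci(y_true, y_pred, accuracy)
```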

The model performed better than unimodal systems and provided an interpretable AI-assisted framework. However, the fusion mechanism involved static concatenation rather than adaptive or reliability-based dynamic weighting, and the study did not experimentally simulate missing-modality scenarios. The authors also emphasize that while multimodal fusion improved robustness, performance was evaluated retrospectively on benchmark datasets rather than in prospective clinical trials.

Explainable AI strengthens Parkinson’s screening potential

The study presents an AI-based diagnostic system built on multimodal feature fusion that demonstrates solid performance and interpretability on benchmark datasets.

However, the authors acknowledge important limitations. The framework has not yet undergone prospective clinical validation, was evaluated only for binary classification (PD versus healthy controls), and did not include clinician-guided assessment of the explainability of its outputs. Additionally, modality-specific generalizability challenges remain, particularly for speech and gait data collected under different real-world conditions.

Future studies should involve neurologists and longitudinal analyses to establish the clinical validity of this framework, build trust, and ensure regulatory readiness. Lighter, deployment-oriented versions of the model, along with more adaptive multimodal fusion strategies, may further enhance real-world applicability.



Written by

Dr. Liji Thomas

Dr. Liji Thomas is an OB-GYN, who graduated from the Government Medical College, University of Calicut, Kerala, in 2001. Liji practiced as a full-time consultant in obstetrics/gynecology in a private hospital for a few years following her graduation. She has counseled hundreds of patients facing issues from pregnancy-related problems and infertility, and has been in charge of over 2,000 deliveries, striving always to achieve a normal delivery rather than an operative one.

Citations

Please use the following format to cite this article in your essay, paper or report:

  • APA

    Thomas, Liji. (2026, March 03). AI system spots Parkinson’s signs in voice, walking and drawings. News-Medical. Retrieved on March 03, 2026 from https://www.news-medical.net/news/20260303/AI-system-spots-Parkinsone28099s-signs-in-voice-walking-and-drawings.aspx.

