Why a major hydroxychloroquine study was retracted over statistical misuse

A high-profile HCQ study that claimed 16,990 COVID deaths has been debunked for misusing data and ignoring dose effects, highlighting why scientific publishing needs a transparency overhaul.

Study: Hydroxychloroquine use during the first COVID-19 wave: a case study highlighting the urgent need to enhance research practices within the publication ecosystem. Image Credit: SNEHIT PHOTO / Shutterstock

In a recent study in the journal Archives of Public Health, researchers reexamined the methodological practices and outcomes of Pradelle et al.'s widely publicized, now-retracted hydroxychloroquine (HCQ) study. That study fueled the debate on the use of antirheumatic drugs by claiming that HCQ was associated with approximately 16,990 deaths during the first wave of the COVID-19 pandemic. Subsequent critiques demonstrated substantial methodological and interpretative flaws in Pradelle et al.'s work, leading to its eventual retraction. The process, however, lacked transparency: detailed explanations for the retraction and the related correspondence were never made public.

The present study critiques the methodological claims and the analytical approach applied to Pradelle et al.'s dataset, highlighting significant flaws and using these findings to illustrate ongoing challenges in scientific publishing.

Background – The HCQ Debate

The unprecedented growth in global internet access has facilitated the widespread distribution of scientific findings through online social networks and media platforms, often shaping public opinion, individual behaviors, and, in turn, policy decisions. This places an implicit responsibility on scientists to maintain the highest standards of methodological rigor. Despite this, more than 10,000 publications are retracted each year over concerns about the reliability and accuracy of their data.

Not only do these retractions represent a substantial loss of funding and research effort, but their erroneous findings, once disseminated, can be challenging to reverse. The present study leverages the 'Lancet Gate' debate to highlight this point. The discussion centers on a publication in The Lancet concerning hydroxychloroquine (HCQ), an antimalarial drug that was being tested as a treatment for coronavirus disease 2019 (COVID-19). While widespread scientific outcry led to the paper's retraction, several governments had already cited its findings in shaping their public policy on HCQ use.

The debate escalated further when, in January 2024, Pradelle et al. published meta-analyses estimating the death toll of HCQ's compassionate use during COVID-19's first wave. The study, which claimed that 16,990 individuals may have died following HCQ use, drew on data from Belgium, France, the USA, Spain, Turkey, and Italy, and attracted both widespread media coverage and policy attention. While the publication was eventually retracted for "lack of sufficiently reliable data" and "questionable assumptions," the damage was already done.

"The aim of this article is to address the significant concerns surrounding the transparency and integrity of scientific publishing, particularly in the context of the retracted article by Pradelle et al. and the connected papers, and to point out weaknesses of the current publication ecosystem to prevent misinformation and maintain public trust in scientific institutions."

Methodological flaws

The first methodological flaw explored in this critique concerns Pradelle et al.'s estimate of in-hospital mortality. The publication estimated that more than 16,990 individuals may have died from compassionate HCQ use, yet presented this figure without corresponding sensitivity analyses or dose-subgroup corrections, undermining its reliability. More problematically, the estimator for HCQ-related mortality (an odds ratio [OR]) was borrowed from Axfors et al.'s prior meta-analysis and was derived mainly from high-dose randomized controlled trials. Pradelle et al. nevertheless applied this single effect size to all patient groups regardless of the dose actually received, without accounting for the dose-dependency of the effect or conducting robustness checks on its validity.
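To make the dose-stratification concern concrete, here is a minimal sketch of the arithmetic behind an attributable-death estimate. All numbers (patient counts, baseline mortality, odds ratios) are hypothetical and are not taken from Pradelle et al. or Axfors et al.; the point is only that applying one pooled OR to every patient can yield a very different total than applying dose-stratified ORs.

```python
# Illustrative sketch with hypothetical numbers: converting an odds ratio
# into excess deaths, with and without dose stratification.

def excess_deaths(n_exposed, baseline_risk, odds_ratio):
    """Excess deaths implied by an odds ratio applied to a baseline mortality risk."""
    odds0 = baseline_risk / (1 - baseline_risk)      # baseline odds
    risk1 = (odds_ratio * odds0) / (1 + odds_ratio * odds0)  # risk under exposure
    return n_exposed * (risk1 - baseline_risk)

baseline = 0.10  # assumed 10% in-hospital mortality without HCQ

# One pooled OR (derived mainly from high-dose trials) applied to everyone:
pooled = excess_deaths(100_000, baseline, odds_ratio=1.11)

# Dose-stratified ORs (hypothetical): most patients received lower doses,
# where the sketch assumes no mortality signal.
stratified = (excess_deaths(80_000, baseline, odds_ratio=1.00)   # low dose
              + excess_deaths(20_000, baseline, odds_ratio=1.30))  # high dose

print(round(pooled), round(stratified))
```

Under these assumed inputs the pooled-OR approach attributes substantially more deaths than the stratified one, which is the structural issue the critique raises.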

The present critique further addresses the importance of distinguishing between statistical and clinical significance. It underscores the misapplication of effect sizes, a lack of sensitivity analyses, and the absence of subgroup estimations as cumulative factors invalidating the real-world clinical reliability of Pradelle et al.'s findings.

The reanalysis found that lower-dose HCQ regimens showed no clear evidence of increased mortality, while only higher doses were associated with a possible increase in risk. Importantly, sensitivity analyses revealed that the statistical conclusions depended heavily on a single large study, raising concerns about the robustness of the original findings.
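The single-study dependence described above can be illustrated with a simple leave-one-out check on a fixed-effect (inverse-variance) meta-analysis of log odds ratios. The study values below are hypothetical, chosen only so that one large trial dominates the pooled estimate; they are not the trials analyzed by the authors.

```python
# Minimal leave-one-out sensitivity check for a fixed-effect meta-analysis
# of log odds ratios with inverse-variance weighting. Hypothetical data.
import math

# (log_or, variance) per study; the first entry mimics a single large trial.
studies = [(math.log(1.15), 0.002),
           (math.log(0.98), 0.05),
           (math.log(1.02), 0.06),
           (math.log(0.95), 0.08)]

def pooled_or(data):
    """Inverse-variance pooled odds ratio."""
    weights = [1 / v for _, v in data]
    log_pool = sum(w * lo for (lo, _), w in zip(data, weights)) / sum(weights)
    return math.exp(log_pool)

print("all studies:", round(pooled_or(studies), 3))
for i in range(len(studies)):
    rest = studies[:i] + studies[i + 1:]
    print(f"dropping study {i}:", round(pooled_or(rest), 3))
```

With these inputs, dropping the dominant first study flips the pooled estimate from harm (OR above 1) to no harm (OR below 1), which is exactly the kind of fragility a leave-one-out sensitivity analysis is designed to expose.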

"As seen in multiple countries during the COVID-19 pandemic, the use of HCQ was highly variable in terms of dosage, patient selection, and co-administration with other treatments, factors that were inadequately accounted for in the meta-analysis driving the conclusions of Pradelle et al. Therefore, the reported statistical associations do not necessarily reflect the true benefit or harm of HCQ in clinical practice, further reinforcing the need for rigorous methodological standards and cautious interpretation of statistical findings in shaping public health policies."

These findings reinforce the need for authors to take responsibility for critically evaluating their data sources and the assumptions embedded in their statistical models. Statistical methodology must become more transparent before science and medicine can progress further and the spread of misinformation can be halted.

Beyond methodological critique, the study highlights broader systemic issues in scientific publishing, including the rise of fraudulent publishing practices, reviewer fatigue, predatory journals, "paper mills," and the erosion of trust in scientific institutions.

Future recommendations

To counter ongoing threats to scientific integrity, the study makes recommendations centered on reproducibility and on urgently reforming an eroding peer-review process to make it more transparent and accountable. It underscores the potential of open science practices in developing effective solutions.

"Platforms such as the Open Science Framework (OSF), Zenodo, Dryad, and Figshare are examples of robust infrastructures that ensure scientific material remains available for scrutiny, reanalysis, and further research.

"Open peer review models, where reviewer reports and identities are disclosed, could also improve the quality of evaluations and develop a more constructive and accountable review process."

The article further recommends incentives for peer reviewers, such as Continuing Medical Education (CME) credits, public acknowledgment, and opportunities for professional advancement, as well as the adoption of open data and code sharing to improve reproducibility.

These reforms, among others, are crucial for fostering reviewer participation, raising the rigor of the peer-review process, and enhancing overall transparency for a safer and healthier tomorrow.


Written by

Hugo Francisco de Souza

Hugo Francisco de Souza is a scientific writer based in Bangalore, Karnataka, India. His academic passions lie in biogeography, evolutionary biology, and herpetology. He is currently pursuing his Ph.D. from the Centre for Ecological Sciences, Indian Institute of Science, where he studies the origins, dispersal, and speciation of wetland-associated snakes. Hugo has received, amongst others, the DST-INSPIRE fellowship for his doctoral research and the Gold Medal from Pondicherry University for academic excellence during his Masters. His research has been published in high-impact peer-reviewed journals, including PLOS Neglected Tropical Diseases and Systematic Biology. When not working or writing, Hugo can be found consuming copious amounts of anime and manga, composing and making music with his bass guitar, shredding trails on his MTB, playing video games (he prefers the term ‘gaming’), or tinkering with all things tech.

