Research findings are typically labelled meaningful with the term "statistically significant". If a significant difference is observed, the difference between groups is taken to be real rather than due to chance.
Researchers are now calling for the concept of statistical significance to be abandoned, arguing that results should not be sorted into "significant" and "non-significant" bins. For example, studies from a few years ago suggested that consuming eggs, or dietary cholesterol, is harmless to the heart. A more recent study, however, reports a statistically significant finding that consuming two or more eggs per week raises the risk of early death from heart disease. Which of the studies should be believed? Statistics, say experts, can make or break a hypothesis based on significance alone.
Nicole Lazar, a professor of statistics at the University of Georgia and one of the experts calling for the ban, says, "The world is much more uncertain than that." Lazar co-authored a 17,500-word editorial in the latest issue of the journal The American Statistician, a special issue containing 43 articles on the topic. Yet another commentary along the same lines appeared this week in the journal Nature, with contributions from more than 850 researchers who argued that statistical significance should be done away with.
The significance test was developed in the early 20th century by the pioneering statistician R.A. Fisher, who introduced the p-value as the measure on which statistical significance is judged. In the years that followed, the threshold became fixed at 0.05: a result with a p-value above that cutoff was deemed non-significant. It was this verdict, the experts explain, that often determined whether a scientist's work was accepted for publication and whether he or she received research funding and grants.
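The mechanics of the 0.05 convention described above can be illustrated with a short sketch. The data, group sizes, and the `two_sample_z_test` helper below are hypothetical, and a simple normal approximation stands in for the exact tests researchers would actually use:

```python
import math

def two_sample_z_test(mean1, mean2, sd, n1, n2):
    """Two-sided p-value for a difference of two means, assuming a
    common known standard deviation (normal approximation)."""
    se = sd * math.sqrt(1.0 / n1 + 1.0 / n2)
    z = (mean1 - mean2) / se
    # Standard normal CDF via the error function: Phi(z) = 0.5*(1+erf(z/sqrt(2)))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical example: two groups of 50 with a modest mean difference.
p = two_sample_z_test(mean1=5.2, mean2=5.0, sd=1.0, n1=50, n2=50)
print(f"p = {p:.3f}")  # the continuous evidence: p = 0.317
print("significant" if p < 0.05 else "non-significant")  # the binary verdict
```

Note how a single fixed cutoff collapses a continuous measure of evidence into a yes/no verdict, which is exactly the dichotomy the critics object to.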
Ron Wasserstein, the executive director of the American Statistical Association, has supported this movement for some time. "It's really gotten stretched all out of proportion... Failure to make these changes are really now starting to have a sustained negative impact on how science is conducted. It's time to start making the changes. It's time to move on," he said. He explained that much interesting research gets rejected at journal editorial desks because of "non-significant" results.
Blake McShane, a statistician at the Kellogg School of Management at Northwestern University and co-author of the Nature commentary, said, "All statistics naturally bounce around quite a lot from study to study to study." This is a consequence of natural sampling variation between groups, and, he said, it is one reason why some studies cannot be replicated in other settings. Significance persists, he explained, because it offers an easy way to summarize and understand results.
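McShane's point about statistics bouncing around from study to study can be demonstrated with a small simulation. The effect size, sample sizes, and the `p_value` helper below are illustrative assumptions, not figures from any of the studies mentioned:

```python
import math
import random

def p_value(sample_a, sample_b):
    """Two-sided p-value for a difference in means (normal approximation
    with a pooled standard deviation); a rough sketch, not production stats."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    var = (sum((x - ma) ** 2 for x in sample_a) +
           sum((x - mb) ** 2 for x in sample_b)) / (na + nb - 2)
    se = math.sqrt(var * (1 / na + 1 / nb))
    z = (ma - mb) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
# Ten "replications" of the same study: a true but modest effect of 0.3 SD.
p_values = []
for study in range(10):
    control = [random.gauss(0.0, 1.0) for _ in range(40)]
    treated = [random.gauss(0.3, 1.0) for _ in range(40)]
    p_values.append(p_value(control, treated))

print([round(p, 3) for p in p_values])
# The p-values scatter widely, so some studies cross the 0.05 line and
# some do not, even though all sample from identical populations.
```

Under a hard 0.05 cutoff, some of these identical studies would be published as positive findings and others dismissed as failures, which is the replication problem McShane describes.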
The authors conclude by advising researchers: "...be sceptical, pick a good question, and try to answer it in many ways. It takes many numbers to get close to the truth."