Why not to (over)emphasize statistical significance

European Journal of Endocrinology
Correspondence should be addressed to O M Dekkers; Email: o.m.dekkers@lumc.nl

P values should not merely be used to categorize results as significant or non-significant. This practice disregards clinical relevance, confuses non-significance with the absence of an effect and underestimates the likelihood of false-positive results. Rather than using the P value as a dichotomizing instrument, P values and the confidence intervals around effect estimates should be used to place research findings in context, thereby genuinely taking into account both clinical relevance and uncertainty.
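As a hypothetical illustration (the numbers and study labels below are assumed, not taken from the article), the following Python sketch contrasts two studies with the same effect estimate but different precision: a dichotomous significance label would suggest the studies disagree, whereas the confidence intervals show they estimate the same effect with different degrees of uncertainty.

```python
# Hypothetical sketch: two studies with the same effect estimate but different
# precision. The smaller study is "non-significant", yet its confidence interval
# remains compatible with a clinically relevant effect, so non-significance
# should not be read as evidence of no effect.
import math


def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def summarize(effect: float, se: float, label: str) -> None:
    """Print the two-sided P value and 95% CI for a normally distributed effect estimate."""
    p = 2.0 * (1.0 - normal_cdf(abs(effect / se)))
    lo, hi = effect - 1.96 * se, effect + 1.96 * se
    print(f"{label}: effect = {effect:.2f}, 95% CI ({lo:.2f} to {hi:.2f}), P = {p:.3f}")


# Identical assumed effect size; only the standard error differs.
summarize(effect=0.50, se=0.30, label="Small study")  # P ~ 0.10, wide CI crossing 0
summarize(effect=0.50, se=0.15, label="Large study")  # P < 0.001, narrow CI excluding 0
```

The interval, not the significance label, conveys both the estimated effect and how precisely it is known, which is the contextual reading of results the abstract argues for.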

Figure: Graphical representation of effects of three hypothetical studies.

