Open Science

An iconic finding in behavioral ecology fails to reproduce

[This post was originally posted on ecoevotransparency.org] Just how reproducible are studies in ecology and evolutionary biology? We don’t know precisely, but a new case study in the journal Evolution shows that even textbook knowledge can be unreliable. Daiping Wang, Wolfgang Forstmeier, and co-authors have convinced me of the unreliability of an iconic finding in behavioral ecology, and I hope their results bring our field one step closer to a systematic assessment of reproducibility.

Continue reading

A conversation - Where do ecology and evolution stand in the broader ‘reproducibility crisis’ of science?

[This post was originally posted on ecoevotransparency.org] In this post, I float some ideas I’ve had about the ‘reproducibility crisis’ as it is emerging in ecology and evolutionary biology, and how this emergence may or may not differ from what is happening in other disciplines, in particular psychology. Two other experts on this topic (Fiona Fidler and David Mellor) respond to my ideas and propose some different ideas as well.

Continue reading

Reproducibility Project - Ecology and Evolutionary Biology

[This post was originally posted on ecoevotransparency.org] The problem: As you probably already know, researchers in some fields are finding that it’s often not possible to reproduce others’ findings. Fields like psychology and cancer biology have undertaken large-scale coordinated projects aimed at determining how reproducible their research is. There has been no such attempt in ecology and evolutionary biology. A starting point: Earlier this year Bruna, Chazdon, Errington, and Nosek wrote an article citing the need to start this process by reproducing foundational studies.

Continue reading

Why ‘MORE’ published research findings are false

[This post was originally posted on ecoevotransparency.org] In a classic article titled “Why most published research findings are false”, John Ioannidis explains five main reasons for just that. These reasons are largely related to high ‘false positive reporting probabilities’ (FPRP) in most studies and to ‘researcher degrees of freedom’, which facilitate practices such as ‘p-hacking’. If you aren’t familiar with these terms (FPRP, researcher degrees of freedom, and p-hacking), please read the paper by Tim Parker and his colleagues.

Continue reading
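The link between researcher degrees of freedom and false positives can be illustrated with a small Monte Carlo sketch. This is not from the post above, just a hypothetical toy simulation: if a researcher tests several outcomes that are all truly null and reports any p < 0.05, the chance of at least one false positive grows rapidly with the number of tests (under the null, p-values are uniform on [0, 1]).

```python
import random

def family_wise_error(n_tests, alpha=0.05, n_sims=100_000, seed=42):
    """Fraction of simulated studies in which at least one of n_tests
    independent null tests yields p < alpha (a false positive).
    Analytically this is 1 - (1 - alpha)**n_tests."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Each null p-value is a uniform draw; flag the study if any is 'significant'.
        if any(rng.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

print(family_wise_error(1))   # close to 0.05
print(family_wise_error(10))  # close to 1 - 0.95**10, i.e. about 0.40
```

With a single pre-specified test the false positive rate stays near the nominal 5%, but ten unreported analysis choices push it toward 40%, which is the core of the p-hacking concern.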