Reproducibility

Prior probability and reproducibility

[This post was originally posted on ecoevotransparency.org]

Astronomer Carl Sagan famously said “Extraordinary claims require extraordinary evidence.” I think it’s useful to extend this to the distinctly less elegant “surprising findings are less likely to be true, and thus require a higher standard of evidence.”

I started thinking more about what influences the reliability of a scientific result when analyses of my post-doc data weren’t lining up with published findings from other studies of the same species. When I encountered this problem with reproducibility, the causes I first focused on were old standbys like multiple tests of the same hypothesis driving up type I error and the flexibility to interpret an array of different results as support for a hypothesis. What I wasn’t thinking about was low prior probability: if we test an unlikely hypothesis, support for that hypothesis (e.g., a statistically significant result) is more likely to be a false positive than if we’re testing a likely hypothesis. Put another way, a hypothesis that would be surprising if true is, in fact, less likely to be true if it contradicts well-supported prior empirical understanding or if it is just one of many plausible but previously unsupported alternate hypotheses.

Arguments that I’ve heard against taking prior probability into account are that it isn’t ‘fair’ to impose different standards of evidence on different hypotheses, and that it introduces bias. I think the risk of bias is real (we probably overestimate the probability of our own hypotheses being true), but I think the argument about fairness is misleading. Let’s consider an example where we have a pretty good idea of prior probability.
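To make that point concrete, here is a minimal numeric sketch (my own illustration, not the example developed in the full post) of how the prior probability of a hypothesis changes what a ‘significant’ result means, assuming independent tests with a conventional significance threshold of 0.05 and statistical power of 0.8:

# A rough sketch, not from the original post: the share of statistically
# significant results that are false positives, as a function of the prior
# probability that the hypothesis under test is true.
# Assumes alpha = 0.05 and power = 0.8.

def false_positive_share(prior, alpha=0.05, power=0.8):
    """Proportion of significant results expected to be false positives."""
    true_positives = power * prior          # true hypotheses correctly detected
    false_positives = alpha * (1 - prior)   # false hypotheses passing the threshold
    return false_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    share = false_positive_share(prior)
    print(f"prior = {prior:0.2f}: {share:.0%} of significant results are false positives")

Under these assumptions, a 50:50 prior means only about 6% of significant results are false positives, while a 1-in-100 prior pushes that figure to roughly 86%. That is the sense in which surprising findings demand a higher standard of evidence.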

Continue reading

An iconic finding in behavioral ecology fails to reproduce

[This post was originally posted on ecoevotransparency.org]

Just how reproducible are studies in ecology and evolutionary biology? We don’t know precisely, but a new case study in the journal Evolution shows that even textbook knowledge can be unreliable. Daiping Wang, Wolfgang Forstmeier, and co-authors have convinced me of the unreliability of an iconic finding in behavioral ecology, and I hope their results bring our field one step closer to a systematic assessment of reproducibility.

Continue reading

A conversation - Where do ecology and evolution stand in the broader ‘reproducibility crisis’ of science?

[This post was originally posted on ecoevotransparency.org]

In this post, I float some ideas that I’ve had about the ‘reproducibility crisis’ as it is emerging in ecology and evolutionary biology, and how this emergence may or may not differ from what is happening in other disciplines, in particular psychology. Two other experts on this topic (Fiona Fidler and David Mellor) respond to my ideas and propose some alternatives of their own. This process has led me to reject some of the ideas I proposed, and has given me what I think is a better understanding of the similarities (and differences) among disciplines.

Continue reading

Reproducibility Project - Ecology and Evolutionary Biology

[This post was originally posted on ecoevotransparency.org]

The problem

As you probably already know, researchers in some fields are finding that it’s often not possible to reproduce others’ findings. Fields like psychology and cancer biology have undertaken large-scale coordinated projects aimed at determining how reproducible their research is. There has been no such attempt in ecology and evolutionary biology.

A starting point

Earlier this year Bruna, Chazdon, Errington, and Nosek wrote an article arguing that this process should start by reproducing foundational studies. That echoes the early work of the reproducibility projects in psychology and cancer biology, which set out to reproduce those fields’ most influential findings. Bruna et al.’s focus was on tropical biology, but I say: why not the whole of ecology and evolutionary biology!

Continue reading

Why ‘MORE’ published research findings are false

[This post was originally posted on ecoevotransparency.org]

In a classic article titled “Why most published research findings are false”, John Ioannidis lays out 5 main reasons for just that. These reasons largely come down to the high ‘false positive reporting probability’ (FPRP) of most studies and to ‘researcher degrees of freedom’, which enable practices such as ‘p-hacking’. If you aren’t familiar with these terms (FPRP, researcher degrees of freedom, and p-hacking), please read Tim Parker and his colleagues’ paper.
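As a rough illustration of one of those ingredients (my own sketch, not taken from Ioannidis’ article or from Parker and colleagues’ paper): if researcher degrees of freedom let an analyst take several different ‘looks’ at the same data, each at a nominal alpha of 0.05, the chance that at least one comes up spuriously significant grows quickly. Treating the looks as independent is a simplification, but it conveys the direction of the effect.

# A rough sketch of how researcher degrees of freedom inflate false positives.
# Each analysis variant is treated as an independent test at alpha = 0.05,
# which is a simplification (real analysis variants are usually correlated).

alpha = 0.05
for k in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:>2} analysis variants -> "
          f"{p_at_least_one:.0%} chance of at least one spurious significant result")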

Continue reading

Replication - step 1 in PhD research

[This post was originally posted on ecoevotransparency.org]

Here are a few statements that won’t surprise anyone who knows me. I think replication has the potential to be really useful. I think we don’t do nearly enough of it, and I think our understanding of the world suffers from this rarity. In this post I try to make the case for the utility of replication based on an anecdote from my own scientific past.

Continue reading