By SORTEE | November 15, 2021
[SORTEE member voices is a weekly Q&A with a different SORTEE member]
Name: Malika Ihle.
Date: 02 July 2021.
Position: Reproducible Research Oxford coordinator.
Research and/or work interests: I’m a former behavioral ecologist who worked on mate choice and extra-pair behavior. I am now promoting open research across disciplines at the University of Oxford by providing training, building communities, and liaising with stakeholders to inform the design of incentives and policies. I am also acting as the local network lead of the UK Reproducibility Network (UKRN).
What’s an ‘ORT’ subject or practice that you think deserves more attention?
I hope Registered Reports will become widespread for all experimental research. When we conduct an experiment, we explicitly tell the reader that we had a hypothesis and experimental design planned a priori, and that the analyses we perform only had a 5% chance of leading to a false-positive finding. Why not adopt this practice to make that claim completely true, eliminating (conscious or unconscious) p-hacking, confirmation bias, HARKing, selective attention bias, publication bias, and so on, by submitting the introduction and methods before collecting the data? I haven’t come across any logical reason not to adopt this practice for experimental research, and logistical reasons (e.g. a long wait for review before starting data collection) are on their way to being alleviated (e.g. you can schedule your submission to Peer Community In Registered Reports so that reviewers are already lined up and ready to turn around your submission within a short time frame). What I personally find enticing about this format is that you have to figure out your statistics before collecting real data (for instance by simulating data and analyses, and by getting feedback from colleagues, a statistician, or the peer reviewers), ensuring you don’t waste any resources (money, time, and sometimes animal lives) collecting data in a way that would later turn out to be suboptimal for powerful statistical analyses.
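The "simulate before you collect" idea mentioned above can be made concrete with a small power simulation. The sketch below is a hypothetical illustration (not from the interview): it assumes a simple two-group design with normally distributed data (sd = 1, so the effect size is in Cohen's d units) and uses a normal approximation to Welch's t-test, so it needs only the Python standard library.

```python
import random
import statistics

def simulate_power(n_per_group, effect_size, n_sims=2000, seed=42):
    """Estimate statistical power for a two-group comparison by simulation.

    Hypothetical sketch: data are drawn from normal distributions with
    sd = 1, so `effect_size` is the standardized mean difference (Cohen's d).
    Significance is judged with a crude normal approximation (|t| > 1.96,
    i.e. two-sided alpha ~ 0.05).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treatment = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        # Welch's t statistic for unequal-variance two-sample comparison
        mean_diff = statistics.mean(treatment) - statistics.mean(control)
        se = (statistics.variance(control) / n_per_group
              + statistics.variance(treatment) / n_per_group) ** 0.5
        if abs(mean_diff) / se > 1.96:
            hits += 1
    return hits / n_sims

# Running this for a range of sample sizes before submitting a Registered
# Report shows how many subjects you would need to detect a plausible effect.
print(simulate_power(n_per_group=20, effect_size=0.2))  # small effect: low power
print(simulate_power(n_per_group=20, effect_size=1.0))  # large effect: high power
```

In a real planning exercise you would replace the normal draws with a generative model matching your actual design, and run your full planned analysis on each simulated dataset.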
What do you now know about the way science gets done that you would have found surprising before you started your training as a scientist?
When I began my scientific journey, I thought rigor, reliability, reproducibility, and replicability were what characterized science by definition (as opposed to anecdotes, coincidences, and beliefs), and that these were the criteria scientists were judged on. The sad reality is that scientists are currently judged on the number of papers they produce, their storytelling, and the prestige of the outlets they publish in. On the one hand, this breeds a research culture centred around unhealthy competition (as opposed to collaboration), which in turn fuels bullying, burnout, and poor mental health. On the other hand, the emphasis on “getting things published” over “getting things right” sometimes leads researchers to “cut corners”, favouring quantity at the expense of quality, thereby contributing to the replicability crisis observed in many fields. The overall outcome is a vicious circle in which an unhealthy research culture and the replicability crisis feed each other. While I loved the research I did when I was an academic, I found the postdoc situation stressful, and I didn’t want to have to “play the game” and compromise my integrity. I now have the opportunity to work full-time on helping to change the research culture and promote open research practices at the level of a university, which I find incredibly impactful and rewarding.