How to produce robust, reliable and open research
For Open Access Week 2020, Dimity Flanagan (Manager, Scholarly Communications) is speaking with researchers across the University of Melbourne about why Open Access is important and the practicalities of making their research open.
Today we are speaking with Dr Hannah Fraser, a Research Fellow in reproducibility and reasoning at the University of Melbourne and President of AIMOS: the Association for Interdisciplinary Meta-research & Open Science. Until December, Hannah is the research coordinator for the repliCATS project, which aims to crowdsource predictions about the replicability of published research. After that, Hannah is starting a consultancy to help people make their science more open and reproducible in ways that fit with their current logistical or organisational constraints.
Q. How do you apply the concept of openness to your research and the outputs you produce?
I aim to make my research open enough that all researchers can access my research and critically assess the results. There’s lots that goes into this (including making data and analysis code openly available and publishing everything as freely available preprints). The two things I want to talk a little bit about are preregistering studies and thoroughly describing methods.
The idea of archiving the plan for research before it’s conducted has been around for ages. The process is mandated for clinical trials in many countries to ensure that unfavourable results aren’t suppressed by pharmaceutical companies. More recently it’s taken off (primarily in psychology) as a response to the recognition that many results in the literature are statistical artefacts caused by flexibility in data collection and analysis. Many fields like ecology (which is where I came from) have been slow to take up the practice but it is my favourite part of doing research. Preregistration helps you to carefully operationalise and plan your research and it ensures that you keep that original plan in mind as the research unfolds. It’s so easy to get carried away with interesting things that come up during the research and lose track of your original plan.
Thoroughly describing methods
I think the need to thoroughly describe your methods is among the least appreciated open science practices, probably because there’s an assumption that we’re already doing this, or that if we don’t, it will be picked up by peer review. Unfortunately, in my experience methods sections often leave out information that would be required to repeat the study, or even to accurately evaluate the methods. It is hard to give general rules on what a methods section should include, but it should give the reader enough insight to judge how reliable or generalisable the study is likely to be. I would suggest including everything you might be required to describe in your ethics application or preregistration, plus any procedures you would be inclined to include in your thesis. Some journals have word limits that restrict how much you can include in a methods section; in that case, I recommend including a full description of your methods in a repository (ideally alongside your analysis code and data) or as supplementary information.
Q. Why is openness so important for building trust in science?
I think the best way that researchers can build public trust in science is to make sure that they are conducting reliable, robust, generalisable science that proves to be useful in practice. This influences people’s trust in science, though I think that political and media representations of science also play a major part in whether the public trusts or distrusts science.
In terms of how to ensure your research is reliable, robust and generalisable, I consider the most important factors to be:
Carefully operationalising and planning your research to start with
It’s important to ensure that you’re collecting and analysing data in a way that answers the question you have. If that’s not possible, you can always ensure that you’re clear about precisely what question you will be able to answer, and record this, preferably in a date-stamped preregistration, so that you don’t accidentally drift away from this idea as you conduct the research. Reading hundreds of articles as part of the repliCATS project has shown me that most studies could have been operationalised or planned better.
Making sure you have enough data to be confident about your outcomes
If you have too little data, you are both less likely to find a true signal and more likely to find a false signal in your data. In order to be confident that you have enough data for the analysis you want to conduct, it is often necessary to conduct a power or precision analysis prior to collecting data.
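As a rough illustration of what a power analysis involves, a minimal sketch in Python using the normal-approximation formula for a two-group comparison of means (the effect size, alpha and power values below are placeholder choices for illustration, not recommendations for any particular study):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
    n ~ 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
    where d is the standardised effect size."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n)

# A "medium" standardised effect (d = 0.5) needs roughly 63 participants
# per group for 80% power at alpha = 0.05; halving the effect size
# roughly quadruples the required sample.
print(n_per_group(0.5))   # 63
print(n_per_group(0.25))  # 252
```

Dedicated tools (e.g. G*Power, or power functions in R or statsmodels) handle more realistic designs, but the basic logic is the same: decide in advance what effect you need to be able to detect, and work backwards to the sample size.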
Conducting sensitivity analyses
Often studies present the results of one statistical analysis as the answer to a research question. However, there are always multiple viable ways to approach an analysis. It’s good practice to use a few different analysis techniques to give you an idea of whether your result is robust to analytic decisions. These auxiliary sensitivity analyses may not make the cut for the article itself but should be included in supplementary material or in a (linked) repository.
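A minimal sketch of the idea, using made-up numbers: compute the same effect several defensible ways and check whether the estimates tell the same story. The data and the choice of summaries below are purely illustrative.

```python
from statistics import mean, median

# Toy paired difference scores; illustrative only, not real data.
diffs = [1.2, 0.8, 2.5, -0.3, 1.1, 0.9, 3.0, 0.4, 1.6, -0.1]

def trimmed_mean(xs, trim=0.1):
    """Mean after dropping the most extreme values at each end."""
    k = int(len(xs) * trim)
    xs = sorted(xs)
    return mean(xs[k:len(xs) - k] if k else xs)

# Three defensible summaries of the same effect; if they broadly agree
# in sign and size, the conclusion is robust to this analytic choice.
variants = {
    "mean": mean(diffs),
    "median": median(diffs),
    "trimmed mean (10%)": trimmed_mean(diffs),
}
for name, estimate in variants.items():
    print(f"{name}: {estimate:+.2f}")
```

Real sensitivity analyses would vary more consequential decisions too (outlier handling, covariates, model family), but the principle is the same: one headline number is much more convincing when its neighbours point the same way.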
Clearly describing how generalisable you expect your results to be
As the person who designed and conducted the research, you’re uniquely qualified to suggest how generalisable you expect the findings to be. For example, if I collected data on birds living in Victorian woodland environments, I would use my expertise to determine whether any conclusions I make should be restricted to birds in woodland environments in Victoria, or whether it is reasonable to expect the same result for birds in NSW woodlands or Victorian forests. This judgement call is an important part of being a researcher but it’s tricky because you can never be sure how far something will generalise until you test it.
Conducting research openly so that others can tell you’ve done 1–4.
The value and reliability of any science is determined by the choice of topic and the quality of the methods implemented.
The benefit of openness is that it provides the information required for other researchers to access your research and judge this for themselves.
Q. Many researchers currently make their papers OA through paying an Article Processing Charge, thus shifting the inequity of access from reading to publishing. How important is it to you that all aspects of the open science movement incorporate equity and inclusion?
I think it’s absolutely vital that all aspects of the open science movement incorporate equity and inclusion. However, I personally don’t believe that the current journal publishing system is capable of becoming equitable and inclusive. It’s a fundamentally broken system filled with commercial interests and perverse incentives. I publish my articles as preprints prior to submitting them to journals. Publishing a preprint on a server like PsyArXiv, EcoEvoRxiv, or bioRxiv is free and means that anyone can access the article for free.
Q. If any researchers were interested in becoming more engaged with community networks around open science or reproducibility, what suggestions would you give them?
Consider coming along to the Association for Interdisciplinary Meta-Research and Open Science conference this year (AIMOS2020)! It’s online and inexpensive, and we offer free registration for people who are unable to pay. You’ll hear from and interact with people at the cutting edge of meta-research and open science. This year’s AIMOS conference runs 3–4 December 2020. For further details please visit the conference page. You should also consider checking out The Australian and New Zealand Open Research Network (ANZORN) or getting in touch with an Open Science Ambassador.
This blog post is released under a CC BY license.