
Science Needs to Look Inward to Move Forward

Robust science depends on encouraging and incentivising more open and transparent practices in research – now, metascientists are looking at what works and what doesn’t. In a piece originally published in Pursuit, Andrew Trounson reports on this new discipline, including the contributions of Professor Fiona Fidler and Professor Simine Vazire from the University of Melbourne.

About a year after she was appointed to a senior editorial role at an academic journal, psychology researcher Professor Simine Vazire was admonished for upsetting eminent researchers by ‘desk rejecting’ their papers.

She was shocked.

Greater transparency in research can ensure that research findings are given proper scrutiny. Photographer: Chris Liverani via Unsplash

Desk rejection is when a paper is declined by the editor before being sent out to reviewers – about 30 per cent of papers at this journal were typically desk rejected.

Professor Vazire was rejecting the papers because she believed they had serious flaws. But the committee that appointed her was worried that upsetting famous researchers could put the journal’s reputation at risk.

“I pointed out to them that they couldn’t exert this influence behind the scenes without announcing a new policy or having some scientific basis for it,” says Professor Vazire. “But the fact that they were so surprised by my resistance made me realise just how much this was the way things typically worked.”

Professor Vazire’s experience is part of a broader, long-running problem in research: large studies have found that in many disciplines a significant share of published research can’t be validated by follow-up studies – that is, the findings can’t be replicated, putting the results in doubt.

This has turned a spotlight on the fallibilities of journals, the peer review assessments they rely on, and the hyper-competitive world of academia, where researchers are often judged almost entirely on how much they publish and in which ‘reputable’ journals.

It means researchers and their institutions can be tempted to hype results.

“As an editor, I’ve had discussions with authors where I’ve told them I will only accept their paper if it is framed more cautiously and, on a couple of occasions, authors have simply refused and gone and published elsewhere,” says Professor Vazire.

In the hyper-competitive world of research there is always the temptation to over-promote results. Photographer: ThisisEngineering via Unsplash

Recent retractions by the highly reputable medical journals The Lancet and the New England Journal of Medicine, concerning research on potential COVID-19 medicines that relied on flawed data, have only highlighted the urgency of the problem.

Open science initiatives like sharing data and setting out a research plan ahead of doing the research (preregistration) seek to address these issues.

And alongside these developments, a new research discipline has also emerged – metascience.

“Metascience is a field that studies the norms, practices and incentives in science. It takes stock of new open science initiatives, and monitors and evaluates their impacts.

“It brushes up against the philosophy and sociology of science, but works in the service of science by seeking to ensure research is more robust,” says Professor Fiona Fidler, a reproducibility expert at the University of Melbourne.

Together, Professor Fidler and Professor Vazire have established a new research group at the University, MetaMelb. It is the largest metascience research group in Australia.

The group will study a range of metascience questions, across several disciplines including psychology, ecology and medicine, using a wide range of quantitative and qualitative approaches.

One such question is whether incentives for sharing data, like awarding open data badges, result in more reproducible outcomes. Another project looks at how exposing the flaws in science may be affecting the public’s view of, and trust in, science.

The MetaMelb research group’s website home page.

Fears that the credibility of science is at stake have been used in the past as a justification for not being upfront about problems.

But does science being more open and transparent really undermine public faith in science, or does it actually increase that faith?

“I’ve been in rooms where the consensus has been that we should sweep problems under the rug because of the risk of making people anti-science, but I suspect by far the bigger risk is not being upfront with the public,” says Professor Vazire.

Another large project already underway, funded by the US government’s Defense Advanced Research Projects Agency (DARPA), is an effort to efficiently crowdsource peer review from groups of experts.

The RepliCATS (Collaborative Assessment for Trustworthy Science) project has recruited 500 experts from around the world who, working collaboratively online in small teams, predict the replicability of 3000 research papers from the social sciences.

DARPA has recently expanded the project to urgently assess 100 papers related to COVID-19 social science research.

Yet another MetaMelb initiative is the development of concrete guidelines – a sort of checklist – that peer reviewers could use to assess research methods and findings more uniformly.

This would include encouraging peer reviewers to assess whether research is written up in what Professor Vazire calls an “intellectually humble” way. If a paper instead displays intellectual arrogance, that would be an immediate signal for peer reviewers to be wary, she says.

Researchers should aim to be intellectually humble and present their data as honestly as they can. Photographer: Chris Reid via Unsplash

“Intellectual humility is about ensuring that you give the people critiquing your work all the ammunition they need to find any flaws in your work,” says Professor Vazire.

“So it means that, in their introduction, for example, a researcher shouldn’t cherry-pick information depending on which side of a debate they are on, and that in their methods and results sections they shouldn’t mention only the results that best suit their argument.”

The checklist will also point reviewers to specifically assess certain aspects of the research, like whether the sample size justifies the conclusions or whether the research methods are rigorous.

“Peer reviewers are notorious for not agreeing but perhaps if they are asked specific questions we will get more agreement on the strength of a piece of research,” says Professor Vazire.

But for initiatives like these to work, incentives need to be put in place that reward peer reviewers, journals and researchers for being more transparent.

One idea is to make peer reviews public, acknowledging the contribution of the reviewer.

“The issue of creating the right incentives is an important area of research for us,” says Professor Fidler.

Researchers should encourage expert criticism of their own work. Photographer: sps universal via Unsplash

Ultimately, both believe that the right incentives and practices can be put in place to make science more transparent and robust. But the key will be changing the academic and institutional cultures and reward systems that are contributing to the problem.

“One of the biggest problems we have at the moment is the way institutions judge performance and promotion internally,” says Professor Fidler.

In the UK, efforts to ensure robust research have led to the creation of the UK Reproducibility Network, a consortium that has so far attracted 15 universities and colleges.

Professor Fidler is working with colleagues to establish a similar consortium in Australia.

For Professor Vazire, whether researchers and institutions will be prepared to embrace new practices and greater transparency will be crucial.

“Certainly things are changing – more research is being shared before peer review and publication, which provides the opportunity for wider scrutiny. And, honestly, social media platforms like Twitter are becoming increasingly important in scrutinising papers and identifying strengths and flaws.

“I’m confident the options will be there to make things better, but we need the research community to come together and actually take advantage of these new opportunities.”

Scientific journals rely on peer reviewers to ensure the strength of papers, but the process can be opaque and idiosyncratic. Photographer: ElasticComputeFarm via Pixabay
Feature image: Bristol Robotics Laboratory, Stoke Gifford, United Kingdom. Photographer: Louis Reed via Unsplash