Why Peer-Review Fails

Most scientific journals – and probably every reputable one – use peer-review to ensure the quality of their articles. This is what we call the “gold standard” for scientific publishing.

If you, an up-and-coming scientist, want your work to be accepted by the community, it must pass the rigorous assessment of the most respected experts in the field. And with their feedback, every last error will be eradicated and the eventual publication will accurately reflect our best insights.

But here is the problem: Peer-review is bad.

Image via Pixabay

To be fair, this is an overstatement – but there are numerous aspects of bad science that peer-review is not equipped to handle properly, some of which I want to introduce you to in the following paragraphs.

Peeking through blindness

In order to have an unbiased assessment of an article’s quality, we wish for peer-review to be double-blind: neither the author(s) nor the reviewers know each other’s identity.

However, as science becomes both more specialised and more collaborative, finding impartial reviewers becomes a challenge – researchers exchange notes on current projects, preliminary findings are presented at conferences, or all of the relevant experts might even be part of the same project.
Interestingly, numerous journals therefore invite authors to suggest experts to review their submitted papers – which, as analyses show, increases the likelihood of acceptance. Not the best upshot for an unbiased evaluation, right?

Moreover, many sciences show a rise in pre-prints: manuscripts that are made openly accessible before or during peer-review to increase and accelerate the distribution of knowledge. There are many arguments for why pre-prints benefit the scientific process, but they undoubtedly make blind review impossible.
When peer-review is only single-blind (i.e., reviewers know the authors’ identity), biases show up – it is no longer the research that is judged, but prestige and status. The linked study shows a noticeable tendency for reviewers to accept papers more readily when the authors’ institutions are more prestigious.

That means that in peer-review, research is often not assessed purely on its merits; unrelated factors play a role. This certainly shapes the published literature – but on top of reviewer biases, the published literature is plagued by other problems.

Fake it ‘til you make it

The bigger issue is of a different nature: shockingly, researchers are not always honest.

Although this is (hopefully) not a widespread issue, some people in academia do not conduct honest research but fabricate data to win the publishing game. Many of these cases are eventually uncovered – but only after a long process of proving misconduct. They all have one thing in common: the issues are caught post hoc, long after the review process has concluded.
Some individuals managed to amass an impressive number of papers before they were caught (e.g., Diederik Stapel, a social psychologist, with 58 papers that had to be retracted). Interested readers can find a list of high-scorers here.

On top of that, the website RetractionWatch offers a comprehensive collection of retracted papers (over 20,000 to date), and its blog posts expand on the background and the investigations – though the collection ranges from major cases of fraud to retractions based on small errors.
And while cases of outright fraud are horrendous, they are comparatively rare – a bigger issue is genuine errors that are never caught.

Error detection

The major problem is that peer-review does not reliably catch errors in the research itself. While reviewers often check cited papers or the argumentative rigor, the specific analyses and reported data are frequently just taken at face value.

You might believe that researchers know what they are doing, they would not be in that position otherwise…

But interestingly (or rather shockingly), this is often not the case. Especially with statistics, a surprisingly large number of papers contains plainly impossible statistical results. For instance, the validation of StatCheck, a program that only checks whether the statistics reported in a paper are consistent with each other, found at least one error in half of over 250,000 psychology papers.
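The core idea behind such a check – recompute the p-value from the reported test statistic and see whether it matches the reported p – fits in a few lines. The sketch below is my own illustration, not StatCheck’s actual code: it handles only a z-test (the real program parses APA-style t, F, r, and χ² results from paper text), and the function names and rounding rule are assumptions.

```python
import math

def two_tailed_p_from_z(z: float) -> float:
    # Two-tailed p-value under a standard normal:
    # p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

def is_consistent(z: float, reported_p: float, decimals: int = 3) -> bool:
    # Call a report "consistent" if the recomputed p, rounded to the
    # reported precision, matches the reported p at that precision.
    return round(two_tailed_p_from_z(z), decimals) == round(reported_p, decimals)

# z = 1.96 corresponds to a two-tailed p of about .050, so:
print(is_consistent(1.96, 0.05))  # True  – statistic and p agree
print(is_consistent(1.96, 0.02))  # False – an "impossible" combination
```

Nothing here requires access to the raw data – exactly why it is striking that such internal inconsistencies survive peer-review so often.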

The plain existence of such (and more) errors in the body of published literature tells us that something is fundamentally wrong with peer-review today.

 

These are only some of the issues academia faces around peer-review – many more could be raised or fleshed out in more detail. Now, given these apparent major flaws, does all of that mean we should get rid of peer-review?

I do not think so, because peer-review still adds a valuable check on research before it gets published. But we need to take care when reading studies – there might be something wrong with what is presented, and everything has to be seen through a critical lens. And more broadly, the culture around peer-review needs to change to be better equipped to sort out such bad research.

But that will not be solved by an undergraduate’s blog post, which is why I will leave it at that.


8 Responses to “Why Peer-Review Fails”

  1. Leonhard Volz says:

    Thank you!
    That is a good question. I think many issues have been around for ever, but are amplified today (or could be structured better). The current way of peer-review was designed for a different era of science & may have been better suited for that time, but is not the best fit for today anymore.
    So the problem might be more that we do not improve it even if we see room for improvement.

  2. Leonhard Volz says:

    Glad that you enjoyed it!
    Yes, that is very true – the reviewer must be engaged to make a helpful contribution.

  3. Leonhard Volz says:

    Thank you! Yeah, that is true – but also true for pretty much everywhere, unfortunately.

  4. Leonhard Volz says:

    Thanks, that is a very good question. I think that it is primarily the way peer-review is practiced right now (which is connected to other problems in science). In my opinion, there should be much more attention drawn to it with thorough checks of the cited research, data & analysis. But that would require funding for such work and a more open culture around publishing (and less pressure to publish as much as possible).
    So there are issues around it, but they can be overcome with fundamental changes to the structure around peer review – some form of peer-review will always be crucial, though.

  5. Olivia says:

Interesting post! Recently I have been using older papers (from like the 1950s) for my assignments because I can’t find any more recent research. Your post has got me thinking about whether the problems with peer review have always been there or if it’s a more modern problem thanks to our more connected and global society. What do you think?

  6. shahada says:

I enjoyed reading your blog. As a research student I do love peer review; it sometimes gives good feedback on what we could fix, or what looks good. Again, it highly depends on who is giving the feedback and how much interest they have in your article.

  7. Jun.P says:

    Good issue being pointed out in this blog. As far as I’m aware, Australia is also a victim of unreliable research. I’m pretty sure there have been cases in our uni as well, unfortunately.

  8. Naushad Talati says:

    This is worth drawing attention to so thanks for writing about it. But can peer review be treated as part of the problem or a solution that isn’t used properly?