The value of pre-publication peer review

I’m not ready to give up pre-publication review.

I see a lot of discussion these days about the value of peer review. Are journals too selective? Are acceptance decisions arbitrary? Does peer review actually catch scientific mistakes or fraudulent practices? Wouldn’t it be better to just put everything out there, say on preprint servers, and separate the wheat from the chaff in post-publication review? I’m not quite ready yet to give up on pre-publication peer review. I think it serves a useful purpose, one I wouldn’t want to do away with. In the following, I discuss four distinct services that peer review provides and assess the value I personally assign to each of them.

Peer review screens out nonsense and pseudoscience

It’s important that somebody screen all potential scientific publications for actual scientific content. I don’t mind publishing null results, replication studies, or studies that present only a very minor advance. All of these works contain real science, and they may find some use at some point in the future. However, we must never mix science with pseudoscience. Someone has to ensure that whatever gets published in a scientific journal is not complete nonsense. Even the preprint server arxiv.org has some sort of screening and filtering system in place to hold back the crackpots. In most cases, nonsensical papers would be caught by the editor and not even sent out to review. Nevertheless, we can consider filtering out nonsense to be an essential service of the pre-publication review process.

Peer review catches major mistakes and/or fraud

Many people seem to think that it is the reviewers’ job to catch major mistakes and/or fraud. And when they fail to do so, that is taken as evidence that peer review doesn’t work. I don’t think we can put such a high burden on the reviewers. Ultimately, the burden of producing correct and genuine results lies with the author. Peer review operates under the assumption that fundamentally the authors are honest and reasonably capable scientists. If peer review does happen to catch a major issue with a paper, that’s great, but generally I think that post-publication review is the much better venue to address major flaws or scientific misconduct.

Peer review assesses novelty, potential impact, and fit with the journal’s scope

Whether reviewers (or editors) should consider novelty and impact, and whether journals should be selective at all, is probably the most contentious issue in peer review. Traditionally, this has always been part of peer review. However, there are now several journals that explicitly state review should only assess scientific soundness (e.g. PLOS ONE or PeerJ). I think there are valid arguments for both sides. On the one hand, it is imperative that we have publishing venues that will publish any scientifically sound study. Nobody benefits if a valid study is suppressed just because some reviewers didn’t find it interesting. If there’s no obvious scientific flaw, put it out there and let the readers (and Google) sort it out.

On the other hand, I think that more selective journals can provide value as well. In my mind, where science has gone off track is that the most selective journals (which are also considered to be the most prestigious ones, e.g. Nature, Science, Cell, PLOS Biology, PNAS) employ arbitrary selection criteria based primarily on the subjective goal of publishing “the best science.” As a consequence, whether I can publish in such journals depends much more on my marketing skills than on my scientific skills, and also on whether I’m working on a sexy study system.

By contrast, the journals in the next lower tier of selectivity usually employ more objective selection criteria, and those arguably provide real value. For example, I’m an Associate Editor for PLOS Computational Biology, a fairly selective journal. The main requirement for publication in PLOS Computational Biology is that you have produced high-quality computational work that yields a novel biological insight. In my mind, it is fairly straightforward to determine whether a paper satisfies that requirement or not. I also think that any capable computational biologist can clear that bar. As a consequence, I feel that we’re providing useful selectivity without generating excessive artificial scarcity or making highly arbitrary decisions. If I see that somebody has a couple of PLOS Computational Biology papers on their CV, I can reasonably assume that they are doing consistent, high-quality computational work leading to novel insights into biological systems.

Peer review helps authors improve their articles

In my mind, this last point is the most important one, and the reason why I’m not willing to give up pre-publication review in its entirety. In my experience as author, reviewer, and editor, the most common outcome of the review process other than “reject due to insufficient novelty” is “major revision.” The reviewers agree that the study has merit in principle, but they see a number of possible revisions that would improve the article. I have seen it countless times, both as author and as reviewer or editor, that a study was vastly improved after the first round of reviews. Sometimes reviewers catch an issue the authors hadn’t noticed, sometimes they have a really cool idea that takes the study to the next level, and sometimes they simply tell you that you have to work on your writing if you want to get your point across. In each case, this input is invaluable, and it improves the scientific literature tremendously. If we moved to a system that operated entirely on post-publication review, we would probably still see the same kind of comments from reviewers, but there would be very little incentive for the authors to go back and revise their papers accordingly.

One downside to this aspect of peer review is that reviewers sometimes keep insisting on changes that the authors don’t deem necessary or appropriate. This is another form of peer review gone wrong. Reviewers should make helpful suggestions, but they should not tell the authors how to write their paper. One solution to this issue is to make peer reviews public and to leave the ultimate decision of whether or not to publish with the authors, as Biology Direct does. Another possibility is to allow authors to opt out of re-review, as BMC Biology does.

Claus O. Wilke
Professor of Integrative Biology
