A solution to the reviewer problem

Not getting anyone to agree to review is a growing concern for journal editors. Often editors have to send requests to 10-20 potential reviewers before one or two agree, quite apart from the time commitment and the repeated reminders required. Editors therefore have a tough job. They ease it partly by automating the invitation and reminder process, but that does not address the root cause of the problem.

We had an interesting experience over the last couple of years. For one of our papers, three journals, one after the other, each took substantial time with no progress shown in the online status updates. After a substantial delay (six months in one case), they returned the MS saying that they could not find any editor or reviewer willing to handle or review it. The six-month delay was with PLOS ONE, which actually has a large network of editors and reviewers from different fields. I am sure they tried their best to find an appropriate person to agree, but with no result. This is just one example. Experiences across the board show that the problem of not getting reviewers is genuine and widespread.

Then something unexpected (not unexpected for me, though!) happened to our paper. After spending almost two years trying many journals and not getting a single review, we decided to publish it in the open peer review journal Qeios. The journal has an AI system that sends requests to reviewers in the field. In addition, any reader is welcome to post comments, and all comments are posted publicly and immediately. The author responses and revisions are also open in the public domain. On posting the paper on this platform, something miraculous happened. Reviews started flowing in within a couple of weeks. Today, after about seven weeks, 16 reviews have been received. I have just finished replying to the reviewers and posting a revised MS. This contrasts sharply with the prior experience in the prevailing system of confidential peer review: zero reviews in two years across multiple journals versus 16 reviews in seven weeks.

How does the quality of reviews compare? No comparison specific to this paper is possible, because there was no review at all in the traditional system. A systematic comparative analysis across the two systems with a sufficient sample size is not possible unless some traditional journals make their reviews public; since they would rather avoid being exposed, they will never do this. So I can only speak anecdotally. In my experience with prior publications, there is no difference between the two in the average quality of reviews: on average, both are equally bad. A minority of reviews are really thoughtful, rigorous, appreciative, critical on the appropriate issues, and therefore useful for improving the quality of the paper. For the reviews on this paper, I would say a greater proportion turned out to be insightful and useful for revision, though there were poor-quality reviews as well. The revised paper, all the reviews, and the replies are here (https://www.qeios.com/read/GL52DB.2). Readers are welcome to form their own opinion.

Talking about my sample size of a few hundred reviews, the majority have been very superficial, saying nothing critical about the central content and making only a few suggestions, such as improving figure 3b or making the introduction more concise. Some comments, such as "more clarity is needed," are universally true and therefore completely useless unless specifics are pointed out. Comments about language correction are especially common: when the author names sound non-British, a suggestion to have the manuscript refined by a native English speaker is almost inevitable. I tried taking help from a professional language service, and even after that this comment appeared. Some reviews are entirely stupid and irresponsible. Precisely this entire range is seen in the reviews received in the open peer review system, but the proportion of sensible reviews appears to be slightly greater there.

Why did conventional journals find it impossible to get reviewers while, for the same paper, the open peer review system produced so many reviews in a short time? I can see two distinct but interdependent reasons.

One is the accept-reject dichotomy. Debate, differences of opinion, and suggestions of any kind are useful and important for science, but a reject decision stops this process. The accept-reject dichotomy has actually defeated the purpose of peer review; the Qeios system takes this dichotomy away. One of the beliefs behind confidential peer review has been that anonymity lets reviewers avoid the rage of authors and the resulting personal bitterness. But the scientific community actually welcomes debates of any kind; what irks authors is rejection, which takes away the opportunity to debate. Here the authors are always free to respond, so reviews do not end up irritating them despite critical comments. Reviewers are not afraid of spoiling relations, and a healthy debate is possible. I suspect that once the social fear is removed, reviewers actually like publishing their comments with their identity disclosed.

The second factor contributing to reviewers' willingness is that they get credit for their review and a feeling of participating in the refinement of the paper, and thereby in the progress of the field. This is the true and healthy motivating factor; other suggestions, such as paying reviewers, are unlikely to have the same motivational effect. Reviewers actually seem to like the idea of their comments getting published. This, I think, is why we received 16 reviews for a paper that did not get a single reviewer in the confidential review system.

A promising inference from this experience is that there is a simple solution to the problem of reviewer reluctance: remove confidentiality, discourage anonymity, and make peer reviews public. If accept-reject decisions are necessary at all, let a discussion between the editor and the authors decide them. Reviewers need not give any accept-reject recommendation; they only write their critical views. If the reviews expose fundamental flaws in the manuscript, the authors themselves would want either to remove the flaws or to withdraw the paper. If they don't, their reputation suffers, because the flaws pointed out by the reviewers get published along with the paper.

All this can work only on the assumption that there are readers who actually read the papers, and the comments as well. About this I am not so sure or hopeful. The culture of making judgments without reading has gripped the entire field very tightly. I can only hope that when reviews become open, readers will stop confusing peer review with validation. Readers will stop relying on the blind faith that reviewers have done a good job and that what they now read is reliable and true. Instead, readers would use their own judgment, aided by the peer comments, and as a result the reading culture would have to improve. If the entire community continues to make quick judgments based only on the name of the journal, at most reads the title and abstract, and feels no need to read further, then only God can save science.