I will try to explain why I think transparency in the review process is a solution to most of the problems associated with scientific publishing. But first, are peer reviews necessary at all, and if they are, what purpose do they serve?
Personally, I feel peer reviews are certainly desirable, if not “necessary” per se. Certainly, good science was being published prior to the 1960s, when only a few journals practised anonymous peer review. But the community has changed substantially, and publishing has changed even more. Earlier, subscribers paid for a journal and authors did not pay to publish. Today the trend is that authors pay and readers enjoy free access. This is not only a technical change; it reflects the changing community and the changing market forces in scientific publishing. So what worked in the early 20th century may not work today. Analysing this change is worth a whole thesis, but for the time being I will only pen down my personal ‘belief’ that peer review has become indispensable in the context of today’s scientific publishing.
What are peer reviews intended to do, and what do they actually do? I believe the original idea of peer review was to supplement the thinking of one research group with that of others in the field, who may have a somewhat different vision. It is possible, and natural, for researchers to get carried away by their own hypotheses or to adopt a narrow view. This can lead to partial blindness and an inability to see the other side of the problem, if there is one. Peers from the same field but with different points of view can complement the authors’ thinking, or at least point out other possibilities, flaws or paradoxes that might arise along with the new finding being reported. By addressing the concerns of such reviewers, authors can improve manuscript quality substantially. Such a peer review would be extremely helpful in any field of science.
However, this is hardly the purpose that today’s peer reviews serve. They are mainly used to decide the acceptance or rejection of manuscripts. Often the accept/reject decision is taken first, and elaborate justifications are then sought for it; this becomes the main purpose and content of the review report. This is partly because most journals are flooded with submissions and are looking for convenient tools to cope with the overload. So the original purpose of peer review is completely lost.
This, in itself, is a substantial degradation of the peer review tradition, but let us accept it as inevitable and ask whether peer reviews serve at least this function reasonably well. This is a valid question and should, in principle, be testable. But it cannot be tested in the present scenario because the data are not available. So, first of all, simply to convince the scientific community that peer reviews really serve their purpose, all peer review data should be made available; that is, peer reviews should be transparent to everyone.
There are more reasons for insisting on transparency. Although good journals can be assumed to choose reviewers carefully, there is no guarantee that these individuals actually review the manuscript themselves. It is all too common for leading researchers to find themselves too busy to give time to seemingly ‘unproductive’ work and to ask their students, even undergraduates, to do the reviews. This practice is widespread, but no data on it are available. Even when senior researchers do the reviews themselves, they can at best devote limited time to them, which often results in irresponsible reviews. Good researchers would not like to be called irresponsible, but that is unlikely to ever happen, because hardly anyone knows who wrote a given review report. More responsible researchers will not take on a review commitment unless they can devote sufficient time to it. Here lies the major problem: the result of their responsible behaviour is that there are more irresponsible reviewers in the field. If the reviewer remains anonymous, the blame for irresponsible reviews falls on the editor. The editor presumably would like the review process to be more responsible, but for an editor, finding a reviewer at all is the foremost problem. Since most responsible researchers will not commit unless they have sufficient time, which they never have, the editor has hardly any choice. A transparent review would expose irresponsibility, if any, and I think that may be sufficient to bring more responsibility into the review process.
Third is the problem of journal quality. The best way to judge a journal’s quality is to look at the rigour and quality of its reviews. But since this information remains hidden, other indices such as impact factors gain the upper hand. There has been serious criticism of impact factors and other numerical indices, but since what is important remains hidden, we make the easily visible important. If all peer reviews were made publicly available, the quality of the editorial process would become transparent, and the grading of journals could be based on what really matters in scientific publishing.
This would naturally drive away the menace of predatory journals. What defines a predatory journal is not that it extracts money from authors; many good journals do that too. The definition rests on the absence of a review system, or the presence of a fake one. If reviews were transparent, such journals would be compelled to undertake serious reviews, and if they did so they would no longer be “predatory”. So transparency of the review process is necessary and sufficient to eliminate predatory journals.
Authors of rejected papers often complain about unfair reviews. This may often, if not always, be true. There is a perception that manuscripts from authors at lesser-known organizations, or from lesser-known countries, are more likely to be returned without review or to be reviewed by second-rate reviewers. There is no way to test this, since no data are available. Reviews can be unfair, biased or irresponsible. Even when a reviewer recommends rejection, the comments should be useful for improving the manuscript. My own impression, based on the comments received on our manuscripts, is that about 20% of the comments are really useful in improving the quality of the manuscript, either for the same journal or for resubmission elsewhere; about 50% are irrelevant to the central argument of the manuscript and attack some peripheral feature, or attack what has not been said in the manuscript; and about 30% are factually wrong or unsupported. But my sample size is small, and my opinion might be biased. If reviews were transparent, readers could decide whether reviews serve their main purpose or not, and which journals have a better proportion of useful reviews.
But above all, I think science will benefit from studying the behaviour of the different players in science and scientific publishing. Science is a human activity, and all elements of human nature are very much present in it. Science does not progress by theorems, hypotheses and evidence alone. Certain components of the human mind and human social behaviour drive the progress of science. There are evolved psychological traits and diverse cultural traditions that decide how science progresses. What people in a field are ready to accept, what they reject and what they prefer to ignore are complex phenomena, not very clearly understood as of now. Studying these aspects of behaviour should be an essential part of understanding science. However, the main source of data for such studies remains unavailable owing to the unnecessary, almost religious, confidentiality of the editorial process. It is utterly necessary to break this barrier.
What are the hurdles to bringing in transparency? One is the power structure of the scientific community. Transparency challenges the power of the powerful, and we can expect extreme resistance to it from the politically powerful lobby in science. For an open-minded, well-intentioned, true scientist, I am unable to imagine any reason not to support transparency of the editorial process. Anonymity, at most, is understandable, and sufficient to minimize the personal conflicts that might arise from review reports. But there is absolutely no reason why the reports themselves should not be made public. The other hurdle would come from authors lacking confidence. If they feel their manuscript has weaknesses that the peer reviews expose, they might be reluctant to publish those reviews. But any honest editor or reviewer, and any confident but open-minded author, would certainly support a transparent peer review system.
How to publish peer reviews?
Scientific publishing has changed substantially with the emerging preprint culture. Preprint services also allow uploading revisions. Just one more step is needed: include the review reports and author responses along with the revisions, independent of acceptance or rejection. My experience with bioRxiv is that they objected to disclosing the journal name, but published the comments and our responses that we uploaded. So a new path to publishing peer reviews has opened up, although with some constraints. If you have posted your preprint, you can always post the review reports you receive. In case this doesn’t work, authors can do so on their own blogs. That is what my attempt here is. I have no idea at the moment whether and how it will click; what I mean by ‘click’ is that a few more researchers feel like doing the same. I presume authors confident about the quality of their work will respond positively; others will not. But if the trend does click, I have no doubt it will revolutionize scientific publishing.
Links to a few more reviews of our manuscripts:
- Inferring causality from correlations: This is an age-old problem with a lot of philosophical discussion but few usable, sound methods to help with actual causal inference. We hit upon what we thought was a possible major breakthrough: usable methods to infer causality from correlations, provided there were three or more intercorrelated variables, not two. There was a simple but sound mathematical basis for all the methods. The only problem was that this was coming from a biology lab, not a group of mathematical statisticians. We got five rejections in a row before finally publishing in PLOS ONE (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0204755). All the editors’ responses and the original manuscripts as communicated are available Here. Interestingly, some found the work too complex, and some thought the mathematics was too simple to publish. Nowhere in the reasons for rejection was our central argument challenged: that causality can be inferred from correlations using a set of methods.
- We had communicated an article to Behavioral and Brain Sciences. It was rejected on the grounds that the journal had just published another article in a similar field and does not publish many articles in the same field. Our article had no overlap with the central argument of the earlier published article; it was merely in the same field. Incidentally, a few weeks later another article on the same subject was accepted, and it came to me for comments. I pointed out that the reason given for rejecting our paper was not true, since the journal does publish many articles in the same field, so they should give some other, more logical excuse for rejecting our manuscript!! This was followed by a series of emails in which a number of secrets of the editorial process were revealed, but the editor raised strong objections to making these emails public. So I am posting Here only the rejection letter and my response to it.