My diagnosis: Why Indian science remains mediocre

Science seems to be monopolized by a few elite institutions and a handful of prestigious journals. Science coming from everywhere else, including India, is mediocre (at least, that is the prevalent belief). What is the reason for this? In my diagnosis it is certainly not infrastructure, funding, talent or collegiality. The true causes are two quasi-religious beliefs: one about journal prestige and one about peer review. The belief that only publishing in big journals makes science big has no foundation; it has never been challenged or tested with data. It persists only out of laziness to read: reading the journal name is far easier than reading the paper, and by human nature scientists do only what is easy.

The second belief, on the other hand, has been tested with data. The hypothesis that prestigious journals run a more rigorous peer review system and publish papers on their quality alone has been tested with experiments and data and found to be wrong. Papers are accepted not on their content but on the prior reputation of the authors and their institutions. This is evident from well-designed studies and analyses of editorial data, and the behavioural origins of the biases have also been clarified. So there is substantial literature on how much, how and why peer review is biased. But so far the consequences of peer review bias had received no consideration. That is what I began doing, in the form of a simulation model, in this paper.

The logic is simple. Research funding, a researcher's motivation and publication output work in a positive feedback loop. Because of the auto-catalytic nature of the process, a small peer review bias can result in a large difference in output. And in reality, peer review bias is not small by any standard. For the same manuscript, the odds of acceptance for authors from a reputed location are over six times those for obscure authors. The difference between acceptance rates across countries is of the same order. Science published an analysis of its editorial data showing a huge difference between the acceptance rates of US and Chinese authors; India was not even considered in the analysis. As expected, much of the country-based rejection happens without the content being read and reviewed. The journal has admitted the difference and has not ruled out editorial bias as the main cause, but it failed to reply to any cross-questions.

When a paper gets rejected, much time and effort are needed to revise and resubmit, and the chances of it getting scooped during this time increase. Failure to publish affects further funding and thereby downstream productivity. In simulations, a small bias (10 %) in peer review resulted in one to two orders of magnitude difference in productivity, even when no difference in research calibre was assumed. More importantly, the difference was amplified by increased competition, which is why the problem is more intense in today's highly competitive academia.
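The paper's actual simulation is not reproduced here, but the auto-catalytic logic can be illustrated with a minimal toy model. The sketch below assumes (my assumptions, not the paper's) a simple multiplicative coupling in which accepted papers raise a group's resources and resources raise its submission capacity; all parameter values are illustrative:

```python
import random

def simulate(bias=0.10, rounds=60, seed=1):
    """Toy feedback loop: publications raise resources, resources raise
    submission capacity. Both groups have identical research calibre;
    only the acceptance probability differs, by `bias`."""
    random.seed(seed)
    base_accept = 0.40                       # illustrative baseline acceptance rate
    resources = {"elite": 10.0, "obscure": 10.0}
    output = {"elite": 0, "obscure": 0}
    for _ in range(rounds):
        for group in ("elite", "obscure"):
            p = base_accept + (bias if group == "elite" else 0.0)
            submissions = int(resources[group])   # capacity tracks resources
            accepted = sum(random.random() < p for _ in range(submissions))
            output[group] += accepted
            # accepted papers attract funding, compounding future capacity
            resources[group] *= 1.0 + 0.1 * accepted / max(submissions, 1)
    return output

out = simulate()
print(out["elite"] / max(out["obscure"], 1))  # ratio well above 1 despite equal calibre
```

Because the bias enters a compounding loop rather than a one-off draw, the output gap between the two groups widens with every round; how far it widens depends on the coupling strength and the number of rounds, which here are arbitrary.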

But more important is the second model of the paper, which is about optimizing novelty. The model highlights the logical possibility that there is an optimum level of novelty of research ideas that brings maximum success. Because of the asymmetry in peer review, the optima for elite and non-elite institutions can be widely different. In an obscure location, researchers doing mediocre work are more likely to be successful than researchers with out-of-the-box, even revolutionary, ideas. This is the most important behavioural reason why science coming from such places is mediocre. There may be, and indeed is, extraordinary talent in every part of the world, but extraordinary science is most unlikely to get published from obscure places. Talent gets discouraged very soon and, by the innate tendency of the human mind to optimize, starts thinking mediocre, because that gives greater returns.
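The trade-off can be sketched without knowing the paper's exact formulation. Assume (again my assumption, purely for illustration) that an idea's payoff rises with its novelty while its acceptance probability falls, and that reviewer scepticism toward novelty is harsher for unknown authors:

```python
import math

def expected_payoff(novelty, scepticism):
    """Payoff if published rises with novelty; acceptance probability falls,
    and falls faster under higher scepticism. Functional forms are illustrative."""
    value = novelty                              # more novel -> more valuable if published
    p_accept = math.exp(-scepticism * novelty)   # reviewers resist novelty
    return value * p_accept

def best_novelty(scepticism, grid=1000):
    # brute-force search over [0, 1] for the novelty maximizing expected payoff
    points = [i / grid for i in range(grid + 1)]
    return max(points, key=lambda n: expected_payoff(n, scepticism))

elite = best_novelty(scepticism=2.0)    # benefit of the doubt for known authors
obscure = best_novelty(scepticism=8.0)  # harsher review of unknown authors
print(elite, obscure)  # prints 0.5 0.125
```

Under these forms the optimum sits at novelty 1/scepticism, so the rational strategy in the harshly reviewed group is markedly less novel work: exactly the behavioural trap described above, regardless of the specific curves chosen.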

If this is true, the way out becomes clear. First, researchers from non-elite locations need to stop giving any importance to getting published in prestigious journals. I have seen pulp, even fraudulent, papers published very frequently in top-ranking journals, so the content of a paper needs to be judged independently of where it is published. The fact that peer review is inherently biased needs to be publicly admitted and publication policy changed accordingly. India is particularly lucky to have a set of journals, published by the Indian academies, that are free to authors as well as readers. The academies need to put the peer reviews in the open public domain, independent of acceptance or rejection. Research evaluation is, in my view, an inherently bad idea; but if it is inevitable, only research published in journals with transparent peer review should be considered. If this policy is declared, researchers will publish in open-peer-review journals, and that would make peer review more responsible.

All this needs honesty and courage. If that itself is lacking (which, I am afraid, is true of the typical Indian academic), then such people will only keep doing mediocre science. Colonial dominance will continue, with biased peer review as its main weapon. Science will be monopolized even more, and the global power imbalance will keep increasing.

Since science is an economic and political power weapon today, peer review bias is subtly driving global economic, social and political imbalance, and nobody seems to realize it. There is substantial literature showing strong peer review biases and imbalances, so why are we shy of even talking about them? I see cowardice as the only possible answer. India has huge research talent, but this cowardice will destroy Indian science. No amount of funding and infrastructure support will change the picture qualitatively. We need to reject the prevalent publication system and start from scratch on our own. I am sure that if such a bold step is taken, it will revolutionize global science, making it more equitable and humanitarian.