Academics: Mend your house first!!

Behavioral and Brain Sciences is a journal that publishes target articles along with invited commentaries from multiple individuals in the field. For those who believe in impact factors, the IF of this journal was 29.3 for 2022. Last year an article by John et al. (2023) was accepted by the journal and published online with a call for commentary. The article was about what they called proxy failure, which is not a new phenomenon, but the authors articulated its different aspects quite well. Often it is necessary to quantify the success of something, and the further path is decided by this measurement. When the goal itself is difficult to measure, some proxy is used to reflect progress toward the goal. This might work initially, but often the proxy becomes more important than the goal itself, and then shortcuts to the proxy evolve that sideline the goal. The system is then likely to fail or derail because of the proxy. The authors illustrated this with several examples from the biological, social and economic sectors.

What struck me immediately was that the biggest example of proxy failure is research under universities and institutes that are supposed to support research. The original article had only a passing mention of academia. I wrote a commentary on this article focusing on proxy failure in academia, which was accepted and is now published. Since the commentary had a word limit, I am giving below a slightly more elaborate and non-technical version. The original, with cited references, is available at https://doi.org/10.1017/S0140525X23002984.

A very well known example of a proxy is exam scores. They are supposed to reflect the understanding of a subject, but typically the proxy becomes the goal and all education gets oriented towards scoring higher in the exams. The same happens in research. Research papers are meant to be written whenever something new and exciting is to be shared with others. But today published papers have become a proxy for one's "success" in research. Getting jobs, promotions and everything else depends upon how many papers one publishes and where. So, inevitably, publishing papers in prestigious journals has become the goal of research. In education there is much awareness, realization and thinking, so that there are individuals and institutions specifically focusing on education beyond exam-centered coaching. But this level of thinking is absent in research, and hardly anyone focuses on addressing this problem.

I feel it is necessary to deal elaborately with proxy failure in academia for two reasons. One is that proxy failure has reached unprecedented and unparalleled levels in academia, leading to bad incentives. So much so that we can easily identify consequences of proxy failure far beyond what the authors describe in various other fields. The authors describe three major consequences of proxy failure, namely the proxy treadmill (an arms race between agent and regulator to hack and counter-hack a proxy), proxy cascade (in a nested hierarchical system, a higher-level proxy constrains divergence of lower-level proxies) and proxy appropriation (goals at one level are served by harnessing proxy failure at other levels). At least three more advanced levels are observed in academia that might be difficult to find in other fields.

Proxy complementarity: Here, more than one type of actor benefits in different ways from a proxy, and they therefore reinforce each other's dependence on the proxy, resulting in a rapidly deteriorating vicious cycle. Since the prestige of a journal is decided by a proxy, namely the citations of its papers, and the impressiveness of a researcher's CV is decided by the impact factors of the journals, the two selfish motives complement each other in citation manipulation. Citation manipulation has become common because it is a natural behavioural consequence of a system relying on proxies, and not only because some researchers are unethical. It is extremely common, and almost inevitable, that reviewers pressure authors to cite their papers, and the authors agree in return for paper acceptance. That this is a common practice is revealed by data in published systematic studies. Institutions and funding agencies also benefit from citation-based proxies, since bibliographic indices allow a pretense of evaluation while saving the cost of in-depth reading of a candidate's research. Reading has a high cost, but a selection committee can (and mostly does) make a decision without reading a candidate's work, thanks to the proxies. Such mutually convenient positive feedback cycles can drive rapid deterioration of the goal. This is becoming the norm so rapidly that nobody even thinks there is anything wrong in evaluating someone without actually reading their work.

Proxy exploitation: This is another inevitable phenomenon in which, apart from the agents in the game optimizing their own cost-benefits, a party external to the field achieves selfish goals by exploiting the prevalent proxies in the field. In academic publishing, profit-making publishers of journals thrive almost entirely on journal prestige as the proxy. Editorial boards appear to strive more for journal prestige than for the soundness and transparency of science. This was evident in the eLife open peer review debate. The members of the editorial board who opposed the change in editorial norms never said open peer review would be bad for science. They said it would reduce the prestige of the journal, which for them was obviously more valuable than the progress of science itself. More prestigious journals often have higher author charges and thereby make larger profits with little contribution to the original goals. That is why journal prestige appears to matter more than the progress of science.

Predatory proxy: This might be the most advanced and disastrous form of proxy failure, in which the proxy devours the goal itself. The authors of the original article described the process of proxy appropriation, where the higher-level goal does a corrective hacking of lower-level proxies. For example, a marketing team might use the number of customers contacted as a proxy for their effort, and this proxy can be inflated easily. But in business, the higher-level player directly monitors the goal of profit making and accordingly controls proxies at lower levels. This does not work in academia, since the higher-level organizations themselves do not have an objective perspective on the goal. The goal of scientific progress is not directly measurable. As a result, not only are proxies used to evaluate individual researchers, they are often confused with the progress of science itself. Here the proxy has clearly replaced the goal.

In many fields of science, highly complex work involving huge amounts of data and sophisticated methods of analysis is being published in prestigious journals while adding little real insight to the field. For example, in diseases like type 2 diabetes, despite the huge amount of research being published and funds being allocated, there is no success in preventing or curing the disease, reducing mortality, or even addressing the accumulating anomalies in the underlying theory. All we have are false claims of success for each new drug, which get exposed when anyone looks at the raw data. A number of papers exposing this have already been published. Nevertheless, large numbers of papers continue to get published and huge amounts of funding are allotted, which by itself is viewed as "success" in the field. Researchers publishing in high-impact journals get huge respect and funding, although the disease keeps increasing in prevalence and society has not benefited from the research one bit.

Failing to achieve the goal is not a crime in science, but quite often the failure is disguised as "success" and researchers receive lifetime "achievement" awards. Such awards have been given to diabetes researchers. No scientist receiving such an award appears to have admitted that they have actually failed to "achieve" the real goals. The efforts of a researcher who has failed by this definition should still be appreciated, but they should not be called "success" or "achievement" just because papers were published in prestigious journals. The worst outcome of proxy failure in academia is the failure to identify research failure as failure. Many other fields, including theoretical particle physics and string theory, have received similar criticism. Much intellectual luxury is getting published without adding any useful insight to the field. Because it is published in high-prestige journals it is called success, although it contributes nothing useful or insightful.

In the last few years many papers have demonstrated that the creativity and disruptiveness of research have declined substantially. Interestingly, this decline is evident even when it is measured by proxies. The advanced forms of proxy failure described above are most likely the reason for this decline in real scientific progress. Simultaneously, the frequencies of research misconduct, data fabrication, the reproducibility crisis, paper mills, predatory journals, citation manipulation, peer review biases and paper retractions are alarming and on the rise. The blame for this cannot be thrust on a few individuals indulging in malpractice. This is the path the system is bound to take by the principles of human behaviour. The structure and working of academia pretend that human behaviour does not exist, that there are only ideals and goals. An academic system that ignores human behaviour can never work, because the epistemology engine runs entirely on human fuel.

Interestingly, many researchers today work on aspects of human behaviour: behavioural economics, behaviour-informed system design, behaviour-based policy. This is a thriving field; Nobel Prizes have even been given in behavioural economics. All of this is potentially relevant to academia, but researchers in these fields avoid talking about the design of academic systems. The academic system is the nearest, most accessible and most relevant system for them to study. This is the second important reason why studying proxy failure in academia needs to be prioritized. However, research addressing the behavioural aspects of academia is scanty and fragmentary, and not yet even close to addressing the haunting questions at a system level. The most academia has done is to set up offices for monitoring research ethics, which hardly appear to prevent misconduct. Unless researchers address the issues of behaviour-based system design in their own field and come up with sound solutions; unless they redesign their own systems to make them behaviourally sound and less prone to proxy failure; unless they are able to minimize flaws and make the system work smoothly towards its goals, why should other fields follow their advice to redesign their systems? When I read anything about behaviour-based policy, the natural first reaction of a citizen like me, working outside mainstream academia, is: "Researchers, mend your house first!!"

The watermelon at the window, and wildlife

A few years ago, an amusing news item, complete with video, went viral. A train stopped for a very short while at a small station. On the platform a man was selling watermelons, priced cheap. A passenger spotted them through the window of the coach opposite, paid through the window, reached out with both hands and took a large watermelon. Just then the train pulled out. Only now did he realize his mistake: how was he to get the watermelon in through the bars of the window? He had got this big, beautiful fruit very cheap indeed, but now he could neither bring himself to throw it away nor take it inside and enjoy it.

Many policies and schemes implemented without thinking far enough ahead end up in the situation of that watermelon at the window. The scheme appears to have succeeded, but the very success creates a host of problems with no visible way out. India's wildlife conservation policy is stuck in exactly this state today. In the middle of the twentieth century, the condition of forests and wild animals was very bad. There was fear that many species would go extinct. Reckless destruction was going on, and it urgently needed to be curbed. The Wildlife Protection Act of 1972 was a timely and, to a large extent, well-implemented major step. It cannot be claimed that no mistakes were made, no injustice done, no corruption committed in the process. Still, true to the original intent, success came in many respects. Good sanctuaries were maintained, and a system of managing them took shape. Many animal species were saved from extinction; their numbers first stabilized and then began to grow. In a few cases, such as the great Indian bustard or the vultures, success may have eluded us, but otherwise the success stories are many. Public awareness grew on a large scale. Love for wild animals, birds and plants took root. The younger generation produced many nature lovers and naturalists. In addition, the number of people visiting sanctuaries purely for enjoyment and photography grew, and a great deal of money came in with them.

In short, the wildlife protection and conservation scheme bore a very large fruit very quickly, in just fifty years. And now this fruit is growing so fast that it has become the watermelon outside the window. Animal populations in the sanctuaries have grown, and the animals are now spreading rapidly elsewhere. In fifty years, that is, thirty to forty generations for animals like the wild boar and ten to twelve generations for animals like the gaur, no individual has seen hunting, so the old fear of humans is gone. They now routinely trample through fields and feast on crops, and no longer respond to attempts to drive them away or guard the fields. Once, wild animals lived in the forest and came out at night to graze in the fields around it. Soon they began giving birth right in the fields and orchards; by now several of their generations have never seen a forest at all. Some estimate is at least made of animal numbers inside sanctuaries, and sanctuary management has at least some discipline. But nobody has any count of how many animals are outside the sanctuaries or how fast their numbers are growing; no one has even thought of attempting one. We are nowhere near managing them. Nobody has proper figures for how much damage the animals do to crops and other property. The principle of paying compensation exists on paper, but it is impractical; making it real would itself require a management system, and no simple, fair and workable one has been created. As a result the problem is now truly out of hand, and in large parts of the state farmers' lives have become miserable. According to one study, the direct and indirect crop damage caused by wild animals in the state of Maharashtra is of the order of twenty-five thousand crore rupees a year, of which only about a hundred crore is paid out as compensation. The procedure for filing a damage complaint is impractical, and nobody knows how the loss should be measured, so the farmer ends up with nothing. After one or two such experiences, farmers stop filing complaints altogether.
On paper, only about one percent of the actual damage gets recorded. So although everyone actually working on forests and farming knows in their heart that the problem is huge, the babus sitting in their chairs cannot see it. Going by the last two or three decades, the damage is doubling every five to seven years. Even farmers nowhere near any sanctuary now face this problem. If this rising curve continues for another ten years, a very large fall in the state's agricultural output from this one cause alone is inevitable.

In the last five years an entirely new problem has been added: attacks on people by animals like tigers. The number of people actually killed may not be large, but the effect on people's minds is enormous. Farmers have always had to sit up at night in the fields guarding the crop. But experienced farmers say that animals which would run away at a mere shout twenty-five or thirty years ago have now grown so bold that even firecrackers can hardly save the crop. And with the new fear of tigers, leopards and bears, going out to guard the fields at night itself now feels dangerous. If the crop is not guarded, a single herd of wild boar or nilgai can wipe out whole acres in one night. So the fear that farming, the very basis of livelihood, will be lost because of the tiger is many times greater than the number of people the tiger actually kills. And no one has even thought about managing all this effectively, let alone implemented anything. This is a question of management, not of sentiment.

If the watermelon cannot be brought in through the window bars of a moving train, only two options remain: give up the temptation and throw the watermelon away, or cut the bars and bring it in. Of the two, abandoning wildlife protection and conservation altogether is not right. The only remedy is to make fundamental and useful changes, grounded in study and research, in the 1972 wildlife law whose framework forms these bars. If this process is not started immediately and completed as soon as possible, it will not take many years before the agricultural economy collapses across the country, farmers are reduced to penury, and it all ends in growing inequality, division and civil strife in society. The fear is often expressed that destroying nature may endanger human existence itself. That is true. But we cannot turn a blind eye to the fact that man's thoughtless love of nature can also destroy society and the economy.

A reason to welcome AI in science publishing:

More and more concern is being raised about the problems in academia, which are rapidly expanding both qualitatively and quantitatively. Hardly anyone will disagree that there is a reproducibility crisis and an increasing frequency of fraud and misconduct at every level. The burden of APCs is destroying the level playing field (if there ever was one), so that only the rich can publish in prestigious journals. The bibliographic indices have almost taken away the need to read anything, because the importance of any piece of work is gauged by the journal impact factor and the performance of any researcher by the number of papers. So nobody reads research papers anymore; citing them does not require reading them anyway. The rapidly changing picture in academia is a perfect case of proxy failure in which proxies have completely devoured the goal of research. Asking novel questions, gaining new insights and solving society's problems is no longer the goal of research; publishing papers in prestigious journals and grabbing more and more grants is. With this, a downfall of science is bound to happen, and the actual downfall that has already begun is well demonstrated by many published studies.

An additional serious concern now is AI. Of late many researchers have been using AI to write papers whose apparent quality of presentation is often better than what the researchers themselves could have produced. At present AI tools have many obvious flaws and get caught red-handed quite often; incidents in which a hallucinating AI cited references that did not exist have come to light. But AI will soon get better, and then the flaws will become harder to detect. In response to the first wave of AI-generated papers, some journals banned them, but enforcing such bans will soon become impossible. An arms race of smarter frauds and smarter whistleblowers is not exactly going to be good for science.

Who will benefit the most from the more refined AI tools? Certainly the people involved in research misconduct, because fraud will become harder and harder to detect as AI gets smarter.

And precisely for this reason I would welcome the use of AI in science publishing, because it can get us out of the mess we have created over the last few decades. The mess has been created by the 'publish or perish' narrative that nurtured bad incentives. This set of bad incentives gave rise to the journal prestige rat race, citation manipulation practices, predatory journals, as well as the prestigious robber journals exploiting researchers' desperation to publish. AI will help us come out of this mess not by smarter detection of fraud and misconduct, but by enhancing fraud and making it more and more immune to detection.

It has already become common knowledge that many papers are being written with substantial contributions from AI. There is yet to be an example in which a deep and disruptive insight was contributed by AI. The limiting factor in AI-generated science will still be the scientists who decide whether to accept or reject what the AI output says. If AI gives an output that goes against the current beliefs and opinions in the field, it is most likely to be rejected on the grounds that AI sometimes throws up junk (which might be true at times, but who knows) and that we need not take every output as true. So AI will make normal, routine, ritualistic science more rapid. It will deliver more easily and efficiently what people in a field are already expecting. But I doubt whether it will be of any help in a Kuhnian crisis.

But AI will tremendously help those who want to strengthen their CVs deceptively by increasing their number of publications and inflating citations. This trend will increase so sharply that it will soon collapse under its own weight. As writing papers becomes easier, their value in a CV will fall rapidly. Institutions will have to find alternative means to "evaluate" their candidates and employees. Clerical evaluation based on numbers and indices will become so obviously ridiculous that it will have to give way to curious and critical evaluation, which is not quantifiable and cannot be done without effort and expertise. Critical thinking, disruptive ideas and reproducibility will have to take the front seat and replace bibliographic indices. Nothing could be more welcome than this. If the importance of the number of papers published and of citation-based indices vanishes, most bad incentives for fraudulent practices will vanish in no time. Paper mills and peer review mills will collapse. There will be no more profiteering, by predatory journals or by robber journals. The edema of science will vanish, because it will no longer be confused with growth. So let AI disturb, disrupt and destroy mainstream science publishing; that is my only hope for saving science from its rapidly declining trustworthiness.

This will certainly happen, but not very smoothly. There will be a decade or more of utter crisis and chaos, which has already started. The entire credibility of academia will be at stake. People will question the very existence of so many people in academia drawing fat salaries and contributing nothing to real insight. If academics are prudent, they should start the process of rethinking sooner, to shorten the period of crisis and chaos. Vested interests of publishers and power centers in academia will not allow this to happen so easily; their attempt will be to keep things entangled in the arms race. But there still are sensible people in science, aren't there? The only sensible thing is to allow the current system of science publishing to get crushed under the weight of the AI-assisted edema and then make a fresh beginning, in which HUMANS, and not some computed indices, make value judgments, identify and appreciate real and insightful contributions to science, and form a community of innately curious and investigative minds with no added incentives and rewards. Then let them take the help of AI for anything. Once the human rat race has vanished, AI will be tremendously useful for its positive contributions.

Covid-19 pandemic: What they believe and what data actually show

With one more paper published today, we have completed a series of four papers on the epidemiology of the pandemic, examining what everyone has generally believed. There are some expected but many surprising outcomes. This is what the four papers say, in summary:

  1. Much before vaccines came in, the virus had already started losing its virulence rapidly, a trend that continued despite some temporary hiccups. Today it is not much different from flu or the common cold. After examining several alternative explanations for this trend, we showed that the loss of virulence is largely due to evolution of the virus. Vaccines played only a marginal role.
  2. Lockdowns and other restrictions were not at all effective in controlling the rate of transmission in the long run.
  3. Small increments in immunity due to subclinical infections and exposure to frequent but small doses of the virus played a major role in shaping the epidemics. All waves were substantially smaller than what the models predicted. This was not because of lockdowns; it was because of the small immunity increments conferred by frequent small exposures. Carefully comparing the testable predictions of the two candidate causes consistently supports the small-immunity-increment model and rejects the lockdown model.
  4. The repeated waves were not caused by new variants, as most commonly believed. Forget causation; there is not even a statistically significant association of new variants with new waves. The repeated surges were caused by rapidly declining immunity. At times a new variant successfully rode an upcoming wave, but that was a consequence, not the cause.
  5. The succession of variants was not mutation-limited. The pattern shows clear signs of selection-limited evolution.

Links to the four publications showing the above are here

https://peerj.com/articles/11150/

https://www.currentscience.ac.in/Volumes/122/09/1081.pdf

https://doi.org/10.32388/LLA6AO

https://link.springer.com/article/10.1007/s12038-023-00382-y

In addition to the published work, there were more analyses and inferences that remained unpublished, though some were presented in a couple of meetings. There is much more in the data that can be explored with new questions. This can be a good exercise for students learning public health, statistics, or simply learning science. I will illustrate this with a couple of examples below.

The graph shows the appearance and spread of successive variants in the UK, each variant's proportion among sequenced samples shown in a different color. The lower half shows the incidence surges on a date-matched scale. By the fundamental principles of statistics, one should start with the null hypothesis that the emergence of a new variant and the waves of transmission are independent of each other. Because so many variants keep coming up, a new wave may coincide with some upcoming variant by chance alone. Unless this null hypothesis is rejected, one cannot conclude that a wave is associated with a particular variant. This can be done for every country where there is sufficient variant data. Believe me, this exercise has never been done in the published literature. The conclusion that new variants cause waves was reached just like that, without the minimum required statistical analysis. This is still untapped and open for someone to analyze.

Figure 1: The succession of variants each shown in a different color and successive waves of epidemic. What null hypothesis will be appropriate to show significant association of a wave with a variant?
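The null-hypothesis exercise described above can be sketched as a simple permutation test. Everything below is illustrative only: the function name, the 30-day coincidence window and the example dates are my own assumptions, not taken from the papers; real variant emergence dates and wave onsets would come from sequence databases and case-count data.

```python
import random

def variant_wave_permutation_test(variant_days, wave_days, window=30,
                                  n_perm=10_000, seed=0):
    """Permutation test for association between variant emergence and waves.

    `variant_days` and `wave_days` are day numbers (e.g. days since the first
    recorded case).  The test statistic is the number of variant emergences
    falling within `window` days of some wave onset.  The null distribution
    is generated by scattering the variant dates uniformly over the observed
    period, mimicking independence of variants and waves.
    """
    rng = random.Random(seed)

    def hits(days):
        return sum(any(abs(v - w) <= window for w in wave_days) for v in days)

    observed = hits(variant_days)
    span = max(variant_days + wave_days)
    at_least_as_extreme = 0
    for _ in range(n_perm):
        shuffled = [rng.uniform(0, span) for _ in variant_days]
        if hits(shuffled) >= observed:
            at_least_as_extreme += 1
    # One-sided p-value with the usual +1 correction
    p_value = (at_least_as_extreme + 1) / (n_perm + 1)
    return observed, p_value
```

Only a small p-value would justify rejecting independence for a given country, and only then could one begin to talk of a variant "causing" a wave. With many variants and relatively few waves, coincidences arise easily under the null, which is exactly the point made above.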

Consider another example. Was the vaccine responsible for making the infection mild? This question is difficult to answer because natural immunity is a confounding factor. But Australia and New Zealand offer an opportunity. After an initial surge, these two island countries succeeded in keeping themselves free of infection for almost a whole year. Then the infection re-entered and spread widely. But by this time vaccines had arrived. About 80% of the population got vaccinated, and coverage saturated there for multiple reasons; it did not climb to 100%. The mortality in the first wave of March to Sept 2020 was 30 to 35 times higher than that during Aug to March 2022. Because there was practically no natural infection for about a year, and immunity is now known to be short-lived, we can assume little natural immunity in this population at the beginning of the second wave. Did the vaccine cause the 30-35 fold difference? This is impossible, because only 80% of the population was vaccinated. If we go by the limiting assumption (Model 1 in the figure) that vaccination completely eliminated mortality in the vaccinated but did not prevent infection, then only a 5-fold difference is expected, because one fifth of the population remained unvaccinated. The remaining 6-7 fold is not explained by the vaccine. If we make the other extreme limiting assumption (Model 2), that the vaccine prevented infection with 100% efficiency and only the unvaccinated got infected in the second wave, then mortality among the infected should not have reduced at all if the vaccine was the cause of decreased mortality. The reality should lie somewhere between the two limiting assumptions. This means vaccines explain somewhere between zero and a 5-fold difference out of the 30-35 fold difference.

Figure 2: The difference between CFR in the first wave and second wave on a log scale. Gray represents the maximum difference explained by vaccines in models 1 and 2 respectively. The rest is likely to be explained by virus evolution.
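The two limiting models amount to a few lines of arithmetic, sketched below. This is only an illustration of the back-of-the-envelope reasoning described above: the 80% coverage and the 30-fold mortality drop are the figures quoted in the text, and the function name is my own invention.

```python
def vaccine_explained_folds(observed_fold, vaccinated_fraction):
    """Residual fold-drop in mortality left unexplained under each limiting model.

    Model 1: vaccination abolishes death among the vaccinated but does not
    prevent infection, so the expected fold-drop is 1 / (unvaccinated fraction).
    Model 2: vaccination blocks infection entirely, so only the unvaccinated
    get infected and mortality among the infected is unchanged (fold-drop 1).
    """
    model1_explained = 1.0 / (1.0 - vaccinated_fraction)  # 5x at 80% coverage
    model2_explained = 1.0                                # CFR unchanged
    return (observed_fold / model1_explained,
            observed_fold / model2_explained)

residual_m1, residual_m2 = vaccine_explained_folds(30, 0.80)
# With an observed 30-fold drop and 80% coverage, Model 1 leaves roughly a
# 6-fold difference unexplained, and Model 2 leaves the full 30-fold unexplained.
```

Since reality must lie between the two models, the unexplained residual of 6-fold or more is what virus evolution (or some other factor) has to account for.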

I wonder why nobody sees such patterns, why nobody challenges the religion-like beliefs in the field of public health. This must be because we never teach science as a way to test, examine or challenge a hypothesis. We teach science as a religion.

There are more questions that can be asked with the available data, and anyone can do this. I recently announced on my social media accounts that I would welcome anyone to join me in addressing many such questions, with no qualification requirement other than genuine interest. I received an amazing response, so much so that I have to rethink how to handle it. But I am sure that if I can motivate even a handful of them, much useful science can be done and many beliefs re-examined, not only in the pandemic data but in many other problems of society that need attention.

A solution to the reviewer problem

Not being able to find anyone who agrees to review is a growing concern of journal editors. Often editors have to send requests to 10-20 potential reviewers before one or two agree, setting aside the time commitment and the repeated reminders required. Editors therefore have a tough job. They ease it partly by automating the invitation and reminder process, but that does not address the root cause of the problem.

We had an interesting experience over the last couple of years. For one of our papers, three journals, one after the other, each took substantial time with no progress shown in the online updates. After substantial delay (six months in one case) they returned the MS saying that they could not find any editor or reviewer agreeing to handle or review it. The six-month delay was with PLOS One, which actually has a large network of editors and reviewers from different fields. I am sure they tried their best to find an appropriate person, but with no result. This is just one example. Experiences across the board show that the problem of not getting reviewers is genuine and widespread.

Then something unexpected (not unexpected for me though!!) happened to our paper. After spending almost two years trying out many journals and not getting a single review, we decided to publish it in the open peer review journal Qeios. The journal has an AI system that sends requests to reviewers in the field. In addition, any reader is welcome to post comments. All comments are posted publicly and immediately, and the author responses and revisions are also open in the public domain. On posting the paper on this platform, something miraculous happened. Reviews started flowing in within a couple of weeks. Today, after about seven weeks, 16 reviews have been received. I have just finished replying to the reviewers and posting a revised MS. This contrasts with the prior experience of not getting a single review in two years across multiple journals under the prevalent system of confidential peer review: zero reviews in two years versus 16 reviews in seven weeks.

How does the quality of reviews compare? No comparison specific to this paper is possible because there was no review in the traditional system. A systematic comparative analysis across the two systems with a sufficient sample size is not possible unless some of the traditional journals make their reviews public. The traditional journals just want to avoid getting exposed, so they will never do this. So I can only talk anecdotally. In my experience with prior publications, there is no difference between the two in the average quality of reviews. On average, both are equally bad. A minority of reviews are really thoughtful, rigorous, appreciative and critical on appropriate issues, and therefore useful for improving the quality of the paper. For the reviews of this paper, I would say a greater proportion turned out to be insightful and useful for revision. But there were poor-quality reviews as well. Here is the link to the revised paper, all the reviews and replies (https://www.qeios.com/read/GL52DB.2). Readers are welcome to form their own opinion.

Across my sample size of a few hundred reviews, the majority have been very superficial, saying nothing critical about the central content and making a few suggestions like improving figure 3b or making the introduction more concise. Some comments, such as "more clarity is needed", are universally true and therefore completely useless unless specifics are pointed out. A comment about language correction is very common. When the author names sound non-British, a suggestion to get the manuscript refined by a native English speaker has to be there. I tried taking help from a professional language service, and even after that this comment had to be there. Some reviews are entirely stupid and irresponsible. Precisely this entire range is seen in the reviews received in the open peer review system. But the proportion of sensible reviews appears to be slightly greater in open peer review.

Why is it that conventional journals found it impossible to get reviewers, while for the same paper the open peer review system produced so many reviews in a short time? I can see two distinct but interdependent reasons. One is the accept-reject dichotomy. Any kind of debate, difference of opinion or suggestion is useful and important for science. But a reject decision stops this process. The accept-reject dichotomy has actually defeated the purpose of peer review. The Qeios system takes this dichotomy away. One of the beliefs behind confidential peer review has been that anonymity lets reviewers avoid the rage of authors and the resultant personal bitterness. But the scientific community actually welcomes debates of any kind. What irks them is rejection, which takes away the opportunity to debate. Here the authors always have the freedom to respond, so reviews do not end up irritating the authors despite critical comments. Reviewers are not afraid of spoiling relations, and a healthy debate is possible. I suspect that once the social fear is removed, reviewers actually like to publish their comments with their identity disclosed. The second factor contributing to reviewers' willingness is that they get credit for their review and a feeling of participating in the refinement of the paper, and thereby in the progress of the field. This is the true and healthy motivating factor. Other suggestions, such as paying reviewers, are unlikely to have the same motivational effect. Reviewers actually seem to like the idea of their comments getting published. This, I think, is why we received 16 reviews for a paper that did not get a single reviewer under the confidential review system.

A promising inference from this experience is that there is a simple solution to the problem of reviewer reluctance: remove confidentiality, discourage anonymity and make peer reviews public. If accept-reject decisions are necessary at all, let a discussion between the editor and the authors decide them. Reviewers need not give any accept-reject recommendation; they only write their critical views. If the reviews expose fundamental flaws in the manuscript, the authors themselves would want to either remove the flaws or withdraw. If they don't, their reputation suffers, because the flaws pointed out by the reviewers are published along with the paper.

All this can work only on the assumption that there are readers who actually read the papers and the comments as well. About this I am not so sure or hopeful. The culture of making judgments without reading has gripped the entire field very tightly. I can only hope that when reviews become open, readers will stop confusing peer review with validation. Readers will stop relying on the blind faith that reviewers have done a good job and that what they now read is reliable and true. Instead, readers would use their own judgment, aided by the peer comments, and as a result the reading culture would have to improve. If the entire community continues to make quick judgments based only on the name of the journal, at most reads the title and abstract and feels no need to read further, then only God can save science.

My best is yet to come

Many years ago, just past mid-career, someone asked me, "When do you think was your best time in academia?" I replied in less than a second, "I think my best is yet to come." We talked further about this. The belief behind her question was that in any creative person's life there is a relatively short period of very high creativity or valuable output. It might be just a stroke of luck, or perhaps that much creativity cannot be sustained lifelong. There are many examples of this in the history of science as well as the arts, she said.

I do not know whether this is generally true or not, but if I ask the same question of myself, my answer would still be the same even after retirement. My best is yet to come. What I mean by "best" here is in terms of understanding, creativity, disruptive thinking, innovativeness, curiosity and productivity in science. I can actually feel it growing rapidly after retirement, despite many limitations such as no funding, few students around to work with, some inevitable loneliness in work and so on. Perhaps an even more serious limitation, in the view of those who believe in it, is that I can no longer publish in journals with unreasonable author charges, as I have no money to pay.

What is getting better, and why? First of all, academic rituals such as the PhD no longer interfere in my research. The invisible peer pressure that restricts your direction of thinking is no longer felt. No career worries exist for me or for anyone working with me. After becoming free of academia, my output actually increased instead of decreasing: not only in the number of papers, but also in the diversity of problems addressed, the depth of the work (if not its volume), the challenges posed to prevalent beliefs, the cleanness of arguments and so on. If I rank the quality of my own papers across my lifetime's work, many of the topmost came during the last five years, after quitting academia. So at least in my experience, academia created more hurdles in my path than it helped. The only true help from academia was, of course, the salary. It was so much in excess of my needs that it let me save and invest in such a way that I can now continue to work without any salary, and at times even support a needy student from my own pocket.

My perspective on science is encompassing new dimensions that we never learnt as students and ignored as researchers. Science is not only about questions, hypotheses, experiments, data, analyses, inferences and so on. It is fundamentally a highly complex and continually evolving behavioral and social process that cannot be separated from the core principles of science. In fact, visualizing the principles of science independently of the behavioral and social dynamics is itself incomplete and flawed. Even in behavioral science training, this is seldom discussed. Academia itself is the biggest challenge to behavioral science. This thought is not entirely new. Behavioral scientists have looked at, and continue to study, the knowledge process. But the angles covered so far are too few and too narrow. Much of the complexity remains unexplored. I think I can now visualize some of these unexplored angles better, and I will keep studying them in depth.

There are many missing elements in the methods of science as well. People continue to work with flawed experimental designs, and the flaws are never pointed out. The logic behind establishing a cause-effect relationship is still primitive and has many unexplored principles. At times these principles have been stated somewhere, sometime, but most experimenters are unaware of them. As I realize more and more of this, my understanding of science gets deeper, and I wonder why nobody taught us these things in science curricula or during research training.

I am also realizing how history, not only of science but of economics and politics as well, affects research approaches and methodologies. Historians of science have not used the present enough to study history. The history of the present is a big and unexplored field that reveals much of the subtle social and behavioral processes in research. Again, owing to historical and ideological reasons, meta-science, or the science of science, has locked itself into such a narrow vision that it is missing a lot. I think I can see at least some of the missing pieces clearly.

I have never seen the field of science with as much clarity as I have now, and it is only increasing day by day, going much beyond what we were made to believe and revealing the naked realities. This should result in visualizing a better design for academia. I know that nobody will listen to me, and I also know who will resist any change in academia and why. But sound systems of knowledge generation and education, compatible with human behavior, need to be designed, and someone thinking about behavior needs to take the initiative. Everyone knows that the growing research misconduct, data fabrication, paper mills, biased peer reviews, the extortionate system of science publishing, and unscientific selection and evaluation systems all arise from the bad design of academic structures. You cannot blame someone, punish someone and expect it to stop. Academic systems need a complete revamp. But what the structure should ideally (not ideologically) look like, no one is even talking about. This question has become a priority investigation for me.

I do not know whether anything I study and write will be of any use to the world, but I am sure my science vision will get deeper, clearer and more useful at least to me day by day. In this sense my best is indeed yet to come!!

How to stop the progress of science

“I realize that in effect, I have done a lot of harm to science.”

I am personally responsible for arresting, or at least creating hurdles in, the progress of multiple fields of science, and I proudly take the blame on myself. Any curious person working outside the temples of the greater gods of science can easily put hurdles in the path of research, and all he/she has to do is good research. Good research by a lesser scientist stops the progress of science.

Sounds strange? Yes, it is weird, but the weirdness comes from the weird structure of academia, especially its career paths. As I have said many times, doing good science and building a successful career in science are two independent things, not correlated well. In fact, there are certain trade-offs between the two. You need to compromise on at least a few aspects of science if you want to build a great career in present-day academia. The way the career path creates hurdles for science is not difficult to perceive, but it is generally not realized and not said explicitly.

What motivates a young researcher to investigate a certain field? Ideally it should be curiosity, a troublesome question, a serious problem faced by society and the like. The factors that actually decide what a career-minded researcher works on are: good chances of getting results, the potential to publish in good journals, the possibility of being the first to discover or achieve something, and so on. The two might overlap at times, but more commonly they stand in contrast, and even in conflict.

On the other side are the biases of peer review and journal prestige. Young researchers in more reputed institutions have better chances of publishing in high-ranking journals, provided they meet certain criteria. They tend to choose what to work on based on the chances of fulfilling these criteria. Any replication of a new result is important for science but is unlikely to give a high-impact paper. Replicating experiments does not improve career prospects. Therefore testing the reproducibility of an experiment is not the preferred line of work for ambitious researchers. The entire reproducibility crisis stems from this factor.

In contrast, novel concepts or path-breaking research are not expected from children of a lesser god. Even when they deliver them, they cannot publish in prestigious journals, since the most likely fate is desk rejection merely on seeing the name of the country, university or institution. They most likely end up publishing in low-prestige journals. As a result their work is hardly read or cited by anyone; citing a paper from a lesser journal is beneath the dignity of elite researchers.

So what happens when someone from an obscure background publishes a real breakthrough or opens up a potentially new line of work?

It is simple. That line of work will never progress. There are three ways in which this new line could have progressed. One is that the person gets sufficient funding to further the work he/she pioneered. The second is that someone from the elite class recognizes the importance of the concept and takes the initiative to collaborate. The third is that someone from the elite class develops the same thinking independently and takes it ahead, with or without giving due credit to the third-world scientist. In the last case there might be injustice to the third-world scientist, but science does progress. Even in the second case, the elite may carry the work forward without collaborating or giving credit to the original discoverer. This again is unfair, but science would progress nevertheless. That is not so bad in my view. There are multiple examples of this in the history of science.

In today's world of rapid literature accessibility, none of this is very likely. The poor original discoverer will not get funded because the concept does not come from the elite. Their work will not be taken seriously in the field since it is published in a lesser journal. Even an elite researcher who thinks of the idea independently, is excited by it and is convinced that it can bring about a revolution, but discovers that someone has already published it, will not take it forward, because that will not give him/her a big career boost. The net result is that all three paths of progress are blocked, and this line of work gets arrested in spite of its potential to yield revolutionary insights.

This is what happened to my science throughout my life. I was not career conscious, merely fond of ideas. I pursued a number of novel ideas in a diversity of fields, showed their mathematical and logical soundness, supported them with evidence, primary experiments and secondary data, and made testable predictions, some of which accidentally got tested and found support. Some of them have the potential to give a completely new turn to the field and open up new lines of thinking. But nothing of this happened, nor is it likely to happen in the near future. Nobody found any flaws in my arguments, nobody doubted their validity or relevance to the field, and I came to know that some giants in the field were quite aware of what I published and appreciated it in private. Still, nothing happened further along these lines. I never got funded to continue work on my own ideas. (I did get huge funding at times, but only when I toed the line of a giant in the field.) I could not publish my original ideas in the top-ranking journals because they were declined every time without review. Some of the ideas were quite obvious, just a little ahead of their time, so I cannot believe that no one else ever thought of them independently. But once I published first, nobody was interested in them.

I realize that, in effect, I have done a lot of harm to science. Anyone working outside the mainstream community and doing good science actually creates hurdles in the progress of science. Whom shall we blame for these hurdles? It seems to be a crime to do creative research and generate novel and sound ideas outside the mainstream, and someone like me is a criminal in the field.

My only true peer review experience

Transparency of methods, data, analytical tools and programs used is extremely important for science. This is well recognized, and many journals ask authors many questions to ensure transparency even before processing a manuscript. Ironically, peer reviews, which form the backbone of today's science publishing, are neither transparent nor accessible to anyone. I haven't seen a bigger contradiction in my life.

Recently I uploaded two papers to the open peer review journal Qeios. One is titled "Behavioural optimization in scientific publishing" (https://doi.org/10.32388/8W10ND.3), and another, more recently uploaded, "Evolution of new variants during the SARS-CoV-2 pandemic: mutation limited or selection limited?" (https://doi.org/10.32388/LLA6AO). The first, in less than one month, received 13 reviews (in addition to two elaborate responses sent to me by email) and has undergone two revisions. This response is unique in my experience, both in promptness and in rigor. The most common scenario with conventional journals is that most reviewers respond only after multiple reminders. It is obvious that many of them have not read the complete manuscript: they raise questions that are already elaborately answered in the paper, make sweeping statements without giving a single reference, or suggest citing references that are not relevant. In contrast, this paper received both appreciation and detailed critical comments. Differences of opinion were expressed without hesitation, but there were none of the derogatory remarks I have seen so commonly in confidential peer reviews.

I spent a total of 7-8 full days reading, preparing detailed responses and revising twice, all of which I fully enjoyed. With one person I had an online discussion as well. Numbers alone cannot reveal the real fun of open peer review. The mindsets during the interactions were unique. In conventional peer review, while authors are revising or replying to comments, scientifically sound arguments are not enough: they have to worry about satisfying the reviewer along with his preconceived notions, prejudices and ego. There is no choice but to please the reviewers. I have experienced this from both ends and have written about it earlier (https://milindwatve.in/2020/09/17/how-peer-reviews-are-degrading-the-spirit-of-science/). This time I found myself committed only to the logical soundness of arguments, without worrying about interpersonal complexities. Wherever we differed, I had no hesitation in saying that we differ on this issue. I sensed the same attitude in the interacting reviewers too. This is great. This is a real platform for science.

This anecdote strengthens my conviction that open peer review will substantially improve the quality of reviews. Open peer review has been debated many times on different platforms. Many people fear that it will increase the responsibility of reviewers and therefore make them even more reluctant to review; we already have a dearth of reviewers. Do 13 reviews and 2 revisions within one month support this fear? There were also worries that reviews would not be critical enough. Just read the reviews of this paper to see whether that is true; they are all there for everyone to read at the same link.

The most important point is that anything that is not transparent is NOT science. I have over 200 experiences of receiving peer reviews, and dozens, if not hundreds, of experiences of peer reviewing for journals, until I started replying to review requests saying that I would accept only if the review were made public. This is not a small sample size. I have received high appreciation as well as irresponsible and insulting remarks, highly entertaining stupidity and "oh, how come I didn't think of this" moments. But still I would say this is the first time in my life that I thought I was doing science and only science. On all previous occasions there was a feeling of either facing an exam or standing in a court of law. The word "peer" indicates standing on level ground. For the first time in my life I was really standing on level ground and talking only of science. In this sense this is the first "peer" review of my life; all the others felt like being a criminal under trial. My sincere request to all fellow researchers is to experience this from both ends, as authors and as reviewers. Open peer review is the future of science, if science is to remain science all the time. Otherwise the flaws, biases, power play, inequality, racism, discrimination, regional imbalance, exploitation, bad incentives, reproducibility crisis and research misconduct that are evidently growing in the field will end with the entire field harboring only pseudoscience.

I know that open peer review journals will have a tough time for quite a few years, owing to the religious and aristocratic social structure of the scientific community. Right now the community is deep in a pit of journal-prestige illusion and CV building. The culture of evaluating research without reading it has created a trap for everyone. Anyone with career worries will not be able to do open science, for fear that "any attempt to do so will ruin my career; I must only try to publish in high-impact journals." This cowardice will prevent science from being science for a long time, but I am an optimist. Science will certainly become true science some day.

Salary and intellectual property: the Indian problem

When a team of researchers is employed by an agency such as an institute or a company, and they find something worth publishing or patenting, whose intellectual property is it?

By the norms followed the world over, it is the property of the agency by default. Other types of understanding are of course possible, but they need to be clarified in the employment contract or appointment letters. If not, the agency has the right to decide the norms by which credit and any other benefits arising from the intellectual property are shared and distributed among the individuals participating in the research. Generally this works well.

But there is an unusually complex situation in Indian academia. Quite often the fellowships or salaries of students, project assistants, laboratory assistants and other project-related staff are not paid for a long time, ranging from a few months to sometimes 2-3 years. The causes lie equally in the inefficient and careless handling of funding and the related paperwork by the funding agency and the research organization. Interestingly, the salaries of faculty, administration and other permanent personnel are almost never delayed. Often the funding agency and the handling machinery are the same, yet they are careful about permanently employed people and utterly careless about students and project staff. The reasons, of course, lie in nuisance value. Permanent government employees have a history and tradition of strong associations that have fought for their rights and pay scales. Students and short-term project staff are the most unorganized sector of workers and therefore get helplessly exploited in a number of ways, and the higher-ups in academia do not care to improve the situation.

However, the intellectual property angle of this problem remains unappreciated. Institutes hold the rights to intellectual property because they pay their employees regularly. But if that is not happening, the standard IPR norm collapses. Individuals who work without the promised pay can claim the first right over all the intellectual property generated during the period of their work. The institute cannot claim it because it has not paid. Even if there is an employment contract of some kind, failure to pay salary on time is a breach of that contract. If an agency is responsible for the breach of a contract, its right to claim any other benefits from the contract comes into question. The faculty and PIs of the project receive their salaries regularly, so they cannot claim the IPR either. Under such circumstances, the unpaid workers have the first right over the IPR. The institute cannot publish or patent the work, or even include it in any of its official reports, without the written permission of the unpaid workers. Research is a complex task, and it is difficult to decide who contributed what. Often the persons actually doing the work know the subtleties of an experiment much better than their guides, happily occupying their easy chairs. Therefore the norms of IPR sharing need to be decided in advance. Violating these norms could potentially lead to unprecedented legal complications.

The simplest way to avoid such complications is to pay all research personnel their dues on time. The reason this does not happen is the lack of motivation to set the system right; there is no other reason the funding system cannot correct itself. But as long as the institute loses nothing by being careless, nobody will try to remove the system's flaws. Therefore it is necessary that there be at least a few cases where the unpaid researchers claim rights over the intellectual property and prevent the institute from using any of the research output.

The reason this has not happened so far is twofold. One is that this interpretation of IPR norms is not known to anyone, although logically it is quite straightforward. Even more important, the sufferers do not have the muscle to challenge the system. They are in an insecure position and fear that the higher-ups control all their career prospects. Academia is completely aristocratic in multiple ways, so most sufferers will not dare take such a bluntly logical stand. But it is not impossible. I can imagine someone who has to leave when a project ends, with no hope of continuing. A person in such a position has no risk of losing anything anymore, and can block the entire output of the project he/she was working on and prevent the institute from using it in any way. Institutions and funding agencies need to be aware of this interpretation of IPR law and correct the system in time, without excuses. If that doesn't happen, junior research personnel should come out and act. If not in a court of law, then in the public domain. Let people be aware of the reality. Let the common man start asking Directors, Chairs, Heads and PIs, "How many unpaid researchers work under you?" A whip from the common man can be the ultimate motivator.

Advantages of open peer review

Recently I had an experience with an open peer review journal that is certainly worth sharing.

As a student of behavior, I have been thinking about the behavior of the different players in the science publishing system and whether we can design a behaviorally sound system that would minimize biases, misconduct and irresponsible behavior in peer review. I gave a talk on this in December 2017 and published a preprint article in 2019. Interestingly, in 2019 itself an open peer review journal called Qeios (pronounced like "chaos") was started on very similar principles (though not quite the same; some crucial components of my system are missing). Qeios started as a preprint repository but also has a peer-reviewed publication system with no dichotomous editorial decision involved. An AI system searches for and invites reviewers. Reviewers are informed that their reviews will be public. Authors can respond to them and improve their paper if needed, but all these steps and stages are completely transparent.

Any new journal has initial problems, which this journal will also have for quite some time. Because of the impact-factor illusion and the sheep mentality of researchers, a new journal is unlikely to get high-quality papers in its initial years, which, as expected, seems to be true for Qeios. When I received a review request, I didn't expect a high-quality paper. What I received was not bad: it had some thought-provoking ideas, but the work was not rigorous enough, being somewhat aimless and not contributing any new and meaningful insights. At the same time it was OK in terms of the model developed and some of the data and discussion.

Although I have been advocating open peer review for many years, while preparing myself for an open peer review I realized something I had failed to appreciate before. For a conventional journal I would have recommended rejection. I realized that this would have been a dumb and unproductive end. The paper had certainly triggered some thinking in me. I thought of some new questions, some ideas, some tricky issues for the first time. It was not enough to make a full paper in itself, but it was worth something. If I expressed my doubts and my half-baked ideas, they might stimulate someone else to think. They may have solutions I couldn't think of, or they might simply add more questions, or even point out that I was wrong. All this is a valuable process, not a product. But so far we have only been publishing products and hiding the process. That is only half of science. Open peer review journals can bring out at least part of the thinking process, and much can be learnt from it. Maybe at a later stage I will return to my own arguments and develop them further. Maybe someone else will. If I had recommended rejection, nothing of this sort would have happened; I too would have forgotten the issue in no time. Publishing the thinking process enriched me substantially. Of course not everyone will do this. A lot of junk would get published. It is true that some people would write only goody-goody reviews and, since they are published, add them to their CVs. This is happening. That garbage might be inevitable, but it is not a sufficient reason to block the valuable open thinking process. Here is the link to what I wrote as a reviewer (https://www.qeios.com/read/S390H3). The author responded to my comments, not with much rigor, but that's OK. Everyone has limitations. If the work is in the open domain, someone else can compensate for these limitations sometime and take the concept to a meaningful level.
I hope people understand and realize the strengths of open peer review and increasingly adopt this practice. The initial problem of quality would eventually vanish.