Science and politics: The thin line

My differentiation criterion between science and politics is simple and clear. What is right and what is wrong is a scientific debate. Who is right and who is wrong is a political one.

Democracy is the leading norm in politics today, although it is still far from universal. The structure of politics in a democracy is such that although there can be public debates on various issues, and two or more possible stands taken on any issue, what we ultimately vote for is not any particular stand but an individual leader or a political party. This is inevitable because there are multiple issues and only one election. The election is between parties and leaders, not between alternative policies on every issue. It is possible that you agree with a given leader’s stand on certain issues but not on others. Then you need to prioritize, give more importance to certain issues and decide your vote accordingly. Here, we can say that it is because of the structure of the democratic process that who is right takes the upper hand over what is right.

By the principles of science, all debates should be about what is right and what is wrong. They are supposed to be driven by logic, mathematics, evidence, experiments, data and so on. But that happens surprisingly rarely. What we see in reality is that any debate quickly takes the form of who is right and who is wrong. It becomes my hypothesis versus yours. Science is not formally driven by voting, but at some level it takes the form of voting. The community ultimately accepts, rejects or ignores a given hypothesis, possibility, opinion, interpretation and the like. This form of voting is worse than political voting, because in a democratic process every vote has the same weight. In science the elite vote is orders of magnitude heavier than a student’s or a young researcher’s vote. Often there are experiments, opinions or interpretations that could be logically correct and supported by evidence, but that have only a minority of followers. Such pieces, even if published, are often ignored by the community rather than debated. Of late, even cancel culture seems to have entered science, so a minority stand is not allowed to be expressed, or is quickly labeled ‘anti-science’ or ‘misinformation’. It is almost impossible to publish a non-conformist hypothesis, experimental result or data analysis in a peer reviewed journal. And the ways of reaching people directly through social media are also closed, owing to the increased pressure on social media platforms to censor ‘misinformation’.

Against this background, Dr. Anthony Fauci saying, “Attack on me is attack on science” is a 100% political statement. Given that he was responding to political attacks, he might be fair in saying this. But it demonstrates how quickly science becomes politics. Fauci’s is a dangerous statement because the line between science and politics is very thin. Having a scientific opinion against the mainstream Covid preaching is not an attack on science. Disagreeing with the mainstream is never an attack on science. In fact, cross-questioning is an essential exercise in science. Without alternative thinking, without competing hypotheses and without challenging mainstream beliefs, there remains only pseudoscience. During the Covid pandemic we have very frequently seen the tendency to suppress, ridicule and cancel non-mainstream opinions. Health authorities and people of mainstream science made public statements and assertions without adequate evidence. The clean chit given against the lab leak hypothesis, the insistence on social distancing and the use of masks, and the imposed lockdowns were all without sufficient data support. In an emergency, you may not be able to wait for evidence; you may have to start doing something on a belief. This is fair, but it shouldn’t have been projected as the word of ‘science’. As soon as data accumulate, all the hypotheses and the measures based on them need to be reexamined and policies changed accordingly. It is likely that what seems to work over the short term becomes counterproductive in the long run. So even tested and published results need to be rechecked in the altered context. But such possibilities are not even being discussed.

By human nature, once you advocate something, it gets associated with your ego. A challenge to a policy becomes a personal challenge to you, and then it is not easy to be open to any change. This is in no way restricted to Covid-related issues; it has been common throughout the history of science. It became much more serious after peer review became the mandate in the 1970s, because peer review became a sanctioned tool to reject inconvenient evidence or interpretations, and impact factors became a convenient tool to ignore them even after publication.

I believe that the cause of the problem lies in evolved human nature. Human reasoning has evolved to judge humans and to take sides, not to make unbiased judgments on issues. The question of who is right and who is wrong is central to the evolved human mind. What is right may be the appropriate scientific question, but in no time we slip into the ‘who’ question without even being aware of having done so.

In politics, where there are several issues, you may agree with someone on some of the issues but not on others. Potentially this can be a very complex process. But in reality, it turns out to be surprisingly simple. Very few people appear to be confused about whom to vote for because they agree on certain issues and disagree on others. For most people, the decision comes quickly and clearly. This is because when you like a leader or a party, you tend to agree with most of their stands. This too seems to be driven by basic human nature. Often the opinion about a person is not formed based on issues; the opinions about issues are formed based on the person.

This is the reason why the pursuit of science without bringing in politics is so difficult. It may not be impossible, but it doesn’t come naturally. You need continued conscious effort to see that you do not slip into personal judgments, lobbies, individual cost-benefits or power games in supporting or opposing a stand. The tendency to selectively ignore or cherry-pick evidence also comes from making it my hypothesis versus yours. There might be a positive side to it: it is possible that mine-versus-yours adds more spirit and interest to the debate, but I think the negative side far outweighs the possible benefit. The spirit of focusing on ‘what is right’ and weeding out ‘who is right’ should be a part of basic education and training in science. Training in science today is almost entirely training in the tools of science. We rarely talk about the methods of science, the appropriate mindset for science. Scientific studies on the process of research have been very primitive and limited to a few mundane questions. I think possible solutions to the problem of frequently slipping into politics may lie in the undergraduate classes.

Covid 19: Kumbh, Election rallies and the second wave surge

This virus does not seem to stop giving surprises. So far many predictions from people of science, including some of mine, have failed. I did not think, for example, that the second wave would be so sudden and large, although the CFR has kept dropping as I predicted. Even amidst the oxygen crisis and other problems with the patient care system, the CFR in this wave is much smaller than in the last wave. But the total deaths are much greater than expected, simply because the transmission has been just too rapid.
But what caused this rapid transmission? Everyone seems to blame the election rallies, the madness called Kumbh and, elsewhere, the normalization of life, traffic, transport and travel. While I feel normalizing life is inevitable and must happen, I won’t count a religious gathering of that scale or election rallies as a necessary part of normal life. If examinations were postponed, why not elections? If elections were truly necessary, why not restrict campaigns to TV, radio, mobile and other media? These have reached every corner of India. It was too obvious a conclusion for everyone that crowding during the election rallies and the Kumbh had caused the terrific surge.
However, as a science teacher, I am not content with simple-looking logic, opinions and consensus. I wanted to look at data. Particularly when I see so many people convinced about something without citing actual data, I become restless and want to see the data myself. So I tried to assess how much the Kumbh and the election rallies contributed to the second wave.
The surprising results are here. The peak of the Kumbh crowd was on 14th April and a couple of adjoining days. If there was large-scale viral transmission during this crowding, people would have dispersed back to their homes throughout the country; after a few days of incubation they would fall sick, and eventually they would spread the virus further around them. We would then see a nationwide rise over and above the background rate of spread. So the slope of the curve should increase significantly a week or two after the Kumbh. But we just don’t see any trace of this. The slope does not increase; in fact it decreases a bit.

At this time, it is likely that testing capacity was also saturating, so not everyone got tested and the case counts could be underestimates. When testing capacity becomes limiting, individuals with symptoms are given preference, so a large number of asymptomatic and mild cases escape detection. This is reflected in an increase in the test positivity rate (TPR). There have been various estimates of the proportion of asymptomatic cases; I took the largest estimate, of 80% of positives being asymptomatic. Therefore, behind every rise in TPR there should be a four-fold greater rise in undetected cases. The data can be corrected for this possibility. Even after making this correction, we do not see any increase in the slope following the Kumbh and a reasonable time lag. This means that the Kumbh had a negligible effect on incidence on a nationwide scale. One can always argue that if the Kumbh hadn’t happened, the curve would have come down more quickly. This is an untestable statement; you can’t give evidence either way. We can say, at the minimum, that the Kumbh does not seem to have increased the pre-existing rate of transmission.
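The idea of such a correction is easiest to show in code. Since the exact formula is not spelled out above, the sketch below is one plausible reading of the four-fold rule; the function name, the baseline TPR and the factor k are my illustrative assumptions, not the method actually used in the analysis.

```python
import numpy as np

def correct_for_saturation(reported, tpr, baseline_tpr, k=4):
    """Inflate reported cases when test positivity rises above its baseline.

    One plausible reading of the rule above: every relative rise in TPR over
    its baseline hides a k-fold (default 4, i.e. 80% of positives being
    asymptomatic) greater rise in undetected cases. Illustrative only.
    """
    reported = np.asarray(reported, dtype=float)
    tpr = np.asarray(tpr, dtype=float)
    relative_rise = np.maximum(0.0, (tpr - baseline_tpr) / baseline_tpr)
    return reported * (1.0 + k * relative_rise)

# Toy check: TPR doubling from 0.05 to 0.10 inflates the count five-fold.
print(correct_for_saturation([10000, 20000], [0.05, 0.10], baseline_tpr=0.05))
```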

Figure 1: The time trend in the number of new cases in India before and after the Kumbh event (blue arrow). The slope shows none of the increase that would be expected if the event had driven transmission.

The case of the election rallies is a little different: they roughly coincide with the beginning of the second wave. But if you allow for the lag due to the incubation period, the wave would have begun prior to the election campaigns, though only marginally, which we can ignore. Here, though, we have another means of comparison. Only five states had elections, so we can directly compare transmission between states with and without elections. That is what you see in the second graph. The states with elections fall within the range of states without elections; their transmission rates are in no way greater.

Figure 2: The rise in daily new cases in states with elections (solid lines) and without elections (dotted lines). The states holding elections did not show a transmission rate above the normal range at any time. Since the starting incidence differed widely across states, all curves are normalized by the number of cases each state had on 1st Feb, so that their further progression becomes comparable.
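The normalization in the caption is simple to reproduce. A minimal pandas sketch, assuming a date-indexed table with one column per state (the layout and names are mine, not those of the actual dataset):

```python
import pandas as pd

def normalize_to_baseline(daily_cases: pd.DataFrame,
                          baseline_date: str = "2021-02-01") -> pd.DataFrame:
    """Divide each state's curve by its own value on the baseline date.

    States with widely different starting incidence are thereby put on a
    common scale, so their further progression can be compared directly.
    """
    return daily_cases.div(daily_cases.loc[baseline_date])
```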

So I won’t blame the Kumbh and the elections solely for the wildfire of transmission. Their role in the data appears negligible, even after correcting for biases. So we have to look for alternative causes.

Perhaps this is not so surprising, because I showed in a previous blog (https://milindwatve.in/2021/02/21/covid-19-did-lockdown-work/) that during the downward phase of the wave, even the Bihar elections and the farmers’ agitation did not affect the downward trend in any way. These patterns, combined with our prior finding (https://www.preprints.org/manuscript/202104.0286/v1 ) that globally, preventive restrictions have only marginally affected the slopes of the curves, provoke a serious rethinking of all that we were told about transmission. It is also being realized that longer-distance airborne spread of this virus is much more common than was believed earlier, and that the chances of spread outdoors are considerably smaller than indoors (https://www.nytimes.com/2021/05/07/opinion/coronavirus-airborne-transmission.html?fbclid=IwAR0sld85XEG1v_m-Pp-gVby6qvjBMa0Fst0ETxPS7448aNDb1fdXjwHoqeg ). All the crowding incidents we are talking about here were under the open sky.
When there is a conflict between our beliefs and data, I prefer to go by data, keeping margins for its biases and inadequacies. Not everyone does. They believe in data when it supports their beliefs, and say the data are not good when it doesn’t.

But the pandemic hasn’t ended. More surprises might be waiting for us and we should be prepared to see more data, test more, interpret more, learn and unlearn more. The virus, so far, seems to have evaded science in a big way. It’s better we take every definitive statement from health authorities with a pinch of salt.

Peer review quality, acceptance and rejection

Two papers from my group got published this very week, and I am amazed at the range of peer review quality experienced. I had written earlier about the experience with Current Science. The same manuscript, with marginal refinement, was accepted by PeerJ and got published this week (https://peerj.com/articles/11150.pdf). Although acceptance makes authors happy, the peer review quality was as disappointing as the Current Science experience. Two good things about this journal are that they make the peer review public and they ask for the authors’ feedback. I wrote in my feedback that although the paper was accepted, the peer review quality was disappointing. But our earlier experience with this journal was good: a paper published earlier in it had received a critical and balanced review. Large variance in the peer review quality of the same journal is just too common.

The other experience was diametrically opposite. Our work with farmers near Tadoba got published yesterday in Conservation Biology, a leading journal in this field (http://doi.org/10.1111/cobi.13746). This piece of work I count among the top five of my lifetime. The peer review of this manuscript was one of the most rigorous I have seen in my life. The manuscript was difficult to review since it involved game theory, agriculture, wildlife, human behaviour and social science. In addition, we had done some non-conventional things. We did not pre-plan the methods, so obviously we could not obtain prior ethics approval from the institutional committees; ethics was addressed by field workers and farmers from time to time. We allowed the methods to evolve, and the farmer participants contributed their thinking to the evolving methods. The farmers also interpreted the results their own way, and we included their thinking in interpreting the results. So the farmers were midway between subjects of the research and contributors to it. Most important, none of us had any formal training in social science research; we did it using only common sense and the need of the time as felt in the field. I could perceive many potential problems of violating research norms. But we kept everything transparent and did not pretend or hide anything.

Three reviewers responded, each coming from a different field, so the editors had carefully taken care of the multidisciplinary nature of the work. All three appreciated the central idea, as well as our contextually flexible ways, as novel and relevant, but at the same time raised a number of issues about the details. To address all of them, three of the authors had to work hard for days on end. This was the most rigorous revision of my life, but everything being of high intellectual quality, there was a deep satisfaction in it. In places they pointed out weaknesses in the work. We responded admitting that this side of the work was weak, but that it remained weak for such and such reasons, which they seemed to accept. They had also looked at possibilities beyond this paper, and in responding to those, the path for taking the work forward was almost worked out. I want to write about this experience separately and in substantial detail, so I am reserving the details for now. Let me just say here that I am deeply satisfied with this review quality, although the revising involved a hell of a lot of hard work.

At a later stage we faced a problem with this paper. The journal specifies many norms of writing style, one of which is that all results are to be written in the past tense. We had modelling results along with empirical work. There was no problem in reporting the empirical work, but model results are funny: simulations and parameter-specific results can be expressed well in the past tense, while generalizations and predictions lose their meaning in it. The language editor helped us substantially in improving the grammar, reducing the word count without compromising content, and so on. But modelling results in the past tense posed a problem. “Two plus two is four” has a meaning that is not captured by “two plus two was four”. The editor obviously having a superior command of English grammar, we did not argue much but made suggestions. Ultimately this section became a hybrid of past and present. I suspect some meaning might have been compromised, but I hope the reader is not at a complete loss. Barring this, the experience was amazingly good, among the best of my lifetime.

Six years ago the same journal had rejected another paper of ours. The experience that time was that two of the three reviewers were critical but positive; the third was not critical about specific issues, but perhaps our findings were not convenient for his stand and his beliefs. We could satisfy two reviewers but not the third, and the editor ultimately rejected our paper. That time too, the quality of the reviews was mostly good. Overall, I feel reviews are like random samples. At times you get bad-quality reviews even in good journals, at times good-quality rejections, and at times bad-quality acceptances, but only rarely challenging, thoughtful, balanced and rigorous reviews. I feel these rare events are the ones that maintain the quality of science.

Covid 19 and the “must do something” phenomenon

The last blog I wrote was titled “Did lockdowns work?”. That was more of an impressionist picture based on patterns in Indian data. But over the last few weeks, we analyzed global data for evidence of the effects of preventive restrictions (PRs) of all kinds imposed in all countries. Surprisingly, or not so surprisingly, we found that most of the restrictions worldwide failed to arrest transmission of the virus. The most ambitious objective of a lockdown is to “break the cycle”. A true break in the cycle is expected to arrest the spread completely if the lockdown works for a period slightly longer than one infection cycle. This goal was rarely, if ever, achieved. A less ambitious but useful goal would be to reduce the rate of spread of the infection. Testing this is rather tricky because the rate can change spontaneously even when no preventive measure is applied. Therefore it is necessary to separate spontaneous changes in slope from PR-induced changes in slope.

This problem is like the son-daughter problem. Whether the sex ratio at birth differs from 1:1 cannot be inferred from one or a few families; a given couple can have three consecutive daughters by chance alone. We need a large population to reach a conclusion. Similarly, in this analysis, whether a particular country succeeded in reducing the rate by imposing a lockdown cannot be ascertained, because that could be mere chance as well. But the overall success rate can be estimated with confidence. Using this approach, we estimated the success rate of PRs in reducing the transmission of the virus. It turned out that only 4.5% of the total PRs were successful in reducing transmission significantly. In a large number of cases the transmission actually increased after a restriction was imposed, and quite a number of times it decreased after a restriction was lifted or relaxed. This means that factors other than restrictions were stronger than the effects of the restrictions. The imposed restrictions could explain only 6.1% of the total ups and downs of the epidemic curves.
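The population-level logic can be sketched in a few lines. This is my own minimal version, with arbitrary window lengths and plain straight-line fits; the actual method is described in the preprint linked below.

```python
import numpy as np

def slope_change(log_cases, event_day, window=14):
    """Change in the slope of log(cases) across an intervention day.

    Fits straight lines to log-transformed daily counts in a window before
    and a window after the event; a negative return value means transmission
    slowed after the event. Windows and fits here are illustrative choices.
    """
    t = np.arange(len(log_cases))
    before = slice(max(0, event_day - window), event_day)
    after = slice(event_day, min(len(log_cases), event_day + window))
    slope_before = np.polyfit(t[before], log_cases[before], 1)[0]
    slope_after = np.polyfit(t[after], log_cases[after], 1)[0]
    return slope_after - slope_before

# Pooling the sign of slope_change over all PR events in all countries gives
# the overall success rate, just as pooling many families reveals the sex ratio.
```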

This paper is now available as a preprint: https://www.preprints.org/manuscript/202104.0286/v1

Anyone interested in the technical details of the analysis can refer to it. Since our inferences are most likely to be viewed as politically incorrect, I don’t know how the peer review will go, or how long it will take. But the data are in the public domain and the analysis is transparent, so anyone can form an opinion. I am just showing two poor correlations here. One is between the stringency of a restriction and the expected change in transmission rate, which is not significant despite a very high sample size. The other is between the change in stringency (i.e. either imposing or relaxing restrictions) and the change in transmission rate. This is statistically significant, but the strength of the relationship is very poor.
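The second of those correlations is a one-liner to compute. A sketch, with argument names of my own (one value per PR event):

```python
from scipy import stats

def stringency_vs_transmission(delta_stringency, delta_slope):
    """Pearson correlation between change in stringency and change in slope.

    r**2 is the share of the ups and downs of the epidemic curves that
    changes in restrictions can explain; a significant but tiny r is exactly
    the 'significant but very weak' pattern described above.
    """
    r, p = stats.pearsonr(delta_stringency, delta_slope)
    return r, r * r, p
```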

The inefficiency of lockdowns is not surprising. An epidemic is a complex system, and simple measures may fail to work. What is surprising is that people are made to believe that this is what is going to work. If the infection spreads, it must be because you were irresponsible, you did not take care. If in some area the cases went down, we say it was well managed. Then what about countries where there was excellent control in one phase and an uncontrolled surge in another? Saying that people exercised restraint in one phase and did not in another is a circular statement, unless there is an independent and well quantified measure of that restraint.

To me what is more important is the psychology behind this. Medicine, public health and political administration alike do not like to say that we can’t do anything. There is a need to pretend that we are doing the right thing and doing our best. A lockdown is an ideal demonstration that we did something. Whether it was effective or not is immaterial.

There are examples in medicine other than lockdowns demonstrating the “must do something” phenomenon. Remdesivir, hydroxychloroquine and convalescent plasma completely failed in clinical trials. The trials were conducted by reputed organizations and published in flagship journals. But in spite of completely failing in clinical trials, remdesivir is sold as the leading drug everywhere. In my own city, the stocks are nearly exhausted, people are desperate to get it, and it is being sold at a high price. This is because, as long as there is no effective antiviral drug, we need to pretend that there is something that works and show that we are doing our best. Control of blood sugar to arrest diabetic complications (in type 2 diabetes), blood pressure control to avoid stroke, and cholesterol lowering to avoid heart attack have all performed poorly in randomized clinical trials, yet these are the most widely sold drugs.

The “must do something” phenomenon is not restricted to medicine. We see it in many contexts, including state administration, crime control, business crises, parental behavior, child behavior and so on. It must be giving a social advantage to the individuals or agencies in control. This advantage predominates and overrides the actual concern. The concern, and the very criteria of success, are then shifted: rather than avoiding complications or deaths, reducing sugar or cholesterol itself becomes the measure of success. This is an interesting phenomenon, and I am sure the people doing this do it honestly and often with good intentions. They do not want to be aware that what they do does not work in reality, because working in reality is no longer the concern. ‘I did something’ is the feeling that gives satisfaction. This is understandable as a social phenomenon, but my worry is that all of it is being sold under the name of science. People are never made aware that the “something” did not work, or worked very poorly, hasn’t worked so far, and that if it works in your case it might be nothing but chance. I wish that people of science, at least, would be aware of this and avoid the trap.

Science, medicine and Brahminism

Let me clarify at the outset that the word Brahminism here has nothing to do with being a Brahmin by caste or birth. Brahminism means a particular community holding a monopoly over knowledge and keeping the rest of society deprived of it. For some time in history, Brahmins did this. But among those who later threw off this monopoly and worked to open the Ganges of knowledge to common people, there were Brahmins too. So today, Brahmins and Brahminism are no longer linked the way they once were. That knowledge must be open to all is the mantra of today’s age of science. Yet Brahminism keeps entering the field of science through the back door. Common people, and scientists too, need to stay vigilant and keep it out.

There is a different kind of Brahminism in the medical field. Today the common man reads many things on the net. He questions what the doctor tells him. He no longer places the unquestioning trust in doctors that he once did. A patient’s trust does, to some extent, help in getting better. But in the age of the right to information, this is going to become increasingly difficult. It is true that wrong or half-baked information can add to confusion. But the remedy that people should not read at all, or should never raise doubts, will not work in the changing times. Instead, people in each field should take care that all information, all research and all new developments reach people transparently and in simple language.

Many people insist that only those things in science that are beyond dispute should be presented to the common people. But in the information age this will only amount to obstinacy, because knowing a thing as it actually stands is becoming a fundamental right, and that is both right and inevitable. For example, on questions such as what exactly the relationship between salt intake and blood pressure is, whether eating eggs raises blood cholesterol, whether lowering cholesterol prevents heart disease, whether lowering blood pressure with drugs really reduces the harmful effects of high blood pressure, and whether controlling blood sugar with drugs prevents all the complications of diabetes, no final conclusions that pass the tests of science have been reached. In a field where this is the situation, claiming the sanction of science, hiding the disagreements among researchers, and claiming that a treatment has been proven effective when it has not, is an outright deception of the people.

Not everything in our lives is based on science, and applying the tests of science to every single thing may not always be practical. In such cases we make decisions based on the experience of people seasoned in that field, on established practice, or sometimes on sheer guesswork. There is nothing wrong in doing so. But it is not right to present such decisions wearing the mask of science. The common man should have the right to know where exactly the science is in all this, how much of it there is, and where it ends. Whether to use that right is for each person to decide. Many will not use it and will go by trust alone, and that is fine. Only, presenting things that have not passed the tests of science as if they were scientific must be considered unethical.

Such covert unethicality is not at all rare in the medical field. For example, every drug and every treatment has its limits, and systematically conducted clinical trials have made these limits clear. But this information simply does not reach the patients. For instance, in a trial called ADVANCE, conducted on about 11,000 diabetics, one group was kept under very strict glucose control while the other group had loose control. After five years, 20% of the people in the loosely controlled group showed diabetic complications of one form or another; in the strictly controlled group, 18.1% did. That is the entire benefit of strict control. Put differently, it takes about 250 person-years of treatment to prevent one diabetic complication. That is, if 10 diabetics each kept their sugar strictly under control for 25 years, only one complication in only one of them would be prevented. And there is no guarantee that, while doing so, some quite different adverse effect would not appear as a side effect. That is all the benefit sugar control has, and many clinical trials have made the limits of this treatment clear.
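The person-years figure follows from straightforward arithmetic on the two percentages; my rough reconstruction:

```latex
\text{ARR} = 20\% - 18.1\% = 1.9\% \text{ over 5 years}, \qquad
\text{NNT} = \frac{1}{0.019} \approx 53 \text{ patients} \times 5 \text{ years}
\approx 263 \text{ person-years per complication prevented}
```

which rounds to the roughly 250 person-years quoted above.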

A research paper published this very month from Oxford University suggests that lowering high blood pressure with drugs actually harms the brain and increases the chances of dementia. Researchers have shown earlier, too, that when the blood supply to the brain falls short, the brain tries to restore it by raising blood pressure; forcibly lowering blood pressure at such a time only harms the brain. This point may well be debatable. But taking the stand that the patient must not be allowed to know that such a debate exists is Brahminism. Many of the drugs given in the treatment of Covid have also proved ineffective in clinical trials, yet they are being prescribed freely and sold at exorbitant prices, because what the clinical trials showed is not allowed to reach everyone.

When the limits of a treatment become clear, two kinds of stands can be taken. One is that the treatment should be given even if it shows only a possible and very small benefit. The other is that, weighing the limited benefit on one side against the cost, the inconvenience and the possibility of side effects on the other, it would be right to decline the treatment. Neither of these stands can be called wrong in principle. But the patient should have the right to decide which of the two to take. Withholding from the patient the information needed to exercise that right is Brahminism. The real problem today is that the benefit of all the treatments given for things like diabetes, high blood pressure and elevated cholesterol is extremely limited, and most of those receiving these treatments simply do not know how limited it is. Many in the medical field feel that this information should not reach the common man. That is Brahminism, and it needs to be eliminated completely. In short, transparency in research, and researchers themselves writing in simple language for the common man, are essential; but in addition, systems like the Food and Drug Administration must make it mandatory that the limits of every drug be printed on the drug’s packaging itself. Medical professionals and consumer organizations should insist on this. The interest of tomorrow’s patients, and of the medical profession itself, lies in such transparency, not in covert Brahminism or in the spread of misleading half-truths.

Covid 19: Did lockdown work?

Science progresses by making and testing hypotheses. Often, principles, practices or policies that look sound at one stage get falsified at a later stage when examined with more data. Rethinking and re-examining well accepted norms is an important part of science, failing which science becomes a religion. In March 2020, when the threat of the pandemic was suddenly realized, it was inevitable that a set of measures to arrest transmission would be suggested and implemented, and the things that looked logical at the time were. As time passed, data started pouring in, and in the light of the data it was necessary to re-examine what was working and what was not. It was stupid to expect that the same things would work all over the world, given the wide variety of contexts and conditions. So, as data accumulated, the policies should have been re-examined, refined to suit local contexts, and withdrawn if found useless. This process should have started by May or June 2020 itself.

Although delayed substantially, the process of introspection appears to have begun now, at least in a small way. Two papers appearing last week, in NEJM and BMJ, reexamined school closure as a measure to arrest the epidemic. The Swedish study, of schools that did not shut down, shows no evidence of higher rates of illness among students and teachers in schools that remained open. The other, a Belgian study, concludes that the negative impact of school closure on kids appears to outweigh the weak evidence that closing the schools might actually have reduced transmission. We need more such data-based introspections and re-appraisals of every policy. Such analysis might help us at least the next time a new respiratory virus appears.

Below, I examine the possible impact on the spread of infection of imposing lockdowns, of lifting them, and of undue crowding without the necessary precautions. We have had ample opportunities to examine the actual impact of crowding without following the prescribed norms. In October and early November there were elections in Bihar, accompanied by large public meetings, rallies and celebrations by the winning candidates. In the following month there were elections in Jammu and Kashmir, which in some areas saw record voter turnout. From late November, the farmers’ agitation started on an unprecedented scale, with huge processions and gatherings, and has continued for three months now.

An epidemic follows a typical curve when plotted daily or cumulatively. If lockdowns really worked, we should see at least a temporary downward shift in the slope, especially on a log scale. Conversely, whenever restrictions were lifted or there was large-scale crowding, there should be a local upward movement of the curve. Here are the time trends, plotted along with the dates on which restrictions were imposed or lifted and on which election campaigns, rallies, big meetings or celebrations were held. The upper curves are cumulative incidence on a log scale; the lower curves are daily new cases (7-day running average) on a linear scale. All data are taken from https://www.covid19india.org/
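For readers who want to redraw such curves from the covid19india.org data, a minimal matplotlib sketch follows; the column name and the function are mine, not part of the site’s API:

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_trends(df: pd.DataFrame, event_dates) -> plt.Figure:
    """Upper panel: cumulative cases on a log scale (slope = growth rate).
    Lower panel: daily new cases, 7-day running average, linear scale.
    Dashed vertical lines mark lockdowns, unlocks and crowding events.
    Assumes a date-indexed frame with a 'new_cases' column (my convention)."""
    fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
    top.plot(df.index, df["new_cases"].cumsum())
    top.set_yscale("log")
    top.set_ylabel("cumulative cases (log)")
    bottom.plot(df.index, df["new_cases"].rolling(7).mean())
    bottom.set_ylabel("daily new cases (7-day avg)")
    for d in event_dates:
        for ax in (top, bottom):
            ax.axvline(pd.Timestamp(d), linestyle="--", linewidth=0.8)
    return fig
```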

The nationwide trend: See that the lockdown did not decrease the slope, and the unlock or crowding events did not increase it. The peak in the daily incidence trend, and the downward trend that followed, appear to be independent of all these events.

The Kisan Andolan should theoretically have been the ideal occasion for spreading the infection. But there appears to be no increase in the incidence in Delhi, nor in Punjab and Haryana (below).

Overall, the shape of the cumulative trend appears almost unaffected by any of these events when examined at the national or state level. The daily trend has many ups and downs, but they show no association with the events of crowding and norm violation, even after allowing a realistic time lag. So in the regional and national time trends there is no evidence either of the lockdown having improved the situation or of crowding and norm violation having worsened it.

An epidemic is a complex phenomenon, and owing to that very complexity it is no surprise that some simple-minded solution might have failed to work. If some policy did not work as expected, it should not be taken to mean ignorance, stupidity, failure or lack of foresight on the part of the policy maker. We just need to accept that the measure did not work and plan alternative measures that are likely to work better. But the inability to accept that something failed, at least in some specific context, is the sign of being dogmatic. If lockdowns have not worked in the Indian context so far, it makes no sense to impose them again when faced with local surges. Saying that they worked in New Zealand has no relevance. Inability to learn from data is a sign of not understanding science and of not being ready to use it appropriately. The top Indian science organizations and health authorities need to give at least some proof that they too learn from data and rethink, re-examine and refine policies in response to observed data. Otherwise there will be no difference between the approach to building a Ram temple and the approach to curbing the epidemic. One is clearly a religious issue, but the other is expected to be scientific. So let the latter be evidence based rather than faith based.

Logic versus number: What do scientists value more?

Science has a set of principles, and the attempt is to achieve as many of them as possible, although not all of them can be complied with in every study. For example, a famous Lord Kelvin quote says, “When you can measure what you are speaking about, and express it in numbers, you know something about it.” At times some important factors are not objectively measurable, and then one has to settle for a qualitative analysis, use some surrogate marker, and so on. This is an inevitable limitation at times, but I don’t consider it a major issue in the scientific method.

A trickier situation is when two principles are in direct conflict: if you try to achieve one, you need to compromise on the other. The interesting question is, what do we do when there is such a conflict? The question is not restricted to one researcher’s decision. In today’s scientific publishing system, the choice must also be acceptable to the reviewer; only then can you publish. How frequently can a conflict of principles arise? I don’t know, because this question wasn’t in my mind until recently. I now have at least one example of such a conflict, which opens up a subtle but important new question in the philosophy and methods of science.

Over the last few years I have been getting increasingly interested in how common people use innate statistical analysis to make inferences from their own sampling and observations. To some extent this question is considered by Bayesian statistics, but my question is much broader, and many of its aspects are not covered by Bayesian statistics. One such question, which we attempted to answer using a small experiment, is now uploaded as a preprint, though not yet published (doi: 10.20944/preprints202012.0200.v1).

Another thing I wonder about is what people do to make a dichotomous decision based upon their own sampling and observations. In formal statistics we have the concept of statistical significance, which is used to make a dichotomous inference by taking an arbitrary cut-off level of significance. The most commonly used significance level of 0.05 is arbitrary but well accepted. I am trying to observe at what level people make an inference. For example, when do I decide that this cap is lucky for me? How many times does a favorable event need to be associated with that cap before my mind considers the cap lucky? What I currently feel is that we keep this level variable, depending upon the cost of making a wrong inference versus the benefit of being correct. In this case, being particular about wearing a certain cap has a small cost, while being successful in an exam, say, is a big gain. So if I wrongly infer a significant association, or actually causation, I have little to lose; if the association or causation turns out to be true, I have a big gain. So the cut-off for making this association will be kept very liberal. This is how and why our mind keeps generating many minor superstitions. It is a very logical and clearly adaptive tendency of the mind.
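This cost-benefit logic can be written as a one-line threshold. If acting on the belief (wearing the cap) costs $C$ and brings benefit $B$ when the association is real, and $p$ is the mind’s subjective probability that it is real, then acting pays whenever (my formalization, not something from the preprint):

```latex
p \, B - C > 0 \quad\Longleftrightarrow\quad p > \frac{C}{B}
```

With a tiny $C$ and a large $B$, the threshold probability is minute, so a couple of coincidences suffice to cross it; with the costs reversed, the mind demands far stronger evidence.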

My real question begins here. It is most logical to keep the significance level variable, depending upon the costs and benefits of right versus wrong decisions. Then why do we have an almost universal significance cut-off of 0.05? In principle, statistical theory allows you to make the cut-off more stringent or more liberal depending upon the context. But hardly any researcher uses this facility; 0.05 has an almost religious sanctity. Why don’t we consider the significance level a function of the cost-benefits of an inference?

I don’t have a final answer, but I am tempted to believe the following. The costs and benefits are seldom objectively measurable; most often they are value-laden judgments, and two persons’ value judgments need not be identical. So it is difficult to arrive at a newly agreed alpha appropriate for every context. People might often agree that going wrong would be costly in a given context, but exactly how costly can only be a subjective judgment. They may not have any precise numbers to agree upon.

We tend to avoid this loss of objectivity by compromising on the logically sound concept that the significance level should be a function of the cost-benefit of the decision. The level 0.05 is in no way more sound than anyone’s cost-benefit or value judgment. But it is precise, it’s a number, it has precedent, and others, particularly your reviewers, are unlikely to object to it. So we prefer being precise over being logical. This is a clear case of conflict between two scientific principles, and almost universally we have preferred to be precise and numerical at the cost of being more logical. Are there more examples of such a conflict or trade-off between scientific principles? I feel we are quite likely to find more. I have absolutely no idea whether philosophers of science have elaborated on any such trade-offs.

At least on this issue, I feel the innate statistics of illiterate people may be fuzzier, but it is still more sound than the scholarly statistics found in the big textbooks and expensive statistical software. But they are illiterate after all, and are not supposed to know the scientific method!! To me this is one more example demonstrating that science has much to learn from illiterate people. They are more logical in changing the significance level by a cost-benefit judgment. Mainstream science is more stupid for insisting on precision, on numbers and on consensus, which results in a religious significance cut-off.

Indian scientists not allowed to think novel – Current Science

Right at the outset I must say that this is not a criticism of Current Science. Current Science is an extremely valuable journal. Science publishing today is increasingly being monopolized by a few publishing giants. They charge readers heavily, and a big controversy over free access to knowledge is currently on fire. But what is even weirder is that they charge scientists heavily to publish their papers. I have been briefly in the fields of performing arts, theatre and literature in my life. In all these fields, every contribution is remunerated, in a small or big way. When I talk about an author having to pay, my non-scientist friends just can’t believe it!! This can’t be anything but a scandal, they feel. Science is the only field where a contributor has to pay for contributing to knowledge!! But the publishing giants have successfully made this the norm. Most interestingly, scientists (including myself) are the most stupid, helpless and gullible people, falling prey to this utter nonsense.

In this desert of stupidity, journals like Current Science are the last surviving oases. They charge neither the author nor the online reader. They are run by the academies of scientists using public money. This is a great service, and journals like this need to survive the global downfall of ethics in science publishing. Recently I had a weird experience with Current Science, one that reflects on the mindset of the mainstream community of Indian researchers more than on Current Science itself. As a common man and science lover, it is my duty to make this incident public.

To relate the story briefly: with two coauthors, one a college teacher and the other a first-year student, I communicated a paper to CS. In it we had used a simple mathematical expression that we thought works well as an index reflecting a pattern of our interest, and a number of statistical arguments were based on it. In due course the review response was received, with comments from only one reviewer. His main objection was that this ratio had not been used before. There was no precedent, and therefore we couldn’t use it; any inferences based on a new index were invalid, according to him. There was no other objection to the use of the ratio. He did not say anything about the ratio being inappropriate for the question addressed, or having some undesirable property that could lead to a bias, or anything of that sort. His only objection was that there was no precedent for using such an expression, so our entire argument was invalid!! Then there were a few other comments, which we thought we could reply to or address with changes in the manuscript. In our reply, we added a supplement exploring the mathematical and statistical properties of the new index, ran simulations to show how the ratio behaves, and argued that it was appropriate for the purpose. It would have been a fair rejection if he had argued against the ratio with some logic, mathematics or statistics in support, or if he had found our simulations inadequate to prove our point. He could also have said: whenever you use a new expression, you need to be more careful; you need to do this, this and this before you bring it to a publishable level. That would have been a scientific and useful debate. Whether agreeing or disagreeing, I would certainly have respected it; I really enjoy such debates. But no! On seeing the revision (or perhaps without even seeing it) he said again that you cannot use an index that has no precedent. He also said that our paper contradicted some recently published papers (without citing any paper) and therefore our argument was flawed. No other reason was given for why he thought so.

In brief, you cannot talk about anything that established scientists have not said before, forget about contradicting them!! The manuscript was ultimately rejected based on the single reviewer’s recommendation. Acceptance or rejection is not the issue here at all. All researchers know that it is part of the game. It is not always logical, subjectivity in the decision is inevitable, and chance plays a great role. What matters is the basis on which a rejection is recommended. It simply means that introducing any new concept in science is the monopoly of a few elites. Lesser mortals like you are not allowed to talk about anything new.

I wrote to the handling editor, keeping the other editors in cc, asking whether by Current Science norms a lack of precedent and a contradiction of earlier publications were considered sufficient reasons to reject a paper. A lot of correspondence followed. Most of it was about the rejection itself, either justifying it or consoling us; but we had never challenged this particular rejection. The specific question, whether lack of precedent can be a valid reason for rejection, did not receive a clear answer. The entire editorial correspondence is available for interested readers here (https://drive.google.com/file/d/1w6N7OlklORe3nkpUcYgl5Fcf3RB6CeRi/view?usp=sharing). The reviewer’s comments, our replies (https://drive.google.com/file/d/1hBfAyrK8MDr0gcebX0WtE7xqFq-ReRCe/view?usp=sharing) and the original and revised manuscripts (https://drive.google.com/file/d/181wDJls8pGKsGp-B_f2gyXVzvhWXrvZ_/view?usp=sharing) are also available. Readers can make their own judgment.

In India, this is not the first time such a thing has been experienced. Whenever something really novel comes from India, people look at it with suspicion; if something comes from the West, they generally have no problem. The modal Indian science community still largely lives in the days of slavery. The colonial era hasn’t ended yet; the white man is still the master in the field of science. In India you may do some fill-in-the-blanks kind of work and add marginal novelty for name’s sake, but Indians are not supposed to pioneer anything entirely new, small or large. It will not be considered science unless there is a white-skin stamp on it. We recognize Indian scientists only by the honors they may get in the Western scientific world. There are many science academies in India whose fellows are the most renowned researchers. These academies publish several excellent journals, such as CS. But the academy fellows themselves are always looking to publish in Western journals with high impact factors. Publishing in an Indian journal is below their dignity, or only the last option if no Western journal accepts their papers. One who has not done a postdoc abroad is not even worth considering a scientist!!

About 20 years ago a senior scientist told me that he wanted to nominate me for the Bhatnagar Award. Being just a science teacher, I did not expect this; the Bhatnagar is not meant for science teachers. But to respect him, I provided the list of my papers and all the other information needed. At that time I had published in PNAS, The Lancet and American Naturalist, among others. But two of what I considered my best papers were published in CS, so in the list of my five most important papers I listed the CS papers with priority. A scientist who was on the Bhatnagar selection committee then told me years later that when the committee members saw Current Science papers in the best-paper list, my nomination was discarded within seconds. They didn’t even read anything further. For a paper to be called good, it has to be published in Nature, Science or Cell! How can an Indian journal paper be considered for a Bhatnagar? Fellows of the academies do not believe in their own journal! This is the level of self-esteem of Indian science. How can we expect path-breaking work to come from India? Whenever it actually does, it is entirely the greatness of that exceptional individual, without community support, or in fact in spite of the community.

On the other extreme are the fanatics of ancient Indian science. They are equally bad, if not worse, for the progress of Indian science. They think that all of Indian science happened thousands of years ago and that nothing is now left to be done. So either way, there is no support for novel ideas originating in India. If you are doing science in India and want a successful career, you should not seek too much novelty; other Indians do not allow it. Originality makes life harder. Live a simple life and be successful by being a follower; you are not allowed to be a pioneer in this country. What happened with the CS review was only an inevitable reflection of the mindset of mainstream scientists in India. Therefore I don’t see any particular editor or reviewer as being “wrong”. No individual in particular can be blamed, because it is a community characteristic influencing individual behaviour. Rejecting originality by an Indian is the norm in Indian science; only a handful might be exceptions. In such a community, what else can we expect?

Are clinical trials relevant?

I happened to read two things on the same day. One was a post on social media saying that Remdesivir is not effective. I don’t believe such posts immediately, so I looked for the original paper. It was a Lancet paper describing a multi-centre clinical trial of Remdesivir, which found no beneficial effect on Covid-19 patients. Within a few hours I came across a newspaper carrying a two-column news item, with a picture, about people demonstrating on the streets demanding that Remdesivir be made available. The background is that Remdesivir has been among the costliest drugs for the treatment of Covid. It is being sold on the black market for tens of thousands of rupees per dose. People want to buy it desperately, obviously because their physicians prescribed it.

I then looked for more published clinical trials of Remdesivir, to find that two more peer-reviewed publications had reported no effect, and two had reported some effect. It is not new that different clinical trials give somewhat different results. This is possible by chance alone, plus there are subtle differences in trial design, randomization protocols, patient groups, the locally prevalent genotype of the virus and so on. Such differences are common when the effects are marginal. In such cases, factors like conflicts of interest, conformity bias and publication bias suddenly become extremely important. For a drug that is truly effective, different trials may differ slightly in the magnitude of the effect, but all are unanimous about there being a significant effect; conflicts of interest and biases do not interfere much. Anything that is really effective gets the unanimous support of trials. Anything that does not have consistent support across studies has doubtful, at most marginal, effects. Putting all the trials together, it is clear that Remdesivir does not make a convincing case; at most it may have marginal benefits.

So why are people paying such a high price for a drug whose effects are doubtful? A simple behavioural reason is that when there is no good choice, people go for the best among the bad choices. ‘Best’ here is not a scientifically tested best; it is whichever is marketed most aggressively or most tactfully. I was not surprised when I realized what is happening with Remdesivir, because I have seen the same thing happening with type 2 diabetes drugs.

No clinical trial has shown that normalizing blood sugar with any drug can arrest diabetic complications. Trials like UKPDS and ADVANCE showed some marginal benefits of treatment. On the other hand, trials like ACCORD and the NICE-SUGAR trial showed that mortality was higher in the tightly sugar-controlled group than in the moderately controlled group. Further, there were many obvious flaws in the designs of the trials that showed marginal positive effects. For example, UKPDS, said to be the most successful trial so far in claiming positive effects, did not have any placebo control group. In a disease like diabetes, there are two levels of possible placebo effects. A placebo effect means getting better just by the belief or feeling of getting better. One level is the feeling that I am being treated; this can be controlled easily by giving a control group blank pills. The other level is the feeling that “Oh, my sugar is normal now!” Some positive physiological effects are possible because of this feeling alone. To control for this second-level placebo, one needs a group that is not treated with hypoglycemic drugs but is made to believe that their blood sugar is normalized. Such a placebo control has never been kept in any of the clinical trials. So the possibility that the marginal positive effects a trial showed were due to this feeling alone is never eliminated. In short, there is no scientifically sound proof that normalizing sugar has any positive effect on the pathophysiology of diabetic complications. And still, all antidiabetic medicine focuses primarily on reducing blood sugar. All these perfectly useless drugs have an annual turnover of hundreds of billions.

This means that the actual results of clinical trials have no bearing on the practice of medicine. This has been demonstrated by not one but several examples. An entirely different set of principles operates in the drug market, and patients and doctors are equally gullible. There is no doubt that some medicines have been really effective and have sound and robust scientific support. But for diseases where there is currently no truly effective medicine, everyone is being fooled by the best of the bad drugs. If there is no real solution as of now, will you go for something that doesn’t work? Surprisingly, the answer is ‘yes’ for most people. Rather than facing the reality that nothing works, people will try anything, including witchcraft and magical or spiritual healing practices. We can’t blame them, because many mainstream medical practices are no different from witchcraft. They are there not because of efficacy proven in clinical trials; they are there because nothing else works, so why not try this? When there is no real cure, you still have to give something, even something that doesn’t work.

And that actually works!! Not for curing any disease, but for the feeling of having done something, and of course for making money.

How peer reviews are degrading the spirit of science

I have written many times about the problems in peer review systems, but mostly from the author’s point of view. My experience from the other side is also quite rich. I have reviewed quite a few manuscripts for a variety of journals, including some so-called top-ranking ones. This is not something I like to do, but it is a part of a researcher’s life, at least in the prevalent system.

A few months ago I received a manuscript for review. It was an interesting and quite offbeat experiment. I liked the experimental design, and the data and the analysis were all quite clean. But I thought the inferences the authors had drawn were not quite logical. The results were being over-interpreted: the data showed only association, but the authors confidently claimed a causal pathway without any further evidence, and results specific to a given context were stated as if a general law had been discovered. I wrote my comments accordingly, saying that your experiments are ingenious but the interpretations need to be reconsidered. I also hinted that they did not have to agree with my interpretation: if you think your interpretations are correct, you need to make them more convincing. A difference of opinion is not sufficient ground to reject a manuscript, but kindly acknowledge in the paper that multiple interpretations are possible and reason out why you prefer one over the other.

The response of the authors was quite representative of researchers’ behavior. Clearly they did not want to agree with my interpretation, which is fair. Having a different opinion is a natural and desirable part of science; open debates increase clarity and bring forward many subtle aspects that would otherwise remain hidden. But in typical author behavior with reviewers, they did not argue their side. Instead, they pretended to agree with me, which obviously they did not. They did not want to change their original argument, but at the same time did not want to argue with the reviewer. So they rewrote the same inferences in a more roundabout and ambiguous manner. I was irritated; I would have welcomed a clear argument, even one different from mine. Respecting the authors’ right to differ with the reviewer, and the experiments themselves being interesting, I did not recommend rejection. Finally the paper was published. Being in a high-impact journal, it was read widely and, as I expected, came under some heavy criticism.

Attracting criticism is not necessarily a bad sign; it is a part of healthy science. The part that was clearly against the spirit of science is that the authors avoided committing blasphemy against the ‘reviewer God’. I know that this is not a stand-alone incident. This is the modal behavior of authors, and the reason for it is very obvious. I have at times taken up an argument against reviewers, ending in rejection. In a typical response, the reviewers and editors are not in a mood to argue further; you simply get a sweetly worded rejection, and any logical debate is impossible. Most authors are smart enough to avoid that. Some are even ready to change their entire argument to gain acceptance. This is business, and getting published in a high-impact journal is such a huge benefit that you can easily trade the quality of science to reap it.

A number of times we see papers in which the data actually contradict the main claim made in the conclusions. I had the opportunity to ask one of the authors of such a paper, and he told me that the ambiguous-looking conclusions had actually been rewritten after getting the reviewers’ comments. There are two reasons for this trend. One is that a so-called peer review is not really about a ‘peer’ relationship. It is more of a candidate-and-examiner relationship. An even better metaphor is that the editors and reviewers are god-men and the authors are worshipers. This makes practicing science one more religion, in which some are closer to the gods and others can access the gods only through them. The second reason is that the review remains confidential. So even if you make some logical somersaults there, they do not surface. What becomes public is a polished and painted argument covering up all the logical cracks beneath it. The remedy is actually quite simple: make all reviews public, independent of acceptance or rejection. When everything becomes transparent, the arguments will have to become more logical. Differences of opinion will remain, but they will be open for the readers to judge. What can be better than this for the spirit of science? But do we really care for science?