When a team of researchers is employed by an agency such as an institute or a company, and they find something worth publishing or patenting, whose intellectual property is it?
By the norms followed the world over, it is the property of the agency by default. Other types of understanding are of course possible, but they need to be spelled out in the employment contract or the appointment letters. If not, the agency has the right to decide the norms by which credit and any other benefits arising from the intellectual property are shared and distributed among the individuals participating in the research. Generally this works well.
But there is an unusually complex situation in Indian academia. Quite often the fellowships or salaries of students, project assistants, laboratory assistants and other project-related staff are not paid for a long time, ranging from a few months to sometimes two or three years. The causes lie equally in the inefficient and careless handling of funding and the related paperwork by the funding agency and the research organization. Interestingly, the salaries of faculty, administration and other permanent personnel are almost never delayed, even though the funding agencies and the handling machinery are often the same. They are careful about permanently employed people and utterly careless about students and project staff. The reason, of course, lies in nuisance value. Permanent government employees have a history and tradition of strong associations that have fought for their rights and pay scales. Students and short-term project staff are the most unorganized sector of workers; they are therefore helplessly exploited in a number of ways, and the higher-ups in academia do not care to improve the situation.
However, the intellectual property angle of this problem remains unappreciated. Institutes hold the rights to intellectual property because they pay their employees regularly. When that is not happening, the standard IPR norm collapses. Individuals who work without the promised pay can claim the first right to all the intellectual property generated during the period of their work. The institute cannot claim it because it has not paid. Even if there is an employment contract, failure to pay salaries on time is a breach of that contract, and if an agency is responsible for the breach, its right to claim any other benefits from the contract comes into question. The faculty and PIs of the project receive their salaries regularly, so they cannot claim the IPR. Under such circumstances, the unpaid workers have the first right to the IPR. The institute cannot publish or patent the work, or even include it in any of its official reports, without the written permission of the unpaid workers. Research is a complex task, and it is difficult to decide who contributed what. Often the persons actually doing the work know the subtleties of an experiment much better than their guides happily occupying their easy chairs. Therefore the norms of IPR sharing need to be decided in advance. Violating these norms could lead to unprecedented legal complications.
The simplest way to avoid any such complications is to pay all research personnel their dues on time. The reason this does not happen is a lack of motivation to set the system right; there is no other reason the funding system cannot correct itself. But as long as the institute loses nothing by being careless, nobody will try to remove the system's flaws. Therefore it is necessary that there be at least a few cases in which unpaid researchers claim rights over the intellectual property and prevent the institute from using any of the research output.
The reason this has not happened so far is twofold. One is that this interpretation of the IPR norms is not known to anyone, although logically it is quite straightforward. Even more important is that the sufferers do not have the muscle to challenge the system. They are in an insecure position and fear that the higher-ups control all their career prospects. Academia is completely aristocratic in multiple ways, so most sufferers will not dare take any bluntly logical stand. But it is not impossible. I can imagine someone who has to leave when a project ends, with no hope of continuing. A person in such a position risks losing nothing anymore, and such a person can block the entire output of the project he or she was working on and prevent the institute from using it in any way. Institutions and funding agencies need to be aware of this interpretation of IPR law and correct the system in time, without excuses. If that doesn't happen, junior research personnel should come out and act. If not in a court of law, then in the public domain. Let people be aware of the reality. Let the common man start asking Directors, Chairs, Heads and PIs, "How many unpaid researchers work under you?" A whip from the common man can be the ultimate motivator.
I recently had an experience with an open peer review journal that is certainly worth sharing.
As a student of behavior, I have been thinking about the behavior of the different players in the science publishing system and whether we can design a behaviorally sound system that would minimize biases, misconduct and irresponsible behavior in peer review. I gave a talk on this in December 2017 and published a preprint article in 2019. Interestingly, in 2019 itself an open peer review journal called Qeios (pronounced like chaos) was started on very similar (though not quite the same; some crucial components of my system are missing) principles. Qeios started as a preprint repository but also has a peer-reviewed publication system, with no dichotomous editorial decision involved. An AI system searches for and invites reviewers. Reviewers are informed that their reviews will be public. Authors can respond to them and improve their paper if needed, but all these steps and stages are completely transparent.
Any new journal has initial problems, which this journal will also have for quite some time. Because of the impact factor illusion and the sheep mentality of researchers, a new journal is unlikely to get high-quality papers in its initial years, and this seems to be true for Qeios, as expected. When I received a review request, I didn't expect a high-quality paper. What I received was not bad; it had some thought-provoking ideas, but the work was not rigorous enough, being somewhat aimless and not contributing any new and meaningful insights. At the same time it was OK in terms of the model developed and some of the data and discussion.
Although I have been advocating open peer review for many years now, while preparing myself for an open peer review I realized something I had failed to appreciate before. For a conventional journal I would have recommended rejection. I realized that this would have been a very dumb and unproductive end. The paper had certainly triggered some thinking in me. I thought of some new questions, some ideas, some tricky issues for the first time. It was not enough to make a full paper in itself, but it was worth something. If I expressed my doubts and my half-baked ideas, it might stimulate someone else to think. They may have solutions that I couldn't think of, or they might simply add more questions, or even point out that I was wrong. All this is a valuable process, not a product. But so far we have only been publishing products and hiding the process. That is only half of science. Open peer review journals can bring out at least part of the thinking process, and much can be learnt from that. Maybe at a later stage I will return to my own arguments and develop them further. Maybe someone else will do so. If I had recommended rejection, nothing of this sort would have happened. I too would have forgotten the issue in no time. Publishing the thinking process enriched me substantially. Of course, not everyone will do this. A lot of junk will be published. It is true that some people will write only goody-goody reviews and, since the reviews are published, add them to their CVs. This is happening. That may be inevitable garbage, but it is not a sufficient reason to block the valuable open thinking process. Here is the link to what I wrote as a reviewer (https://www.qeios.com/read/S390H3). The author responded to my comments, not with much rigor, but that's OK. Everyone has limitations. If the work is in the open domain, someone else can compensate for these limitations sometime and take the concept to a meaningful level.
I hope people understand and realize the strengths of open peer review and adopt the practice increasingly. The starting problem of quality will vanish eventually.
If simulating modal human thinking is the goal of artificial intelligence, then I would certify that ChatGPT is highly successful. It is as stupid as humans, except that its stupidity comes much faster.
Having conducted interviews, oral examinations, and PhD or master's thesis defenses over a long time, I have often been struck by a class of students who talk a lot and give a lot of information without even touching the answer to the pointed question asked. Sometimes, and not rarely, they get away with it, making a good impression. My interaction with ChatGPT reminded me of such students.
I started exploring ChatGPT after having postponed it for a long time. I thought AI would be better than humans in areas in which computers are obviously more efficient: compiling large data, using quantitative methods, verifying an opinion against published evidence, and so on. But it doesn't do anything like this; it doesn't even claim to. It only repeats what has been said most often and by the elite, irrespective of whether it is true or not, logical or not, self-contradictory or not.
I was particularly keen to see what it does when there is a contradiction between the prevailing belief in a field of science and the actual data, so for obvious reasons I asked questions about diabetes. It started answering with the typical mainstream belief. When I pointed out that there was evidence going against it, it admitted that there is, but then reiterated the beliefs again. Then I pointed out that its statements were mutually contradictory, on which it admitted that there are contradictions and gave the excuse that the system is complex. When I gave specific evidence showing that its belief was wrong, it said yes, there is evidence to the contrary, but again went back to the belief. When I asked what the evidence for this belief is, it said there is plenty of evidence from decades of research, without giving any specific experiments or data as evidence. You can access the entire communication here (https://drive.google.com/file/d/1IhiFoscOWwD_cxhGguO0ssd7tkwmyiiW/view?usp=sharing).
This is precisely what people in this field have been doing. Everyone knows that multiple lines of evidence directly contradict the theory, but they are not willing to give it up. They continue to live with circular logic, falsified hypotheses, internal contradictions and complete clinical failure, yet they still carry on, cherry-picking convenient findings and claiming that they are doing science. It looks like AI is also an expert in this kind of self-deception. That is fine if your definition of intelligence itself is mimicking human thinking. But there is one important component missing. Human behavior has huge individual variability. While for modal personalities conformity is more important than logical soundness, there is the rare individual who craves sound logic, someone who looks at data more than opinions. Science progresses through such outliers, not through modal human thinking. If AI mimics modal human intelligence and does not incorporate the outliers, it will prove retrogressive rather than progressive. People have already started using ChatGPT in research. There is talk of AI doing peer reviews and so on. Even if such use of AI is officially banned, people are going to use it, I have no doubt. This is the greatest risk. Using this kind of AI in science will only increase the conformity bias and make publishing disruptive thinking, surprising results, and path-breaking, paradigm-shifting research increasingly difficult.
One more step in my commitment to transparency in peer reviews
The dictionary meaning of “peer” is “a person who is of the same age or position in society as you.”
This meaning of peer is expected in peer review. Peer reviewers are not like "examiners" of a student candidate; they are supposed to be at the same level of scientific standing as the authors. Therefore the norms and responsibilities of science writing expected of authors should apply to peer reviewers too. For example, the author of a scientific paper is expected to support every important statement either by citing references from prior studies or by his/her own logic and evidence. The same rule should apply to a peer reviewer. But all too often peer reviewers make sweeping statements without even a minimal attempt to support them. More alarming is the fact that editors have no hesitation in accepting this, simply because it comes from a reviewer and not from the author. With such double standards, this cannot be called "peer review" by definition. The minimum standards applicable to authors should be applicable to reviewers as well, and editors do not have the right to treat authors and reviewers differently.
But reviewers make irresponsible comments quite often and get away with it because peer reviews remain confidential. Editors cannot reject a review because reviewers have become a rare commodity. Hardly anyone is willing to "waste" their time reviewing others' work, for which they get no credit. If editors insisted on higher standards from reviewers, getting manuscripts reviewed would become orders of magnitude more difficult. Therefore they are compelled to compromise on the quality of reviews. As a result, review quality is degrading rapidly, and nobody cares because the reviews remain confidential.
I am copying below the correspondence I recently had with the journal "Evolution". Let me state very clearly that Evolution is a very respectable journal and I have no intention of singling it out for criticism. I completely respect the editors, some of whom are good friends of mine. But all journals suffer from the same ailment; I just happen to have a fresh example from this journal, and therefore I am making the correspondence public.
I am sure the editors won't like this. I may turn my friends into not-so-much friends. It is only natural if this act affects my chances of getting papers accepted henceforth. I also consider the possibility that the journal takes some kind of official action against me. But that's OK. I am doing this as part of my commitment to the transparency of peer reviews. If only a handful of authors start making reviews public on their own, the review process will become more responsible in no time. But any researcher who has to worry about his/her career will not be able to do this. I can afford to, because I don't have any career. So I can keep the broader interest of science above career concerns.
Earlier, with the same journal, I had received a review request and written back that I was ready to review if the journal agreed to make the review transparent. They did not agree, and therefore I declined to review. My commitment is therefore independent of my position; I am not raising this issue merely because my paper was rejected.
I am pasting below the entire correspondence from both incidents with the journal Evolution, for readers to evaluate and interpret for themselves. Improving the quality of peer review is far too important to sweep under the rug.
13-Feb-2023 Dear Dr. Watve:
Thank you for submitting your manuscript, “Evolution of new variants of SARS-COV-2 during the pandemic: mutation limited or selection limited?” (22-0518) to Evolution. It has been evaluated by Associate Editor Dr. Maria E. Orive and two reviewers, whose comments are appended below. Unfortunately, these evaluations, as well as my own appraisal, indicate that your manuscript is not suitable for publication in Evolution.
I am sorry that the review process took so long. Although I realize you will be disappointed by this decision, I hope that you find the feedback useful for considering submission elsewhere or for planned future directions with the work.
We appreciate your interest in publishing in Evolution and hope that this decision will not discourage you from future submissions.
Associate Editor Comments to the Author: Reviewer 1 points out some very important issues with the approach taken by the analyses in this manuscript; the information known about the spread of variants of SARS-CoV-2 show that conditions given by the approach distinguishing between the hypotheses of selection versus mutation as the limiting factor for spread are not correct for such novel infectious diseases. As such, the approach taken appears to be fundamentally flawed.
Reviewer(s)’ Comments to Author: Reviewer: 1
Comments to the Author I do not think this paper is suitable for publication. Unless I have misunderstood something it puts forward a rather naïve approach for distinguishing between selection versus mutation being the limiting factor in the spread of variants of SARS-CoV-2, and it then uses this approach to infer from data that selection has been the primary limiting factor. This, despite the fact that we know this is probably not the case for all major variants that have spread, up to and including Omicron. Thus, to my reading the ms uses an approach that almost certainly can’t work for the data at hand to conclude something from the data that we know is almost certainly not true.
The problem with the approach is at least two-fold. First, the main focus of the approach assumes that immune-mediated selection is the primary selective factor but we know for novel host-pathogen associations like SARS-CoV-2 and humans, that the most important selective factor initially will be differences between humans and the ancestral host. When one incorporates that into a model one sees that the first several variants that spread will tend to be driven by strong selection for adaption to the new host, irrespective of immunity. So the conditions given for distinguishing between the hypotheses are not correct for such novel infectious diseases. And we know this is important in SARS-CoV-2 because Alpha and Delta (and probably Omicron) were all selectively advantageous regardless of the immune status of people in the populations. Second, the goal also seems to be to use the appearance of waves of infection in COVID as a potential signal of the underlying evolution but we know that this was not the case for many (perhaps most) waves for the first year or two. Instead, waves were almost entirely driven by changes in behavior mediated by public health measures and seasonality.
The problem with the conclusion of the ms is that we know for Alpha, Delta, and Omicron that selection was not the limiting factor. As mentioned above these variants were unconditionally advantageous, regardless of immune status. Further, the spread of these variants in a location was entirely migration limited. Each of them appeared in different geographic locations (England, India, South Africa) and then spread through migration to other countries. It was only once these variants arrived through migration that they increased locally, and they did so somewhat independently is different countries despite the countries having very different immunological histories. On top of that, each of these variants had a very unusual constellation of a large number of mutations, further arguing that mutational appearance was the most important limiting factor.
Incidentally, if one wanted to determine the role of selection versus mutation in the spread of variants then why not look at the phylogeographic data? If the process is mutation limited then won’t Delta variants across the globe, for example, have a common Delta ancestor? If instead selection is the limiting factor then, for example, Delta should spread at different times in different populations depending on their immune history, and they should not share a common Delta ancestor.
P2 – migration should also be mentioned and included as an important evolutionary factor, particularly for SARS-CoV-2.
P3 – I think the characterization of models that incorporate immunity is unfair here. There is a very large literature on this. Furthermore, immunity can be binary and still display gradual loss of immunity at the population level (which is probably what matters here).
I also think that, throughout the ms, there is a tendency to conflate the issue of the spread of a variant with the occurrence of a wave of infection. These are distinctly different things. For example, in many locations, when Alpha was initially increasing in frequency, the overall number of infections was decreasing (which is why many authorities were hesitant to impose lockdowns initially).
Reviewer: 2

Comments to the Author All the arguments made here are sound but they are not in the least novel. The authors have ignored 30 years of work in strain dynamics to start from scratch in understanding a very basic feature of the emergence of novel variants within a standard epidemiological framework which applies to SARS-CoV-2. They conclude, correctly, that the new waves are not likely to be “driven” by the emergence of variants – but, I’m afraid to say, that is a result that has been in the literature for at least 30 years.
14th Feb 2023
Dear Dr. Miriam,
Thanks for your letter of rejection and comments by two referees. Rejection is an inevitable part of the publication game and we take it in a positive spirit. Simultaneously we have some queries for which we would like to seek answers and clarity about the editorial policy of Evolution.
1. What is the meaning of “peer” by your norms? By dictionary meaning, a peer is at the same level of social or organizational status. Therefore the sets of norms for authors and peers should be comparable. If authors are expected to support every statement by citing appropriate references or with new evidence, aren’t peers expected to do the same? Both reviewers of our paper make many sweeping statements, neither citing relevant literature nor giving any evidence. If authors’ manuscripts can be rejected because of inadequate evidence, do the editors have a policy of rejecting reviewers’ comments based on the same set of norms? How frequently in the last few years have the editors of your journal rejected reviews that do not comply with the same norms as authors?
2. Reviewer 1 says multiple times that “We know that this is not the case…..”, “We know that ….selection was not the limiting factor” and so on, but does not indicate the data on the basis of which we “know”. In the absence of any supporting rigorous study being cited, this can at best be taken as the reviewer’s belief. We searched independently but did not find any rigorous study supporting the reviewer’s multiple beliefs. If it is a case of belief versus evidence, does your journal go by belief or by evidence? Let us clearly know the journal’s norms.
3. The two reviewers starkly contradict each other. Reviewer 1 says that our arguments are not sound, and reviewer 2 says that they are sound but not new, that is, already well established. This clearly shows the irreproducibility of peer reviews. Does the journal think that, in order to increase the reproducibility of science, there first needs to be a minimum level of reproducibility in peer reviews?
Kindly understand that we are not challenging the rejection or appealing a reconsideration. We accept the discretion of the editor. We are only seeking some clarity about the journal’s general norms on which manuscripts are accepted or rejected. In the broader interest of science it is necessary to ensure that the norms follow the minimum necessary principles of science.
One more earnest request: We believe in the transparency of peer reviews. Since the manuscript is on a preprint archive, we would also like the peer review reports and our response to them posted there respectfully. We would like your consent to post the peer reviews on the preprint server or in any other appropriate public domain. It would also be in the interest of science for this correspondence to be made public appropriately and respectfully. Transparency is the first requirement of science, and I believe you will not disagree. So kindly give us written consent to make the entire editorial process for this manuscript public.
Thanking you and awaiting your response eagerly.
Tue, Feb 14, 9:03 PM
to Tracey, me
I am forwarding your message to the Editor-in-Chief of Evolution.
Tracey Chapman (BIO – Staff)
Feb 23, 2023, 5:24 PM
to email@example.com, me, Miriam
Decision on Manuscript ID 22-0518
Thank you for writing with your concerns and queries, which we have considered carefully. I also took a fresh look at your MS, the reviews and decision-making process. You raise three inter-related points: (i) assessment norms for authors vs reviewers; (ii) third party support for reviewer assertions; (iii) contrasting reviewer reports.
The reviewers of your MS gave different perspectives and contrasting reasons and assertions in their assessments. This is not unusual, and editors are trained to integrate sometimes disparate views, which, as in this case, may have varying levels of depth. They pick out the substantive, evidenced concerns, and down-weight others in order to come to a recommendation and then decision. Editors don’t ‘vote count’ but integrate the information according to professional standards https://academic.oup.com/pages/authoring/journals/preparing_your_manuscript/ethics?login=true#peer. We always aim to make it clear which are the key elements that feed into the final decision (though as reviews are supplied verbatim, that may not always be so apparent from an author’s perspective).
You are not challenging the decision on your MS, so I don’t comment in depth on any specifics. However, my assessment is that some substantive and legitimate concerns were raised by the first reviewer. For example, they described in some detail their concern, supported by observations, that the approach used could not distinguish between selection vs mutation. The expert AE comments specifically on this main point of concern to justify their decision and I agree with this assessment. The second review, written in somewhat stark terms I agree, was not central to the final decision (had the outcome been different, we would have asked you to counter it). Overall, I find no concerns with the quality of decision-making rendered on your MS.
You also ask that we consent to make the entire editorial process for this manuscript public. This journal uses double-anonymised peer review, rather than a fully open peer review process, to minimise the influence of well-known unconscious biases on decision making (e.g. Ware et al. Info Serv & Use 28 (2008) 109–112). As part of this process, reviewers are informed that all communications regarding the manuscript are privileged, so sharing their reviews without their permission would raise potential ethics concerns. Therefore, we ask that you respect this confidentiality if you can.
Thank you again for raising your concerns, which we do appreciate,
Prof Tracey Chapman | School of Biological Sciences
Editor in Chief, Evolution
Feb 24, 2023, 9:30 AM
to Tracey, Miriam, firstname.lastname@example.org
Thanks for your valuable response.
Since, as you agree and as so many studies unanimously demonstrate, the peer review process is inherently and seriously biased, any debate regarding the fundamentals of the process is most welcome and in fact badly needed.
I had asked three questions and you appear to have not replied to two of them.
I asked whether there are different standards for authors and reviewers by your journal norms. For example, authors are expected to support every statement they make, but reviewers are allowed to make sweeping statements without support. The answer to this question could only be yes or no; no other wording would be an appropriate answer. I also asked, in a situation of belief versus evidence, what your journal norms go by. I did not find an answer to this either.
Whether rejection to our paper is justified or not was not the question at all, and you have answered the unasked question quite elaborately.
I thank you for answering the third question: if the two referees contradict each other, the editors use their own discretion. In this you have also clarified what I asked: editors do reject some of the reviews. The natural follow-up question is whether your rejection is conveyed to the reviewers, as it is to authors. If not, again, do your journal norms permit different standards for authors and reviewers? It's OK if the answer is "yes". Clarity and transparency in the norms is all I request.
I beg to differ on the last point. Double-blind review is psychologically impossible. The moment reviewers see a manuscript, their minds immediately start guessing who the authors could be. This is human nature and cannot be suppressed. I have worked as a reviewer multiple times and have asked many other reviewers as well; in small fields it is frequently possible to guess. Further, the preprint practice directly contradicts and nullifies the attempt to conceal the identity of authors. Since preprints are a well-accepted practice, double-blind peer review remains only a pretense. Transparent peer review is the only option that will improve the review process.
Since I am a strong promoter and supporter of transparent peer review, I am afraid I will make every attempt to make this correspondence public. I am a small man and nobody will notice it. But I have to remain honest to my principles. The journal is welcome to take any legal action against me if necessary.
I will also continue to submit manuscripts to your journal, and if you decide to reject them on the grounds that I do not agree with the confidentiality norms, kindly reject them clearly stating that reason.
My thousand apologies for all the trouble, but this is a necessary trouble in the broader interest of science.
The darkness on the path to truth is my homeground. If the dazzling lights in the rest of the world are under your command, why should I care!!
Earlier correspondence related to my declining to review for lack of transparency
Thu, Dec 8, 2022, 11:10 AM
Dear Dr. Watve:
Thank you for replying to my invitation to review for Evolution.
It is unfortunate that you are unable to review this manuscript at this time. I will keep you in mind when future manuscripts come in that fall under your area of expertise.
If you have a suggestion for an alternate reviewer and did not provide this on the site, please email the editorial office at email@example.com with your suggestion. Please also include the manuscript number EVO-22-0577.
Dr. N. G. Prasad Associate Editor, Evolution
I wrote the following to the managing editor address.
I received a request to review a manuscript from your journal.
I am declining for the following reasons.
After being grossly disappointed with mainstream science, I have been working with undergraduate students, farmers, tribes and other people, doing real and truly enjoyable science. While doing so, I feel the hypocrisy of mainstream science even more. After thinking it over for a few years, I have reached certain decisions.
I have been fighting for transparent peer review for quite a few years now. As an author, I am at the receiving end and can do little about the system. But whenever I am requested to be on the other side of the table, I will now demand a change in the system. I will take on reviewing or guest editorial jobs only if the journal is ready to make the entire process transparent. What I mean is that all reviews, comments and decisions are made public by appending them to the preprint on a public-access server, independent of acceptance or rejection. With the opaque and therefore dubious peer review system, I think I should not waste my time, because as it stands, peer reviews do no good to science.
I request your journal to rethink its peer review system. If that happens, I will give my 100% to it. If not, kindly do not request my input on any manuscript hereafter.
After quite a few years I attended a conference, ISEB4, the 4th meeting of the Indian Society for Evolutionary Biology, in Ahmedabad last week. It was a refreshing experience, particularly watching the younger generation of evolutionary biologists coming up. Also promising were the new centers of evolutionary biology and ecology coming up at different universities and institutions in India.
But simultaneously I could hear a subtle alarm bell. Things are moving ahead, no doubt, but are they getting too stereotyped?
Yes, unfortunately they are. The career paths of science, the definitions of success, the paucity of positions, the necessity and the ways of building an “impressive” CV, and the modalities of getting funded are all getting increasingly narrower. Bottlenecks are getting too tight, and that, in just a few years down the line, will be largely counterproductive. We see that most youngsters stick to the line of work they followed in their PhD, which was itself a continuation of their parent lab's already established line of work. They take a small variation of the same line as their plan of work. Following a narrow field of work for several academic generations is not detrimental to science, but doing only this and not looking beyond certainly is. Nobody ventures into something new, because for recruitment or for funding they are asked to present a research plan and are then asked what experience they have in it and how they know it would succeed. So sticking to the old is safer, and that works; novel ideas fail at every stage. This peril is brought about by stereotyped and ritualistic career paths. The antidote is that career paths need to diversify in more than one way.
I use the word diversify in multiple dimensions. Diversifying the field of investigation, the model organism or model system, the use of novel and diverse tools as the problem demands, addressing novel and entirely virgin questions, diversifying the system and including more kinds of people in science: all this will only enrich science. I don't think anybody will disagree with this. But what prevents diversification is the ritualistic career path. The system and the criteria for evaluating a researcher for recruitment, promotions and funding have become rigid, ritualistic and often nonsensical. The minimum requirement for a faculty position is a PhD, three or more years of post-doc, and an impressive number of publications, at least a few of them in flagship journals. Do these qualifications guarantee a good candidate? The answer is no. A good CV might just be the result of luck in getting into a good lab. Does not having such a CV indicate lack of talent? Again the answer is no. But institutions look at on-paper qualifications more than talent, capacity and dedication.
My doubts about this ritualistic requirement are quite fundamental. In the field of literature, not everybody needs to write a novel. Short stories and even three-line haikus bring equal respect if they are of appealing quality. But in science, only a thesis gets you a PhD, and without a PhD a career in research is impossible. A two-page theorem can potentially revolutionize a field of science, but it cannot make a thesis and therefore cannot make a career under the current stereotype. Since people only look at the number of papers and impact factors, reading one's research with interest has almost lost its relevance. What kind of science is published in big journals, and who can publish in them, is also more a tactical issue than a quality issue. Getting a position and getting funded is also largely tactical. Only a stereotyped format is expected in a funding proposal. I can see young researchers thinking too much about these tactics and ignoring their own natural questions, attractive curious findings, urge to explore and zeal to find something really novel. High-risk disruptive ideas are most unlikely to get funded. As a result novelty, exploration, disruptive elements, rigor, insights and paradigm shifts have largely taken a back seat. Research is being sold by the gallon rather than by taste and flavor. The perils of this ritualized science are increasingly coming to light in the form of scientific misconduct, the reproducibility crisis, peer review biases, and stress, anxiety and suicide rates among students.
Changing the mainstream is almost impossible, but that's not what I am suggesting. My solution is to support alternative models of doing science. Why should a PhD be the only path into research? Why can't housewives do meaningful and fruitful research? Why can't farmers solve their own problems by participating in research? Why can't universities join hands with citizen forums to address important questions? Throughout my life I have experienced these alternative routes, experimented with them and demonstrated that they work. The alternative models of doing research are like collaterals in heart disease, which keep the heart pumping even if a main vessel is blocked.
While clear pathological signs are accumulating in mainstream science, we need to focus on the development of collaterals. Who can do this? I believe the new, upcoming academic models such as private universities, autonomous colleges, liberal arts curricula and citizen science forums can, and they need to be strengthened. If these models take pride in mimicking the mainstream, they will simply join the mass sugar-coated collapse of academia. Their real strength lies in doing what the mainstream is NOT doing: asking questions the mainstream is not asking, tapping new possibilities and trying out crazy ideas, being open to failures and traversing new paths. I am searching for an element of such vision in these alternative models. I do see some vision in some of them and hope it develops into a fruitful diversity of models of practicing science.
The diversity of research models will fill the big voids that mainstream science is currently unable to cover. Most mainstream scientists are too specialized and narrow-minded, and fail to visualize bigger, comprehensive pictures. Most have been generating huge amounts of data that have not been interpreted collectively and comprehensively. The mainstream typically fails to look at alternative possibilities and interpretations, which are more likely to come from non-career-minded thinkers. Owing to the rat race for high-impact publishing, the mainstream rats seldom try to test the reproducibility of results. Spending time testing reproducibility is a waste of time for them, because it is unlikely to yield a high-impact publication. Since funding is highly biased towards a few buzzwords, the mainstream does not go beyond trend-following. The alternative models can do such things, which are extremely valuable for science and society. If we fail to develop these collaterals, the mainstream will continue its downfall until common people lose trust in science. Technologies and even basic science will be completely monopolized by a handful of agencies and companies (this is already happening in science publishing) and people will be completely alienated from science.
If I had continued at IISER, I would have retired by this month's end. I decided to quit academia exactly four years ago; after taking some time to wind up, my last day in a formal academic institute was 31st March 2019.
When I decided and declared my plan to quit academia, there was a wide variety of responses. I know that many would have felt happy but did not say so, at least to me. The responses that reached me were sad, surprised, shocked, disagreeing, persuading me to change my mind, wishing me a better future and so on. An old friend called and told me that I was committing suicide, that I would have no future in science by quitting academics. Perhaps he, like many others, thought that science can survive only in academia. “Leaving academia is like killing oneself and one's science,” he said. I could only reply, “Let us experiment and see.” So my suicide was experimental. Now, after almost 4 years of the continued experiment, I can ask myself: did I really commit suicide? Was it an end of my creative, active, productive years in science, measured by any standard?
Let us begin with the stereotyped conventional measures, in spite of my repeatedly expressed view that these are useless and often counterproductive: the number of publications, impact factors and so on. In less than four years of non-academic, non-career-focused pursuit of science, I (and my loosely knit and highly dynamic team) published 17 papers; 5 more are in preprints and under review, revision or resubmission. Seventeen more are in various stages of development, of which 11 have been partially penned down (a list of all 34 appears at the end). Quite often new ideas come from interactions with students or others and jump up to high priority; the 17 currently in line might be bypassed by more novel and attractive concepts. My average output of papers in the last 4 years is slightly greater than my own average when in service. Recently I came across this article in Science (https://www.science.org/doi/10.1126/sciadv.abq7056) that gives the average output of a faculty member in the US. My average output as a citizen outside mainstream academia is 2-3-fold greater than that. For those who believe in impact factors, I published in fairly good-impact journals including Behavioral and Brain Sciences, Conservation Biology, Evolution, Scientific Reports-Nature and PLOS One. The only limiting factor was author charges, which put a limit on how many papers I could publish in such journals. In spite of this limit, going by conventional standards, I did not perform below the faculty average anywhere in the world.
But not everything is captured by the conventional standards. What I value more is the range of subjects I could explore, some of which was possible ONLY because I did NOT work in an institute and did not identify myself with mainstream academics. The mainstream has three major hurdles or limitations. One is the heavy peer pressure, owing to which it is too tough to think out of the box and consider possibilities other than the current trends in the mainstream. I feel the difference because I have experienced working both within and outside the system. In my experience, being free from peer pressure opens up the mind far and wide and encourages alternative ideas and visions. Even more important, I can afford to be more honest now. Within the system it is extremely important to worry about political correctness and not to hurt your potential funders, editors and reviewers. I now feel free to expose and criticize whatever I see as wrong, unfair or unethical. An academic aspiring to a successful career needs to inculcate cowardice; success is not possible without it.
The second limitation is the bureaucratic framework into which you need to fit all your research. The bureaucratic procedures are made for a certain type of mainly lab-oriented research; other areas of research have different needs. The institutional structure is intended to support science and therefore should have sufficient flexibility to support the different needs of different types of research. But in reality the institutions say, “This is our framework and it shall not bend. You bend your science to fit in, otherwise fXXX XXX.” (That's what I actually did when faced with such a situation.) Having no funding is often better than having funding already allotted to something. Research can take unexpected paths at any time, and when it does you need to change everything. Pre-budgeted funding doesn't allow you to do so.
The third barrier is formalism; only things done a certain way are called science. The same thing done or written in a slightly different format is not considered science. This package of limitations comes along with institutional support, so the support itself becomes the prison. I clearly experienced that being out of the prison improved my thinking, opened up the skies, eliminated rituals and reduced the conflicts of interest, biases and prejudices typical of mainstream science.
Of course, having no money in hand imposes a different set of limitations, so for obvious reasons the nature of my research changed. I have no lab now. But the fields and forests out there and human society are two open labs of huge dimensions. You don't need any committee pre-approvals, pre-registration or other formality to start working on anything. Developing concepts, making simple models, addressing novel questions and exploring new ideas does not need money. In addition, in today's science huge amounts of data are lying in the public domain, largely unused and uninterpreted, or more often misinterpreted. Most institutional scientists are data generators, not insight generators. Generating useful insights from already available data also does not need any funding. Anyone outside academia can certainly do more insightful science using data generated by the academic labor contractors.
There is a neo-brahminical notion in the mainstream that only what is published in peer-reviewed journals is science. This notion persists despite many studies and experiments showing the flaws, biases, power play and academic racism in the peer review system. The science under the open sky is free of such flaws. I continued to publish in peer-reviewed journals at times, because short-sighted academics won't understand the science outside. But now I also write original research findings directly for laypeople. In the last 4 years I wrote three science books in Marathi, and 6 more book concepts are developing. These books are not merely science simplified into a local language; they contain novel research outputs and novel concepts being discussed for the first time anywhere. I have lost count of the articles I wrote in newspapers and magazines.
Why publish in Marathi? For the simple reason that the kind of science I do can be better expressed in people's language. A language comes with its own set of ideas, imaginations, fantasies and metaphors, not necessarily shared by other languages. Fantasies are important for science, as exemplified by August Kekulé's story about the origin of the concept of cyclic compounds. I have myself very carefully used concepts from Indian mythology in churning out novel interpretations of experimental data. A diversity of cultures, languages and mythologies will further enrich the input of ideas into science. So it makes sense to do original science in different languages. I am using Marathi more frequently now. It may eventually get translated into English if and when anyone wants it in that language.
Whom did I work with? One person can hardly do anything; teamwork is essential. Obviously I have no PhD students registering with me anymore, and I no longer teach any undergrad courses. But somehow a few undergraduate students, as well as PhD students registered with someone else, kept coming to me to enjoy working together. The advantage now is that they come from multiple colleges and institutions. My former co-workers continued to be with me, independent of their jobs. I also worked with social workers, NGOs, farmers and tribesmen having no formal training in science, some being illiterate or marginally literate, and they have been my co-authors on papers published in prestigious journals too. Being illiterate should not prevent one from being a productive researcher, and I could demonstrate this happening in reality. This would have been impossible in an institutional framework that, while recruiting research personnel, relies on paper qualifications rather than the inner capabilities and drives of a candidate. So I presume there will be no dearth of co-workers in pursuing science outside academia. In fact you get better personnel outside, because there is no formal qualification requirement.
I am not writing this as a self-appraisal. A self-appraisal was necessary as part of the academic routine in IISER, which I hated like most others; so why should I do one now, when there is no compulsion? Nor am I writing this to say how great I am. I am not; I only followed a different path. It is only meant to demonstrate that good-quality science can be done outside mainstream academia. I urge science personnel more talented than me to try out this model of doing science. I am sure there are researchers more capable than myself, and they can make this model even more productive. The monopoly of academia needs to be broken. I don't mean that we wind up all universities and institutions, but we need to take active research beyond the bounds of academia, and we should see more common people contributing to important and novel areas of science that academia just cannot reach.
In academia, we see a trend in the reverse direction. Scientific misconduct, the reproducibility crisis, race, gender and other discrimination, peer review biases, embarrassingly growing retractions, and stress, anxiety and suicide among students are increasingly coming to light. No sound, effective and durable solutions to these problems are coming forward. This is mainly because of badly designed academic systems. Systems thinking is rare in the field, although there is intensive research in systems thinking focused on fields other than academia; these researchers do not seem to be doing anything to mend their own house. Changing the academic systems seems too improbable in the near future, mainly because people in academics have never experienced alternative systems, so they remain short-sighted. I see one effective way out. I have experimented with it and demonstrated that it works. Liberating science from academia is my solution. This is badly needed to ensure a future of more inclusive, more open-minded and less prejudiced healthy science.
Papers published after quitting formal academia:
I can say that in 7 out of the 17, my earlier time at IISER made some contribution.
Shinde, S., Patwardhan, A. & Watve, M. (2022) The ratio versus difference optimization and its implications for optimality theory. Evolution, 76, 2272-2280. https://doi.org/10.1111/evo.14605
Ojha, A., Watve, M. (2022). The predictive value of Kuhn’s anomaly and crisis: the case of type 2 diabetes. Academia Letters, Article 5494. https://doi.org/10.20935/AL5494.
Watve M, Watve M. (2022) Tradition–invention dichotomy and optimization in the field of science. Behavioral and Brain Sciences Nov 10;45:e272. doi: 10.1017/S0140525X22001236.
Kharate, R.; Watve, M. (2022) Covid 19: Did Preventive Restrictions Work? Curr Sci, 122, 1081-85.
Vidwans Harshada, Kharate Rohini and Watve Milind (2021) Probability ratio or difference: How do people perceive risk? Resonance, 26, 1559-1565.
Ulfat Baig et al (2021) Phylogenetic diversity and activity screening of cultivable actinobacteria isolated from marine sponges and associated environments from the western coast of India. Access Microbiology, 2021;3:000242. DOI 10.1099/acmi.0.000242.
Shinde S, Ranade P, Watve M. (2021). Evaluating alternative hypotheses to explain the downward trend in the indices of the COVID-19 pandemic death rate. PeerJ 9:e11150 DOI 10.7717/peerj.11150.
Poorva Joshi, Neelesh Dahanukar, Shankar Bharade, Vijay Dethe, Smita Dethe, Neha Bhandare and Milind Watve. (2021) Combining payment for crop damages and reward for productivity to address wildlife conflict. Conservation Biology, 2021, 1- https://doi.org/10.1111/cobi.13746. (This paper became the Editor’s pick for Conserv. Biol. Dec 2021 issue). This work also received an award from Society for Conservation Biology.
Patil P, Lalwani P, Vidwans H, Kulkarni S, Bais D, Diwekar-Joshi M, et al. (2021) A multidimensional functional fitness score has a stronger association with type 2 diabetes than obesity parameters in cross sectional data. PLoS ONE 16(2): e0245093. https://doi.org/10.1371/ journal.pone.0245093
Akanksha Ojha, Harshada Vidwans, Milind Watve. Does sugar control arrest complications in type 2 diabetes? Examining rigor in statistical and causal inference in clinical trials. Preprint: https://doi.org/10.1101/2022.08.02.22278347
Our manuscript was rejected by BMJ Evidence-Based Medicine. The paper examined data from 6 large clinical trials, evaluating the trial design, statistical rigor and inferential logic used to reach their conclusions. We also re-examined the methodology used in published meta-analyses of these trials. The result was astonishing: taken together, there was no evidence that regulating glucose reduces the incidence of diabetic complications. This is important because glucose regulation is the main, perhaps the only, focus of diabetes treatment at present. Our analysis showed that the entire line of treatment, which currently has something like a trillion-dollar turnover, is without a sound evidence base. The preprint on medRxiv is here: https://www.medrxiv.org/content/10.1101/2022.08.02.22278347v1.
The journal rejected the manuscript within about 6 hours of submission, saying that it did not achieve “a high priority score”. They say the rejection is not based on the quality of the paper, only on a judgment of priority. This has interesting and far-reaching implications for the field of type 2 diabetes on the one hand and for the science publishing system on the other.
What our analysis mainly found was the following:
All the papers have multiple serious statistical flaws. The main one is that when a large number of statistical comparisons are made, some are bound to come out significant by chance alone. To account for this, a correction such as the Bonferroni correction needs to be made; none of the trials does this. If we apply this correction to their data, nothing remains statistically significant. The Bonferroni correction is admittedly conservative, so we also suggested an alternative based on simulations, but even with this approach none of the claimed benefits of treatment turns out to be really statistically significant.
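The multiple-comparisons problem is easy to demonstrate for oneself. The sketch below is my illustration, not the paper's actual analysis; the figure of 40 comparisons is an arbitrary assumption standing in for the many endpoints and subgroups a trial typically reports. It simulates p-values under the null hypothesis (no real effect anywhere) and counts how many cross the 0.05 threshold with and without the Bonferroni correction:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

ALPHA = 0.05
N_COMPARISONS = 40  # assumed number of endpoints/subgroups tested

# Under the null hypothesis, p-values are uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(N_COMPARISONS)]

# Naive testing: each comparison judged against alpha on its own.
naive_hits = sum(p < ALPHA for p in p_values)

# Bonferroni correction: judge each comparison against alpha / N,
# which keeps the family-wise error rate at about alpha.
bonferroni_hits = sum(p < ALPHA / N_COMPARISONS for p in p_values)

print(f"'Significant' by chance, uncorrected: {naive_hits} of {N_COMPARISONS}")
print(f"Significant after Bonferroni:        {bonferroni_hits} of {N_COMPARISONS}")
```

With 40 comparisons under the null, roughly two spurious "significant" results are expected without correction and essentially none after it, which is exactly why uncorrected trial endpoints can look better than they are.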
The clinical trial that is believed to have shown the benefits of sugar regulation, the UKPDS, does not have a placebo control. Other trials that have a placebo or blinding of some kind do not show as many benefits as the UKPDS. Therefore the assumed positive effects of glucose regulation are likely to be placebo alone. Then there is a second level of placebo in trials with surrogate end points such as glucose: the feeling that one's glucose is under better control is likely to exert a placebo effect at a different level, and none of the trials has appropriate controls for this.
Even if we assume that the marginal benefits are real, it cannot be inferred that they were due to sugar normalization. Insulin has many other functions in the body, and some of the anti-diabetic drugs also have other sites of action independent of glucose. So the inference that these marginal benefits are due to sugar normalization has no support.
The magnitude of the difference is so small that it is clinically meaningless. Even if we take only the favorable results and assume them to be true, 10 patients would have to be treated for 25 years each in order to prevent one diabetic complication in one person.
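The arithmetic behind a "treat 10 patients to prevent one complication" figure is the standard number-needed-to-treat calculation: NNT is the reciprocal of the absolute risk reduction. A minimal sketch follows; the risk figures in it are illustrative assumptions of mine, not the trial data:

```python
def number_needed_to_treat(control_risk: float, treated_risk: float) -> float:
    """NNT = 1 / ARR, where ARR is the absolute risk reduction."""
    arr = control_risk - treated_risk
    if arr <= 0:
        raise ValueError("treatment shows no absolute benefit")
    return 1.0 / arr

# Illustrative (assumed) 25-year complication risks: 20% without
# treatment vs 10% with treatment gives ARR = 0.10, i.e. 10 patients
# treated for the full period to prevent one complication.
nnt = number_needed_to_treat(control_risk=0.20, treated_risk=0.10)
print(f"Number needed to treat: {nnt:.0f}")
```

The smaller the absolute risk reduction, the larger the NNT, which is why a statistically detectable but tiny difference can still be clinically meaningless.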
All this is crystal clear from the data, and it is high time we gave up glucose normalization as the main focus of diabetes treatment. But beliefs appear to matter more than data in medicine. Since the analysis showed something against their belief, how could they publish it, whatever the quality of the data, analysis and arguments!!
Now, since I have no formal career in science, the rejection will not affect me. On the contrary, I am delighted to have one more sample for understanding how the secretive editorial machinery works. The rejection was so fast that the chance that anything in the manuscript was read seriously is simply out of the question. The only important thing to read is where the paper comes from; if the authors and their affiliation are obscure, there is no need to read anything further.
But what is more interesting is the reason given for the rejection. By saying this was not a high-priority issue, the journal admits that either type 2 diabetes is not a high-priority disease, or being statistically sound is not a high priority, or being critical about the nature of evidence is not a high-priority issue, or questioning a line of treatment for its absence of supportive evidence is not a high-priority concern for the journal called EVIDENCE BASED MEDICINE.
The actual correspondence is pasted as it is below.
Dear Dr. Watve,
Manuscript ID bmjebm-2022-112095 – “Does sugar control arrest complications in type 2 diabetes? Examining rigor in statistical and causal inference in clinical trials.”
I write you in regards to the manuscript above.
We are sorry to say that we are unable to accept it for publication, as it did not achieve a high enough priority score to enable it to be published in BMJ Evidence-Based Medicine. We have not sent this manuscript for external peer review as in our experience this is unlikely to alter the chances of ultimate acceptance. We are keen to provide authors with a prompt decision to allow them to submit elsewhere without unnecessary delay.
Our decision may be disappointing, especially in view of the lack of a detailed critique. This decision must be based not only on quality but also timeliness and priority against other subject areas.
Thank you for considering BMJ Evidence-Based Medicine for the publication of your research. I hope the outcome of this specific submission will not discourage you from the submission of future manuscripts.
Dr. Juan Franco
Editor in Chief, BMJ Evidence-Based Medicine
Dec 8, 2022
Dear Dr. Franco
Thanks for your prompt response. I just have one request. I would like to have your consent to quote your reply in any article/blog/comments on preprints.
Let me also tell you in what context I would like to quote your reply.
Our paper pointed out many statistical and inferential flaws in a series of clinical trials of glucose normalization treatment to prevent diabetic complications. The nature of the flaws is such that their conclusions become completely invalid, and the current practice of type 2 diabetes treatment becomes questionable. All this is supported by sound analysis of data from 6 systematically selected large clinical trials.
Your response says our arguments “did not achieve a high enough priority score” which implies that
1. Being statistically sound is not a high priority for you.
2. Being critical about the evidence base is not a high priority for you.
3. Questioning a line of treatment based on absence of supportive evidence is not a high priority for you.
Kindly let me know your consent to quote your email in this context.
I believe in complete transparency of the editorial process and therefore expect cooperation from you in this regard.
The journal eLife has declared its new peer review policy, which is a bold experiment in science communication. In a recent editorial (https://elifesciences.org/articles/83889) they declare that eLife will not use peer reviews for a dichotomous accept-reject decision. Instead they will publish every paper that they choose to review, along with the reviewers' comments. Further, they say that the authors will have sufficient freedom to use this peer review to publish elsewhere.
To some extent, this is precisely what I had been saying for quite some time (but with important differences). Of course, many others would have thought that way. As the article says, “Nobody who interacts with the current publishing system thinks it works well, and we all recognize that the way we use it impedes scientific progress”. Since there appears to be a consensus that the current system of science publishing is deeply flawed, there need to be alternative models and there have been some experiments on alternative publishing models. But one thing is lacking.
Science is a human endeavor and is therefore clearly subject to principles of human behavior. Any new system being designed needs to be based on our understanding of behavior. If it is based only on ideology and ignores behavior, it is bound to fail in realizing its objectives. It may become stable and popular, but that is not the measure of success; how far it serves the original purpose is. The central question is how to design a system that will serve the purpose, given the behavioral choices of all stakeholders in the field. The reason the existing system is flawed lies in people's behavior, and a new system can also easily get corrupted for the same reason. It is therefore necessary to analyze the reasons behind the problems in the current system and ask whether we have addressed them in designing the new one. I have written a detailed article, which has been available as a preprint for the last 5 years (https://ecoevorxiv.org/nvpe2/). As predicted in the article itself, it could not have been published by a traditional journal, and the prediction has turned out to be correct so far. In these 5 years my analysis of the behavior of the scientific community has gone much further. Here I will only mention a couple of behaviorally important points relevant to the eLife experiment that has begun.
The committees that decide recruitment, promotions or funding look at where a candidate has published rather than what is published. This is not without reason: journal names and impact factors save them the cost of reading, and reading incurs substantial cost. IFs are popular only because they save the cost of reading; there can be an inexpensive pretense of evaluation without evaluating anything. So although IFs are not scientifically sound, they are behaviorally profitable, and therefore the committees will not give them up easily. eLife's stand of replacing the accept-reject decision with published peer reviews will compel committee members to read research, and they will be most reluctant to do so. For 2-3 decades, committee members have been addicted to the 'evaluate without reading' package, and de-addiction is not going to be easy.
The accept-reject decision cannot be replaced as long as the prestige of the journal matters. The more prestigious journals will be overburdened with submissions and can review only a limited number. So desk rejection will become even more important, and there all the biases caused by the dichotomous decision will return, perhaps in a worse form. eLife itself says, “We will publish every paper that we send out for review”, which means a large number will be rejected without reason, at someone's whim. This decision is bound to be guided by the private cost-benefits of the editor, which is not going to eliminate the existing biases.
There is one more potential contradiction. The elites of science control most of the prestigious journals, and therefore they are not so unhappy about conventional peer review systems. Peer reviews have biases by gender, country, race, reputation and so on, so the underprivileged class of science will find the open peer review system more beneficial. But mostly the underprivileged are also poorly funded; they will not be able to afford author charges of 2000 dollars per paper. So the change may not benefit the ones who are looking for a change. The journal has a facility for waiving charges, but how efficiently it works will decide everything. It is quite likely that the profitability, or even sustainability, of the journal will be compromised if waivers are really given to everyone who needs them. There are more problems with the suggested change. Nevertheless, any experiment is welcome. The risk in doing such experiments without sufficient thinking is that their failure will further strengthen the flawed system once again. Therefore it is necessary to design behavior-based systems right away. In economics and management, designing behavior-informed systems is not a new concept; there is substantial research on it. Why not utilize this in science? And if the field of science itself fails to use novel scientific concepts, who else will?
After a delay of six months, the journal PLOS One returned our manuscript saying that they could not find an academic editor and reviewers for it. PLOS One is a fairly open-minded journal with a team of editors representing a wide diversity of fields, which is why this kind of response is quite surprising. This is only the second time in my life that I have received such a response; the earlier incident was with the journal Biology Direct. Are the two incidents just a matter of rare chance? Or are there specific reasons for them?
One thing common to both is that they were about diabetes, and both highlighted models that deviate substantially from the prevalent mainstream thinking in the field. I think therein lies the reason.
What was our paper about? It pointed out a large number of anomalies in the prevalent theory of glucose dysregulation in type 2 diabetes. It listed dozens of mismatches between the theory and an array of reproducible experimental or epidemiological findings. It also suggested an alternative model that could account for almost every anomaly in a coherent thread of logic. Classically, type 2 diabetes is believed to result from the elusive concept of "insulin resistance" and an inadequate compensatory insulin response. We, on the other hand, argued with sufficient evidence in hand that diabetes begins with vasculopathy. Because of deficient vasculature there is inadequate and defective glucose transport to the brain, which leaves the brain deficient in glucose. Deprived of sufficient glucose, the brain instructs the liver to release more glucose into the blood. Vasculopathy has long been known to be a characteristic of diabetes, but the thinking was that the chronic rise in glucose is the cause of vasculopathy. We are saying the reverse: vasculopathy is the cause of the rise in sugar. There is a clear demonstration that transport of glucose from blood to brain is reduced prior to hyperglycemia, and there also exists published evidence that early signs of vasculopathy appear well before hyperglycemia. Further, ALL the experimental and epidemiological patterns not explained by the insulin resistance theory are explained with complete coherence by the "vasculopathy first" model. Therefore the alternative model looks more promising.
The catch is that if we accept the alternative model, the entire current line of treatment of diabetes becomes redundant. That would lead to the collapse of a trillion-dollar business. But that lies much further ahead in the sequence. Right now we are not over-claiming. We only say in this paper that the alternative model explains almost all the anomalies and therefore needs to be considered seriously and should trigger research along a new line.
How do researchers in a field react to a finding, hypothesis, model or synthesis that directly contradicts the prevalent theory? You would expect them to view the new finding critically, maybe find flaws in the argument, aggressively criticize it, debate it and so on. I am ready to believe that a welcoming response is highly unlikely; it would be natural to expect heavy criticism. This might happen if the new argument is inherently flawed and it is easy to find the flaws in it. But what if the prevalent theory itself is flawed and the new argument is substantially stronger and sound in terms of logic, mathematics and evidence?
From repeated experience I know what the typical response of scientists is, particularly in the field of biomedicine. They prefer to keep mum. They neither accept nor reject any disruptive thinking or evidence. They pretend that they just haven't heard of it. Criticism can be replied to. A debate is likely to take a logical path, so that truth will ultimately prevail with a good chance, if not every time. But the strategy that always defeats novel thinking is silence. When the giants in a community have vested interests in a prevalent theory and someone makes a sound case that it is wrong, they just keep mum and pretend that nobody said anything, that they did not hear anyone saying anything. Under the hierarchical structure of science publishing, this strategy can perhaps never be defeated. The giants in the field can block the new thought from getting published in the flagship journals of the field. They don't care if it gets published anywhere else, because they know nobody reads research anyway. Research is propagated only through a handful of journals, and that too only through the titles and abstracts. Rarely, if ever, are research papers read completely. Quite often the data in a paper contradict the statements in the abstract, but everyone reads only the abstract and therefore the truth remains masked. If we point out a stark difference between the data and the conclusions of a paper, the journal is guaranteed not to respond.
This is not different in principle from the responses of researchers to a disruptive idea described by Thomas Kuhn, albeit with two major differences. One is the difference in the culture of the research fields. Kuhn mostly talked about physics, in which ideas are debated. Debate is not in the culture of biomedicine; it has smarter ways to suppress alternative thinking. The second difference is that Kuhn wrote when peer review was not a mandatory norm in science publishing. Now peer review is another weapon by which any upcoming thought can be swiftly killed. And you need not waste any time in reading and commenting either. Just decline to handle the manuscript and that is enough!! Here is our manuscript in preprint form (https://www.biorxiv.org/content/10.1101/2022.01.19.477014v1), and see below the correspondence with the editors.
Fri, Aug 12, 6:17 PM
Dear Dr. Watve,
I am writing with the difficult news that we have not been able to secure an Academic Editor to handle your manuscript “Hyperglycemia in type 2 diabetes: physiological and clinical implications of a brain centered model” (PONE-D-22-04305). Additionally, we have been unable to secure feedback from peer reviewers. We have therefore reluctantly decided that we must return your manuscript to you without review.
I recognize that this decision will be frustrating — it is our desire to provide every suitable manuscript the opportunity for review and evaluation by experts in the research community — and I sincerely apologize that we have not been able to do so in this case. We have exhausted the pool of potential PLOS ONE Academic Editors qualified to handle your manuscript but have not been able to secure a commitment to handle the submission. We have also invited a number of peer reviewers with relevant expertise, but we have not been able to secure the reviews required to support an editorial decision. We are withdrawing your manuscript from consideration to prevent further delays in the assessment of your submission, and so that you can move forward immediately if you choose to submit your work elsewhere.
Again, I am very sorry not to have more positive news for you. I wish you the best in finding an alternative venue for this work.
Best regards, Emily Chenette Editor-in-Chief PLOS ONE
Sun, Aug 14, 10:23 AM
to PLOS, bcc: Akanksha
I understand the agonies of editors. No issues. But I have one request.
I would like to have your consent to post this letter in the public domain. It is very likely to be a remarkable event in the history of science and students of the history and philosophy of science need to have access to this information. How people in a field react to a paper challenging an existing dogma is a very important question in the history and philosophy of science and making this letter public is extremely essential. Therefore I want to append it to the preprint, as well as write an article about it on my blog on which I have often written about problems in science and science publishing. Link here if you want to view it (https://milindwatve.in/)
We appreciate you reaching out and will be back in touch shortly.
All the best,
Case 07687026 PL#0N3_AR
Mon, Aug 29, 9:20 PM
This is to inform you that since I did not get any reply from you for over two weeks, I am assuming that you have no objection if I publish your letter in any appropriate context, in a respectful manner.
I am happy to receive a rejection of my manuscript. I wrote an MS about the biases in peer review, how some basic principles of human behaviour create these biases, and suggested a behaviour-based system design for scientific publishing that would minimize, if not eliminate, the biases. Anticipating that criticizing peer review would create controversy, I communicated this MS to the Journal of Controversial Ideas (JCI). After almost a year I received a rejection. One of the main reasons for the rejection was that this is not a controversial issue at all: "The idea that peer review is flawed and creates bad incentives is widely held by academics."
This is a unique experience. The paper was rejected because the peer reviewers agree that peer review itself is a bad idea; that is, it was rejected because the reviewers agree with one of my main arguments. They do contradict and strongly disagree with some of my other arguments (and still say that there is no controversial idea in this). I must say that this is one of the rare instances of a thorough and thoughtful peer review I have received. I don't agree with everything the reviewers say, which is fine. But I certainly have much to learn from what they say, and that is not a very common experience. Out of the nearly 100 peer-reviewed papers I have published (which means that many acceptances along with a greater number of rejections), perhaps 20-30 times I received comments that really improved the quality and rigor of the paper. This rejection is certainly one of them. This means that at least sometimes peer reviews really help. In my experience the percentage was about 10%.
Whether I will now communicate the paper to some other peer-reviewed journal, I haven't decided. But I am not too keen, for obvious reasons. If everyone agrees that peer review is weird and flawed, why should we consider only peer-reviewed publications as science? Peer reviews actually have no relevance to science. No doubt they have relevance to making a career in science, because there is a ritual of listing and counting peer-reviewed papers. Every selection, appointment, promotion etc. has to go through this ritual. I am no longer in the race of making a bright career, so I suffer no loss by getting my papers rejected.
But the curious observer in me is not dead, and it won't be as long as I remain cognitively healthy (by medical definitions). So I have a number of questions. If the flawedness of the peer review system is universally accepted and there is no controversy about it, why do we still depend so much on it? If the main pillar of science, namely publication of the outcome, is so flawed, why do we fail to see that it makes the entire field of science flawed? Why have the attempts to change the system so far been half-hearted, ephemeral and almost always failures? While new fields like behaviour-based policy making are thriving, why don't we apply them to science publishing? I did my own behavioural analysis of the different players in scientific publishing and designed an alternative system. It is not necessary that everyone agrees with it. But doesn't it deserve a debate? Shouldn't my ideas be published in order to generate a debate? Why are the people of science running away from addressing the fundamental flaws in the field?
Perhaps I know the answer. There is an in-power group which decides the protocols of science publishing as well as funding. The group that already enjoys the power does not suffer from the flaws. The people who actually suffer under the unfair systems have no say in changing them. This is a vicious cycle, and the powerful people of science are either dumb enough not to see it or they actually want the flaws to perpetuate in order to retain their power. I am open to both possibilities. If there is a third one that you can think of, kindly let me know.
Here are the links where you can access my original manuscript, along with the comments of one of the reviewers who commented on it directly.