A crisis for sure

Richard Horton, the editor-in-chief of The Lancet, recently wrote an editorial, “Research integrity - a challenge not a crisis”. He feels that fraudulent research is not a crisis in the field of science publication. This is, of course, the defensive knee-jerk reaction of an editor. A letter by Grey et al. followed, saying no, it is a crisis. So what constitutes a challenge, and what constitutes a crisis?

May I suggest a simple way to decide what is a challenge and what is a crisis, using a metaphor? In fatal cancer, the cancer cells are still in a minority; the majority of cells are normal and functional. Cancer cells can kill even while they constitute only a small percentage of the total body cells. The body has mechanisms to prevent cancer or launch an immune response, but showing that the mechanisms exist is not enough; they have to work. A growing tumour means they are not working. Keeping a continued vigil is a challenge, but when the vigil falls short and is unable to meet the challenge, it is a crisis. In short, cancer-causing mutations are a challenge, but a growing tumour is a crisis.

The parallel with research integrity is obvious. Like cancer cells, only a minority of researchers may be directly involved in fraudulent research. Also, academic systems have mechanisms to arrest misconduct. Is this sufficient to deny a crisis? The critical question is whether peer review, editorial decisions, integrity committees and platforms like PubPeer really work. Their mere presence is not enough. Is there any way to decide whether the spirit of science is dying out or surviving despite the challenge? Multiple studies have shown the failure of peer review and editorial decisions. The number of retractions is growing, and so is the number of articles expressing concern about misconduct. But it can still be argued that these are exceptional failures of an otherwise working system. Whether misconduct is really growing despite the measures is difficult to decide. So, by the criteria Horton uses, one cannot decide whether to call it a challenge or a crisis. But I have an alternative prognostic marker, and it is easy to measure.

A simple indicator of healthy science is the readiness to respond to questions and challenges. This can easily be checked by studying how many serious questions on platforms like PubPeer evoke a response from the authors. I have not looked at the data systematically, but even a quick glance reveals that most serious questions remain unaddressed. In the last one year I raised about a dozen queries, serious enough to challenge the conclusions of the published papers, and none received any response. This must be true beyond my own experience. This is the reason why Holden Thorp had to write the editorial “Breaking the silence”, in which he appealed to the community to respond to cross-questions, challenges and allegations. But the appeal has obviously failed to break the silence, and Thorp himself has not responded to PubPeer comments on his own article. This indicates that the culture of dialogue and debate, which is so central to science, is on its deathbed. Without open debate, science is dead, no matter how many papers keep flooding the journals.

So the choice of perception is simple. If you think fatal cancer is a crisis, then research integrity is a crisis too, and it has to be treated at the fundamental level. If the editors of big journals and the elites of the field do not feel that there is a crisis, then very soon science is going to be dead, although they may still drape and decorate the corpse.

Zero debate “Science”?

There have been many mega debates on various issues in science that became landmarks in its progress. The debate on the interpretation of quantum mechanics, the Lamarckian-Darwinian evolution debate, the sociobiology debate, the correlation-causation debate: all have been among the celebrated ones. All of them were crucial in deciding the path towards better science, clearer concepts and a passionate pursuit of truth.

Can you name any ongoing debate about any fundamental scientific issue? At least I can’t. But we hear of umpteen fights about research misconduct, fraudulent publications, paper mills, accusations of plagiarism, demands for retraction and court cases based on any of these. Where is academia going?

I have several anecdotes, not about debates, but rather about what should have been debated and wasn’t. The biggest story is that of type 2 diabetes. My group showed with multiple lines of evidence (including reproducible, confirmed experiments, sound mathematical models, epidemiological data, clinical trial data and the exposure of logical anomalies in the existing theory) that the insulin-resistance-based theory behind type 2 diabetes is unsustainable, falsified and utterly wrong. The glucose-normalization focus of treatment has failed to arrest diabetic complications because glucose is not central to diabetic pathology. Many clinical trials say this honestly. Others claim success, but those claims are easily turned down by simply looking at their raw data. We published all this in the form of a series of peer-reviewed research papers, critical reviews, many talks and conference posters, a book, and many PubPeer challenges to misleading clinical trials. I am open to the possibility that I may be wrong. But someone needs to show what is wrong in our stand. What we saw instead was complete silence. No argument, no debate, no challenge, no cross-questioning. Just mum.

This is not the only story. I tried to initiate several debates on different issues, every time being careful to be sound, analytical, evidence-supported, and logically and mathematically rigorous. I have related some of these stories in my earlier blog posts. But nothing happened. In some cases the researchers whom I had directly criticized responded at a personal level, in private emails. (Some, though not all, were kind enough to make their raw data available to me; on analyzing it I could not support their claims.) Their responses often sounded like “explainawaytions” which did not satisfy me, but that is a minor issue. What intrigues me is that they were not ready for any open debate in the public domain. I had not tried the PubPeer platform until about three years ago, but of late I have posted on PubPeer serious questions and issues about many publications in leading journals including Science, Nature Medicine, PLOS Medicine, and Lancet Diabetes and Endocrinology. The issues raised seriously challenged the validity of their conclusions. Again, if I was wrong, they could have shown where I was wrong. If these were inadvertent errors, they could have corrected them. In one of my letters to the editor I wrote, “If it is an inadvertent mistake, it would be appropriate to correct it to avoid misinterpretation by the reader. But if it is meant for intentional misleading of the reader, then it need not be corrected.” AND THEY DID NOT CORRECT!!

I have dozens of such stories, but the top-ranking one is about the editors of Science. Holden Thorp and Meagan Phelan wrote an editorial in Science on 13th February 2025 entitled “Breaking the silence”. Wow. They were saying just what I had in mind: if anyone raises a serious issue about science you have published anywhere, you may or may not agree with the challenger, but you need to respond. “Silence can be detrimental to public trust”, they said further. I could conclude from this editorial that it is not my experience alone that nobody responds; it is a very common phenomenon, and it is detrimental to the spirit of science. Reading the editorial further was somewhat disappointing. I realized that they were not talking about engaging in arguments, debates and challenges; they were focusing mainly on accusations of fraudulent science!! The mind-boggling work of the science sleuths, for which I have admiration, has also brought an undesirable change. Now the organizations working for science integrity, with good intentions, make every retraction a news headline. The bad effect of this is that any issue or question raised, any discussion, debate or challenge, is viewed as an accusation of misconduct. No, we need debates without smelling misconduct. “I think you are mistaken” or “this analysis could have been done in a better way” is not an accusation. It is to be taken constructively. It should initiate a debate. Difference of opinion is an intrinsic part of science. But the ado about misconduct and retractions has unfortunately changed the culture. Just as the number of papers and the JIF are dumb numerical additions to a CV, a PubPeer comment is taken as an equally dumb negative “score”. If you respond to a comment, there will be more conversation on it and the paper will be flagged “seven PubPeer comments on this paper.” Without anyone reading what the comments actually say, this will be taken as detrimental to reputation. This is perhaps one major reason why public debates don’t happen now.

But shouldn’t they? Can science exist without debates?

Nobody listens to me; that I can understand. But shouldn’t an editorial in Science make some difference? Over the months that followed, I had many more experiences that made me understand why nobody listens to the editors of Science. The editorial asserted clearly, “Science responds when questions are raised by the scientific community or the public about its published research papers and counsels authors and institutions to do the same, ensuring that legitimate concerns are addressed. This means being straightforward when there are problems while standing up for papers that are correct.”

On 24th June 2025 Thorp wrote another editorial, to which I responded pointing out (in somewhat sugar-coated words) that the editorial was honest but looked at the problem rather superficially; there is a need to go to the root causes of the academic problems. Science made my e-letter public, but there was no response to the comments. On a subsequent editorial of 18th September 2025, also addressing research integrity, I decided to be more straightforward and said very directly that the thinking was truly superficial. The causes behind research-integrity problems are behavioural, and the fundamental solution lies in redesigning academia and science publishing to make them behaviourally sound: eliminating bad incentives and ensuring that the cost-benefit of genuine science becomes more favourable than the cost-benefit of fraudulent science. Of late, many academic groups have been working on behaviour-optimized system design; I suggested that Science encourage them to design behaviour-optimized academic systems, which would be a fundamental and long-term solution. I also said that the Trump administration has created an opportunity to introspect. Whatever their political motives, the opportunity to rethink is real, and if it is missed, academic reforms will become impossible. This time they did not even make my letter public, forget about giving any thoughtful response. Perhaps my language was too honest, I mean too crude, for Science to publish. Let us assume it was rejected for the language, not for the point made.

Then came a report of a study initiated by the Science editorial team itself. Jeffrey Brainard wrote a story on this report, which said that researchers from different countries and institutions have very different acceptance rates in Science. The most straightforward interpretation of this, along with many other well-designed studies, is that it is due to peer review bias. But most of the Science editors Brainard interviewed, including Holden Thorp, kept avoiding peer review bias as the cause. I wrote a PubPeer comment, referring individually to what each of them said, pointing out that they were playing ostrich: there is ample evidence of large peer review bias, and they were selectively ignoring that evidence. A little before this incident there was another news article in Science where the data again indicated peer review bias, but nobody even considered the possibility. I wrote a PubPeer comment on that too, exposing the cherry-picking in both the studies and their news coverage in Science. The response to this was also complete silence.

This contrasts with the editorial promise of 13th February 2025 that Science would be prompt in engaging with any kind of critical response and would not avoid getting into a debate. In reality, it too has failed to break the silence. Now I know why nobody follows the advice of the Science editors. They don’t really mean it. It’s only lip service.

I have a request for my readers. Can you suggest a sophisticated and sugar-coated word for “hypocrisy”? That word is too honest for the field of academia and science publishing!!

Policy development by citizens: Contribute if you are interested

I have been advocating citizen science against the background that fraudulent and oedematous science publishing is filling up academia and that in many fields better science can be done outside academia. Now I am taking a new step.

I recently had to struggle with policy and realized that policy makers have failed equally in certain fields. I am not trained in fields such as economics, politics, law, policy or governance, but I have strong common sense and some first-hand understanding of reality and of human behaviour. So I have to do the job myself. I have written draft policies in two fields where the existing policy has evidently failed. I have tried to do a thorough and thoughtful job from my end, but I may have failed to see some aspects. I am inviting interested citizens to contribute, by carefully reading my drafts and indicating their willingness to contribute to further development. I am also specifically inviting well-known experts from various fields to contribute, if they are open-minded enough. Other interested readers can indicate on which specific issues they would like to contribute, and I will contact them at the right stage. Write only your name, contact, experience, background (you need not have a formal qualification in that subject) and the area where you would like to contribute. Write in the comments section or email vidnyanak@gmail.com. Avoid writing at length right away; that would make my job messy. It is better that I contact you one by one and interact at length.

The two areas are

(1) Human-wildlife conflict management:

HWC is getting rapidly worse in India. Wildlife researchers have completely failed even in perceiving the reality, forget about suggesting realistic solutions. Farmers are suffering huge losses. Only recently, a report on the impact of wildlife on agricultural economics in Maharashtra was published by the Gokhale Institute of Politics and Economics (GIPE); it is available at https://farmerandwildlife.com/ . It also has provision for comments. Alternatively, access it via my link and write back to the email given above. The study estimates the net agricultural loss to Maharashtra State to be between ten thousand and forty thousand crore rupees per year, and not stagnant at this figure but increasing every year. This is orders of magnitude greater than what wildlifers have been imagining. For the whole of India it could be several lakh crore rupees every year. That is the true cost of wildlife conservation under the existing policy. It is okay if society has to bear a cost for conserving important species, but the question is who pays the cost? It is mostly the farmers, whose voices are never heard. Long-term conservation is just not possible this way. So something must be done. The GIPE document stops here, where the next one begins.

The other part of this study is to suggest a comprehensive policy for conflict management, which in fact becomes a new policy for conservation itself. The draft policy that I wrote is available for study; read it here.

This is likely to be a very controversial and sensitive issue, and therefore I own the responsibility myself. The draft clearly advocates fundamental changes to wildlife protection laws in India. It boldly projects the reality that human-wildlife coexistence is just not possible with a complete ban on hunting. The institutions and NGOs I talked to were reluctant to publish frank, bold and realistic statements because of the fear of getting into controversy. Privately, a majority of individuals, including many wildlife researchers and forest officials, agree with me. I don’t mind being the villain in the eyes of some sectors, if need be. So read carefully and contribute if you can. Also feel free to criticize me, but not based on sentiments; base it on data, study, analysis, and considerations of ecology, economics, law, governance and so on.

(2) Overall farmer policy for India:

This is another sensitive issue, and my views are fundamentally different from what has been talked about so far. My vision of the policy is based on the upcoming concept of behaviour-optimized policy. In this concept everyone is assumed to be inherently selfish, and the policy and implementation system needs to be designed in such a way that when everyone behaves with maximum selfishness, the system still works smoothly and achieves the intended purpose. If you assume the stakeholders or regulators to be honest, the system is bound to fail. If there can be corruption, there will be corruption. Policing and punishment just do not make anyone honest or prevent corruption. In the new concept, selfishness itself should lead to honesty, since the system gives maximum returns to honesty and hard work.

Can we build a corruption-free system without relying on policing and punishment? My answer is yes, and that is what the new concept of behaviour-optimized policy aims at. But agriculture is a vast field and no one can be an expert in everything about it. I am confident about the foundational principles; built on this foundation, other things need to be worked out in detail. I appeal to individuals with interest, experience and expertise (the last being optional) to contribute on the specific issues they have studied or thought about.

This is a novel attempt in itself. Developing a policy from citizens’ inputs is unprecedented to my knowledge. If there is any example, I will be happy to study it and learn from it too. But let us make a beginning. Reading this will be your beginning in the collective venture. It is a long read, but there is no hurry. Take your time.

In either of the fields, I do not assume that the government will listen to us. We are not the policy makers, I know. But this situation is not different from any other field of science. Today in academia nobody reads papers; papers are published only to add to CVs and to get PhDs, jobs, promotions and positions. Since I am not in academia, I work and write only because I am interested. That applies to my policy writing as well. If I receive a positive response from people, I hope one day the policy makers will also be positive about it.

I hope to get as much response as I can read. Feel free to use your local language; use translators and post the original as well as the translated version. I understand English, Hindi, Urdu and Marathi; responses in these languages need not be translated. Feel free to share this on whichever platform you like. I am not interested in credit, but any comments, criticism, feedback or potential contribution needs to flow back to me.

“My name is Khan” phenomenon in medicine:

“My name is Khan, and I am not a terrorist” was a famous line from a 2010 movie. It has a very clear political message. All (most, to be precise) terrorists are Muslims, but all Muslims are not terrorists. Looking at all Muslims with suspicion, treating every Muslim as if they were a terrorist, is ethically, legally and politically wrong, and that is very clear.

But the field of medicine commits a logically equivalent blunder, and nobody says it is wrong there!! I want to point this out only as a logical problem, with no political intentions. Just as some Muslims happen to be terrorists, some type 2 diabetics develop heart, kidney or brain related complications; certainly not all. But we still treat diabetics as if all of them are bound to develop these complications and insist on treating them. This is similar to what China is believed to be doing with the Uighur Muslims. China’s actions came under heavy criticism a few years ago. (I don’t claim to know the reality, and I wonder why they have suddenly stopped talking about it now!) Are the two logically different? If one is unethical, how is the other ethical?

Perhaps diabetic medicine wants to treat everyone to be on the safer side, and that should be good. Not treating them would perhaps be inviting trouble for them, so not treating them should be unethical, shouldn’t it? This is far from reality. Putting together data from dozens of clinical trials and analyzing them carefully shows that glucose-lowering treatment of diabetes, as practised, hardly prevents any of the complications (https://www.qeios.com/read/IH7KEP , https://doi.org/10.1002/14651858.CD015849.pub2 ). A number of trials claim otherwise, but a look at their raw data is sufficient to see that they have tortured the data to arrive at a pre-determined conclusion. In many large-scale trials, the treated group had significantly higher mortality than the controls. Many trials did not find any difference at all. If we cherry-pick only the most “successful” trials, we find an absolute difference of only one or a few percentage points in the incidence of complications. There are clear inferences from this. Even without any treatment, only a small percentage of diabetics develop complications over a span of decades, and treatment at most makes a marginal difference. So how is this different from the “My name is Khan” (MNIK) phenomenon? There too, only a small proportion of the community turns to terrorism, and huge investment in anti-terrorist squads is unable to prevent it entirely.

Treatment might be justified by saying, “But we don’t know who is going to get complications, so it is good to treat everyone.” Then how is it different from suspecting every Muslim of being a terrorist? There too you do not have any a priori knowledge.

Moreover, it is not true that we cannot predict who will get complications. Data clearly show that in all classes of HbA1c, those who are physically fit are unlikely to develop complications (https://pmc.ncbi.nlm.nih.gov/articles/PMC6908414/ ). Physical fitness prevents many types of complications independently of weight loss or glucose control. The odds ratios for mortality across HbA1c categories vary between 1.1 and 2 in different studies, whereas the odds ratios across fitness categories can even exceed 10 (https://pubmed.ncbi.nlm.nih.gov/40569873/ ). So physical fitness is much more important than glucose control. This means that even within the different glucose classes it is possible to judge who is more likely to develop complications and who is not. Then why treat everyone with high blood sugar?
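To make the comparison concrete, here is a minimal sketch in Python of how an odds ratio is computed from a two-by-two table, and how differently an odds ratio of about 1.5 and one above 10 separate the groups. The counts are purely hypothetical, invented only for illustration; they are not taken from the studies cited above.

```python
# Minimal sketch: odds ratios from hypothetical 2x2 tables.
# The counts below are invented for illustration only; they are NOT
# data from the studies cited in the text.

def odds_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Odds ratio of an event in the exposed vs the unexposed group."""
    odds_exposed = exposed_events / (exposed_total - exposed_events)
    odds_unexposed = unexposed_events / (unexposed_total - unexposed_events)
    return odds_exposed / odds_unexposed

# Hypothetical: mortality in high vs low HbA1c (weak separation, OR ~ 1.5)
print(odds_ratio(30, 1000, 20, 1000))   # ~1.52

# Hypothetical: mortality in unfit vs fit groups (strong separation, OR ~ 11)
print(odds_ratio(100, 1000, 10, 1000))  # ~11.0
```

An odds ratio around 1.5 barely separates the two groups, whereas an odds ratio above 10 identifies a subgroup at an order of magnitude higher risk, which is the point being made about fitness versus HbA1c.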

But what is wrong in treating everyone? The answer depends upon the cost-benefit of the treatment. The new-generation drugs, mainly the GLP-1RA drugs, really cost a fortune. Apart from that, there are psychological costs: an impression has been created in the public mind such that being irregular in taking medicine produces a quite unnecessary guilt complex. Even more important, and less well known, is that in certain contexts the drugs are dangerous. In particular, stringent sugar control in some trials resulted in greater mortality than in the control arm (https://pubmed.ncbi.nlm.nih.gov/18539917/ ; https://www.nejm.org/doi/full/10.1056/NEJMoa0810625 ; https://pubpeer.com/publications/417DE03905005C28E226F823C2AF63 ). Why do we still insist that everyone needs to be treated?

The answer is very clear to me. The reason we do not treat every Muslim as a terrorist is that so many of them are an intricate part of the socio-economic machine. They are in responsible positions, often doing good work, and are not easily replaceable. Wherever communities are intricately linked and networked in daily functions and economics, it is beneficial for the society and for the state not to isolate any community for any reason. Perhaps Israel thinks it can do without the Palestinian community, and so its behaviour is different. Ultimately, what is beneficial to a state or a society in a given context at a given time decides what it politically considers ethical. Similarly, in medicine the benefit matters. Treating everyone with ineffective drugs for a lifetime is beneficial for the pharma companies, so it is recommended and considered ethical. Ultimately, cost-benefit calculations matter. Everyone cares about selfish benefits, but sometimes it is possible to fool others, and that is the main use of ethics as commonly practised. Both doctors and patients are fooled into believing that not treating a diabetic is unethical. This does not mean that selflessness or truly ethical behaviour does not or cannot exist. It does, but always in a minority. More commonly, the rules of ethics are decided by the benefit of someone who succeeds in fooling others on a large scale. As long as people, including practising physicians, remain fools, the MNIK phenomenon will continue to exist in medicine.

Twenty-five years of interest in type 2 diabetes

My curious interest in type 2 diabetes was triggered by some discussions at Katta around mid-2000, if I remember correctly. I looked at it as an evolutionary biologist and tried to interpret it afresh. Over the years my own, entirely novel interpretation developed, which I published in a series of over 25 research papers and two books. This journey changed my life, my science, and my perspective on and relationship with academia. But did it change the mainstream research in the field? The answer is a big no. It remains as confused as ever, laden with serious anomalies in the underlying theories, unscientific beliefs, data manipulations and purposeful misleading of doctors and patients. The entire field is pseudoscience and nobody cares.

As soon as I started browsing through the literature, I was struck by the horrendous anomalies in the field: reproducible experiments had proved beyond doubt that the foundational beliefs were wrong, simple mathematical models showed the impossibility of the prevalent theory, and on top of it all there was the complete failure to cure diabetes or to prevent diabetic complications effectively and consistently.

The anomalies that were already there were the following. Inducing insulin resistance by gene knockouts does not result in the consequences that the theory expects: neither insulin nor glucose levels go out of range under induced insulin resistance. Suppressing insulin production or release in an insulin-resistant state does not increase fasting glucose, in rodent as well as human experiments. During the development of T2D, hyperinsulinemia does not appear to be a response to insulin resistance, as the theory says. Hyperinsulinemia precedes insulin resistance, and experimentally suppressing hyperinsulinemia brings down insulin resistance while glucose remains normal. All these experiments have been reproducible and are more than sufficient to show conclusively that the prevalent theory is absolutely wrong. All this has been very much there in the mainstream literature, in high-impact journals.

Then there are anomalies that I pointed out. The clinical definition of insulin resistance is circular and non-falsifiable. Compensatory fasting hyperinsulinemia with normoglycemia is illogical and mathematically impossible. If you put together all the experimentally demonstrated links to and from glucose and insulin, a network of known causal links can be constructed; neither glucose nor insulin is central to this network, and normalizing these two is unable to cure diabetes even in a theoretical model. The older evolutionary thinking that a “thrifty” tendency developed as an adaptation to feast and famine is neither ecologically nor physiologically sound: humans do not show any physiological characteristics of being thrifty. Fat cells do not induce insulin resistance; in fact the most abundant signal molecule secreted by fat cells is actually insulin-sensitizing by the popular definition. The diet theories, including high-fat, high-carb, intermittent fasting, timing of eating and the like, are full of mutually contradictory data.

On top of this, no clinical trial has conclusively shown that normalizing glucose can arrest diabetic complications. Clinical trials are full of bad experimental designs and all the signs of data twisting, conclusion spinning, unscientific data handling and purposeful misleading. Glucose is not central to T2D, and therefore normalizing glucose is not even theoretically expected to avoid diabetic complications or reduce mortality. But this rhetoric is repeated like a religious text, and all treatment focuses on reducing glucose, which is not going to help anyway.

Showing all this with reproducible experiments, rigorous data analysis, sound theory and mathematical models, and publishing it in every possible form, has had absolutely no effect on the religion of type 2 diabetes. There was no counterargument to what we showed and published. I gave talks in places like the Joslin Diabetes Centre and OCDEM arguing that the prevalent theory has been proved wrong by multiple lines of evidence. The Joslin talk was well attended by everyone, including the Director. There was no cross-questioning or counterargument after my talk. More than one personal response later was along the lines of: the argument is not new; we all know the theory is wrong; it is just that you said it openly and others don’t.

Not only did my group point out the glaring anomalies, we also proposed an alternative theory, which goes like this. We evolved as hunter-gatherers, and our physiology evolved to support the behaviours necessary for that lifestyle. Many of those behaviours are simply missing in the modern lifestyle. These behaviours have been experimentally demonstrated to be linked with many growth factors, angiogenic factors, neurotransmitters and other signal molecules, and the deficiency of these behaviours has multiple well-demonstrated physiological effects. For example, altered expression of angiogenic factors because of altered behaviour leads to reduced capillary density and endothelial dysfunction. This reduces the glucose supply to the brain (again experimentally demonstrated). When the brain receives subnormal glucose, it instructs the liver to synthesize more of it; that is why there is fasting hyperglycemia. This has nothing to do with insulin resistance. Vascular dysfunction is primary and the glucose change is only consequential. Therefore bringing blood glucose to normal does no good; getting the growth factors and other signals back to normal is the solution. This is easy to do through sport, because all sport is essentially an attempt to get back the hunting and fighting behaviours. Exercise is useful not because it burns calories but because it brings back some critical missing behaviours and their neuroendocrine correlates. This theory logically and mathematically explains 19 major anomalies in which the classical theory was muddled. Getting funding for a new idea, that too from a person away from the power centres of the field, is impossible; we did try and failed to get funded to work further on the hypotheses. But there already existed substantial support for these ideas in the literature.

I would have welcomed any criticism of my arguments. I would have taken back my statements if they had been shown to be wrong. But nobody did this. They knew any counterargument would put them in deeper trouble. It is better to pretend that they are simply not aware of any such arguments, that the multiple serious anomalies just don’t exist, and everyone is happy with the ongoing pseudoscience. And the field continues with a theory disproven decades ago, creating new waves by beating the drums of a new drug whose clinical trials actually show no effect. Currently, with volunteer researchers, I am examining the raw data of clinical trials and am astonished at how commonly they trample well-known principles of statistics to claim support for their already decided conclusions.

In short, the entire field of type 2 diabetes is pseudoscience, and all the people supporting it call themselves scientists and enjoy fat salaries, perks, prestige and positions. With this, my perspective on academia changed completely. I no longer look at people in academia with respect. They have sold their souls to funders. Publication metrics and funding prospects have completely overtaken basic curiosity and research integrity. I personally gained a lot from the experience, so I am grateful to everyone. Understanding of science is complete only when you also learn how not to do it, and these people helped me expand my understanding of science. I realized that the principles of science alone are not enough; human behaviour is an intrinsic part of it, and this behaviour is not at all different from that in war, politics, power and business. Power is more important than fairness and truth even in the field of science. But once in a while someone follows the truth. It may be ignored for a long time, and maybe someone rediscovers it later. The history of science is full of such examples, and nothing has changed.

Yes, it is intentional misleading. The authors and editors seem to agree.

When one finds problems in the statistical analysis of a paper, serious enough to invalidate the conclusions drawn, what should the reader do?

It is in the right spirit of science to assume that this might be due to oversight. To err is human, and researchers are humans to begin with. If a reader notes serious problems in a paper, it is necessary to point them out with the expectation that the authors and/or editors respond. Science would welcome two types of responses. (i) They disagree with you: what you perceive as a serious mistake is not really a mistake, and they have a sound justification for what they did. If this is the case, they need to justify it in sufficient detail. Often the difference of opinion may not resolve; in that case, both sides need to be made transparent to the reader. That is the responsibility of the editor. (ii) The authors realize that there is some problem with their analysis or inferential logic; then they need to correct the analysis or clearly state the limitations of the inference by publishing a correction to the paper. Both these responses are completely in the spirit of science and should be welcome.

There are two more possibilities that are not really in the spirit of science. (iii) The third possibility is that the authors have neither a sound justification nor the readiness to correct themselves, but the editors realize the gravity of the problem and decide to retract the paper. This happens quite often, but not easily. Retraction is treated as serious damage to prestige and reputation by the authors, their institutes, the editors and the journals, so they try to avoid, postpone or cover up the problem. (iv) The fourth possibility is that the conclusions drawn from the flawed analysis were published with a deliberate intention to mislead the readers. If this is so, they will certainly not publish any correction, because that would go against the very purpose of publishing. Nor will they have any justification for what they did, because it was misleading anyway.

For either the third or the fourth reason, editors are reluctant to take any action, or they just keep delaying it until the paper is old enough that readers have lost interest in reading any correction, even if one is published. By that time the popular scientific perception has accepted the misleading direction, and it is tough to change. The publication of a correction is a low-key event; by then the purpose of the deliberate misleading has been served. If the editors do not take any action, the sleuths still have the option of posting their cross-questions on platforms such as PubPeer. If the authors think they are not wrong, they can publish a rebuttal on PubPeer. If they accept the problem, they can publish a correction. If none of this happens, we are left with the conclusion that the authors as well as the editors deliberately intend to mislead the readers.

In certain fields of science, misleading has great benefits. The field of medicine, in particular, is prone to this because of the millions of dollars of possible profits involved. After spending huge amounts on developing a drug, if a clinical trial does not show it to be sufficiently effective, accepting the result leads to huge losses for the pharma industry. In that case, creating an impression that the drug is effective is necessary for the business. And it is not very difficult to mislead the medical community, because either they do not understand the scientific method and inferential logic, or they simply do not have time to waste on being careful about what they accept as science. People in academia certainly have the expertise but have no motivation: their career progress depends upon how many high-impact papers they themselves publish, and hardly any credit goes to exposing frauds in the field. The peril of everyone being busy making a successful career is that a lot of perverted science gets published and nobody cares.

What are the most common ways of misleading people in the field of clinical trials, particularly trials for lifestyle-related disorders? There seem to be a few common tricks used repeatedly.

  1. Multiple statistical comparisons without correction: Statistical inferences are probabilistic; there is always a small chance that your inference is wrong. When you do hundreds of statistical tests in a single study, at least a few of them will turn out “significant”, but this could be just a result of having done a large number of tests, each with a small chance of being wrong. This is a well-recognized problem in statistics. Solutions have been suggested, though none is free of problems; but not having a good solution is no justification for hiding or disowning the problem itself. Most clinical trials in lifestyle-related disorders typically use two strategies to take advantage of multiple testing and then hide or disown the problem (a small simulation after this list illustrates how quickly chance “significance” accumulates).
    • a. Register a large number of clinical trials. Not every trial gives the results you want. Publish the ones in which you get favourable results; don’t publish those that give inconvenient results. Among the type 2 diabetes clinical trials registered on https://clinicaltrials.gov/ , only about one third of the completed trials have made their results public. Two thirds remain silent about what they found.
    • b. A given trial looks at a large number of outcomes, then categorizes them by sex, age group, BMI group and other possible subgroups, so the total number of tests typically performed runs into hundreds and sometimes even thousands. Then the ones that are significant in the expected direction are prominently reported. It is almost guaranteed that a few will turn out significant by chance alone, and this is enough to start beating the drums that the drug is effective.
  2. Selective reporting and different formats of reporting: Just as some of the tests turn out significant in the expected direction, a few are significant in the other direction. When this happens, they do one of two things: they either just don’t report the inconvenient ones, or they report them in a way that creates a misleading impression. For example, they use indices of relative risk reduction (such as the odds ratio or hazard ratio) while reporting the desirable effects of the drug, but absolute risk reduction indices while reporting its undesirable effects. Absolute risk reduction generally yields smaller-looking numbers than relative risk reduction, so the reader is made to think that the good effects of the drug are large and the bad effects small (a worked numerical example follows after this list).
  3. Convenient subgroups and subtotalling: Subgroups are made by convenience and their results are reported selectively. For example, they may total many subcategories when that gives the expected picture, but report the subcategories separately when that is more convenient. Adverse events where they get the expected results are called “serious adverse events”; the ones in which the results go the other way are called non-serious adverse events, and this classification is not accompanied by any clear definition of the word “serious”. There are no standard guidelines on how to report such statistics, and in effect a systematically misleading path is taken.
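To illustrate point 1, here is a minimal simulation sketch in Python. The numbers of tests and participants are purely hypothetical, and the data are generated with no real effect at all; a noticeable number of tests still come out “significant” at the conventional 5% level.

```python
# Minimal sketch of the multiple-comparisons problem (point 1 above).
# We simulate a trial with NO real effect and run many independent tests;
# a noticeable fraction still comes out "significant" at p < 0.05.
# All numbers are hypothetical and chosen only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 300          # e.g. outcomes x subgroups, as described above
n_per_arm = 200        # participants per arm in each comparison

false_positives = 0
for _ in range(n_tests):
    treated = rng.normal(0.0, 1.0, n_per_arm)   # no true difference
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests 'significant' by chance alone")
# Expected ~ 0.05 * 300 = 15 spurious "findings", enough for a headline
# unless a multiple-comparison correction (e.g. Bonferroni) is applied.
```

Roughly fifteen spurious “findings” are expected from three hundred tests on pure noise, which is exactly why selective reporting of uncorrected comparisons can manufacture an apparently effective drug.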

These are the common tricks used for painting a picture that the drug is effective. Then more ways to mislead are concocted in a context-specific manner; by very intelligently twisting the data they reach conclusions that were decided in advance. Sometimes raw data are available or are made available on request. I analyzed the raw data in many such cases, and my analysis did not support their conclusions at all. The published conclusions could not have been reached without cherry-picking or twisting the data in some way or another, and there are so many ways in which data can be twisted.
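The relative-versus-absolute trick in point 2 is easy to show with numbers. The sketch below uses hypothetical event counts, not data from any specific trial: the same benefit can be advertised as a large relative risk reduction while the absolute difference stays tiny, and a harm of the same absolute size can be made to look negligible by reporting it in absolute terms.

```python
# Sketch of how relative vs absolute risk framing changes the impression.
# Hypothetical trial: 1000 patients per arm; all counts invented for illustration.

def risks(events_treated, events_control, n=1000):
    rt, rc = events_treated / n, events_control / n
    arr = rc - rt                      # absolute risk reduction
    rrr = arr / rc                     # relative risk reduction
    return rt, rc, arr, rrr

# "Desirable" outcome, reported in relative terms:
rt, rc, arr, rrr = risks(20, 30)
print(f"Benefit: RRR = {rrr:.0%} (sounds big), ARR = {arr:.1%}, i.e. ~1 person helped per {1/arr:.0f} treated")

# "Undesirable" outcome of the same absolute size, reported in absolute terms:
rt, rc, ari, rri = risks(30, 20)       # risk increases under treatment
print(f"Harm: absolute increase = {-ari:.1%} (sounds small), relative increase = {-rri:.0%}")
```

The benefit and the harm here differ by exactly one percentage point in absolute terms, yet the chosen framing makes one look like a 33% improvement and the other like a negligible 1% side effect.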

Below I give the details of three clinical trials published in highly reputed journals where we detected serious flaws in the published analysis, wrote to the authors and editors, and ultimately made our concerns public on PubPeer. To begin with, we were open to all four possibilities. The authors could have counter-argued to show that they were not wrong; they could have published corrected versions; if the authors did neither, the editors could have retracted the papers. But none of these things happened. Both authors and editors kept mum and did nothing except make some promises that they would consider the cross-questions seriously. From the response (or lack of it) of the authors and editors we could differentiate between the four possibilities above. After careful scrutiny, in all three cases we are compelled to draw the conclusion that both the authors and the editors clearly intend to mislead people. Perhaps this is quite representative of clinical trials; we have further reasons to believe that a significant proportion of clinical trials have been systematically fooling people all along.

The first case is that of a paper in Lancet Diabetes and Endocrinology. Our PubPeer comment on it and the prior correspondence with the editors are at these two links.

The PubPeer comments were published in August 2024. The authors had ample time to respond, but they didn’t.

The second case is about a paper in PLOS Medicine. This paper was clearly hiding inconvenient results, although the raw data could reveal them. We wrote to the editors in August 2024; they initially responded positively and asked the authors for clarifications. We have no idea whether the authors responded, because the editors suddenly stopped all correspondence. Ultimately we published our comments on PubPeer in February 2025, to which the authors have not responded. Links to the correspondence with the PLOS Medicine editors and to the PubPeer comments, respectively, are here.

Recently, with some colleagues, I started looking at the GLP-1RA trials, about which there is much brouhaha. They are being projected as wonder drugs effective against anything you can name on earth. Looking at the raw data shows that these clinical trials suffer from the same set of problems, which make their inferences invalid. On one paper published in Nature Medicine and another in NEJM, both from the SELECT trial, we wrote PubPeer comments in February 2025, and the authors haven’t responded as yet.

Interestingly, in the letter responding to the PLOS Medicine paper and in the PubPeer comments on the Nature Medicine paper, after requesting a correction we wrote, “However, corrections need not be made if the misleading is intentional since the purpose is served.” Neither set of authors made any correction or counter-argued in defence of their analysis in any form. This is a clear admission that the misleading was intentional. Since the editors also did nothing, it is clear that even the editors of such prestigious journals are interested in deliberately misleading people.

Perhaps intentional misleading is common across clinical trials. Earlier, I published my analysis of multiple clinical trials related to type 2 diabetes. There were many responses to that article, all in the public domain, mostly supporting our arguments. Not a single response came from the authors of the original papers (https://www.qeios.com/read/IH7KEP ).

If anyone has any doubt about intentional misleading, please contact the authors of these papers or editors of the respective journals for confirmation.

April 11, 2025


Bad science soaring high

Vulture populations have declined, but bad science has taken their place in soaring high.

Science is supposed to have a set of principles, and researchers are expected to follow them. Further, science is said to be self-correcting, and open-minded debate is important for self-correction to happen. The human mind has multiple inherent biases, and scientists are no exception. Therefore conscious effort is needed to overcome the biases and follow the principles of science. It is often difficult for an individual to ensure this alone, and others’ opinions help. But much of the science being published today does not seem to believe in this. On top of the inherent biases there are bad incentives created by publish-or-perish pressure, bibliometric indices, rankings and the like. For several reasons peer review fails to give the expected corrective input, and often it actually adds to the biases. An open peer review system and open debate are the only things I can see that might minimize, if not eliminate, bad science.

There is a vulture-related story of bad science soaring high, but before describing it let me illustrate the background with some anecdotes. Bad statistics, flawed logic, badly designed experiments, inappropriate analysis and cooked-up data are just too common in papers in high-impact journals coming from elite institutions. Because of the high prestige of the journals and institutions, bad science coming from there enjoys impunity, and any cross-questions or challenges are suppressed by all means. Here are a few of my own experiences.

In 2010 a paper appeared in Current Biology from a Harvard group. A little later the corresponding author emailed me asking my opinion, because the paper contradicted our earlier PNAS paper. I was happy, because adversarial collaboration (if not collaboration, then at least a dialogue) is good for science. So a series of email exchanges began. At some stage I said I would like to have a look at their raw data; they had no problem sharing it. It was huge, and they had used a program written specifically to analyze it. We started looking at it manually, although it appeared to be an impossible task. But very soon we discovered that there were too many flaws in the data itself. The program was apparently faulty, picking up wrong signals and therefore reaching wrong conclusions. We raised a number of questions and asked the Harvard group for explanations. At this stage they suddenly stopped all communication. They had taken the initiative in starting a dialogue, but when their own flaws started surfacing, they went silent. We got no explanation for the apparent irregularities in the data. Interested readers will find the original email exchanges here (https://drive.google.com/file/d/164Jo15ydGgmCL4XvAwvivpnYtjoAagMQ/view?usp=drive_link ). At that time PubPeer and other platforms for raising such issues were not established, retraction was very rare, and we did not think of these possibilities. We did not pursue the case with the journal for retraction. The obviously flawed paper remains unquestioned to this day.

In 2017 a paper appeared in Science claiming that cancer is purely bad luck, implying that nothing can be done to prevent it. It came from a celebrity in cancer research, but had a very stupid fundamental mistake. The backbone of their argument was a correlation across different tissues between the log number of stem cell divisions and the log incidence of cancer. They said a linear relationship between the two indicates that only the probability of mutation matters. The problem with the argument is that a linear regression on a log-log plot implies a linear relationship in the original scale only if the slope is close to 1. Their slope was far from 1, so the data actually showed a non-linear relationship, but they continued to argue as if the relationship were linear. Later, using the same data set, we showed that cancers are not mutation limited but selection limited, an inference diametrically opposite to theirs (https://www.nature.com/articles/s41598-020-61046-7 ). But we had a hard time publishing this because we were directly challenging a giant in the field.
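The point about the slope takes one line of algebra: a fitted relation log(y) = a + b·log(x) means y = e^a · x^b, which is a straight line in the original scale only when b = 1. The sketch below, on made-up data generated with an exponent of 0.5 (not the actual data from that paper), shows that an excellent straight-line fit in log-log space can still describe a strongly non-linear power law.

```python
# Sketch: a good linear fit on a log-log plot does NOT imply a linear
# relationship in the original scale unless the slope is close to 1.
# log(y) = a + b*log(x)  <=>  y = exp(a) * x**b   (a power law)
import numpy as np

rng = np.random.default_rng(1)
x = np.logspace(0, 6, 50)                      # hypothetical "stem cell divisions"
y = 1e-3 * x**0.5 * rng.lognormal(0, 0.1, 50)  # true power law with exponent 0.5

b, a = np.polyfit(np.log(x), np.log(y), 1)     # slope and intercept on log-log scale
print(f"log-log slope b = {b:.2f}")            # ~0.5, far from 1

# Despite an excellent straight-line fit in log-log space, doubling x
# multiplies y by 2**b ~ 1.4, not 2: the relationship is non-linear.
```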

A long-standing belief is that in type 2 diabetes, controlling blood sugar arrests diabetic complications. Going by the raw data, we do not find convincing support for this notion in any clinical trial, but many published papers still keep claiming it repeatedly. To do so they need to violate many well-known principles of statistics, which they do coolly, and publish in journals of high repute. We challenged this collectively (https://www.qeios.com/read/IH7KEP) as well as specifically challenging two recent publications that had obviously twisted their data, one published in Lancet Diabetes and Endocrinology and the other in PLOS Medicine. The editors of Lancet D & E refused to publish our comments (at first without giving reasons; on our insisting, they gave very flimsy and illogical ones), and the other case is still hanging. We then opened a dialogue on PubPeer, to which the authors haven’t responded. The reasons the Lancet D & E reviewer gave for rejecting our letter are so stupid that they cannot offer the same defence on PubPeer, because it is open. In confidential peer review the reviews don’t have to be logical; reviewers can easily get away with illogical statements, as this case demonstrates well. The entire correspondence is available here (https://drive.google.com/file/d/1XNzxif4ybJdgAQ4YmiKg_6mSqZGh2Mn1/view?usp=drive_link ).

The case of whether lockdowns helped arrest the spread of infection during the Covid-19 pandemic is funnier. Just a few months before the pandemic, the WHO had published a report, based on multiple independent studies, on the extent to which closing schools or offices, travel bans and similar measures can arrest the transmission of respiratory infections (https://www.who.int/publications/i/item/non-pharmaceutical-public-health-measuresfor-mitigating-the-risk-and-impact-of-epidemic-and-pandemic-influenza ). The report clearly concludes that such measures are ineffective and therefore not recommended. After the pandemic began, within just a few months, many leading journals published papers showing that lockdowns are effective; the entire tide turned in just a few months. All the hurriedly published papers have several flaws which nobody pointed out, for fear of being politically incorrect. Our analysis, on the other hand, indicated that lockdowns were hardly effective in arresting transmission (https://www.currentscience.ac.in/Volumes/122/09/1081.pdf ).

That repeated waves of infectious disease are caused by new viral variants is a common belief that has never been tested by rejecting the null hypothesis that a wave and a variant arise independently and get associated by chance alone (https://milindwatve.in/2024/01/05/covid-19-pandemic-what-they-believe-and-what-data-actually-show/ ). Our ongoing attempts to reject this null hypothesis with respect to Covid-19 data have failed so far. In the absence of rejection of an appropriate null hypothesis, the claim that repeated surges are caused by newly arising variants is no more than a religious belief. But it still constitutes mainstream thinking in the field.
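For readers who want to see what such a test could look like, here is one minimal sketch of a permutation-style test of the independence null hypothesis. All the dates and the 60-day window are hypothetical placeholders, chosen only to make the sketch runnable; it is not our actual analysis or the real Covid-19 data.

```python
# Sketch of the null-hypothesis test described above: do infection waves and
# new variants co-occur more often than expected if they arose independently?
# Dates (in days from an arbitrary origin) are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(2)
study_days = 1000
wave_onsets = np.array([120, 350, 600, 830])        # hypothetical wave start days
variant_emergence = np.array([100, 340, 580, 900])  # hypothetical variant detection days
window = 60                                         # a variant "precedes" a wave if it
                                                    # emerged within 60 days before onset

def n_preceded(waves, variants, window):
    """Number of waves preceded by at least one variant within the window."""
    return sum(any(0 <= w - v <= window for v in variants) for w in waves)

observed = n_preceded(wave_onsets, variant_emergence, window)

# Null model: variant emergence times scattered uniformly and independently in time.
null_counts = np.array([
    n_preceded(wave_onsets, rng.uniform(0, study_days, len(variant_emergence)), window)
    for _ in range(10_000)
])
p_value = (null_counts >= observed).mean()
print(f"observed = {observed}, p under independence = {p_value:.3f}")
```

Only if the observed association is clearly more frequent than what this kind of null model produces can the wave-variant link be called anything more than coincidence.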

I suspect this is happening rampantly in many more areas today. My sample size is small and obviously restricted to a few fields of my interest, but within that small sample I keep coming across so many examples of bad science published in leading journals that I haven’t yet been able to articulate them all. Other fields where statistics is commonly being misused include the clinical trials of the new anti-obesity drugs, namely the GLP-1 receptor agonists; all the flaws of the diabetes clinical trials are present in these papers too. Worse is the debate on what constitutes a good diet that can keep away diabetes, hypertension, CVD and the like. Perhaps the same is happening in many papers related to the effects of climate change, the effect of social media on mental health, human-wildlife conflict and so many other sentimentally driven issues.

Added to this series is now the paper by Frank and Sudarshan in the American Economic Review (https://www.aeaweb.org/articles?id=10.1257/aer.20230016 ), which claims that the decline of vulture populations in India caused half a million excess human deaths during 2000-2005 alone. The paper was given news coverage by Science (https://www.science.org/content/article/loss-india-s-vultures-may-have-led-deaths-half-million-people ) months before its publication, and because Science covered it, it became a headline in the global media. The claim has perfect headline-hunting properties, and even Science falls prey to this temptation. Interestingly, the data on which the claim is based were criticized by Science itself a couple of years ago: at that time, for estimating Covid-19 deaths in India, India’s death-record data were dubbed unreliable; now the same data become reliable when the inference can make attractive headlines. The other data sets used by the authors are equally or even more unreliable, and the methods of analysis are also questionable. Everything put together makes a good case for intentional misleading, and all the reasons why I say this are available here (https://www.qeios.com/read/K0SBDO ).

What is more interesting is the way it happened. On realizing that the analysis in this paper has multiple questionable elements, we looked at the journal where it was to be published. Most journals have a section called letters or correspondence where readers can cross-question, raise issues or comment in any other form on a published article. The American Economic Review, where this paper was accepted for publication, has no such provision. This is strange and very clearly against the spirit of science. Suppressing debate reduces the chances of science being self-correcting, and in the absence of a self-correcting element science is no different from religion. So logically AER is not a scientific journal but a religious one; let us accept that. Nevertheless, we wrote to the editor that we wanted to raise certain issues. The editor advised us to correspond with the authors directly, and we wrote to the authors. Unlike the Lancet D & E and the other examples mentioned above, in this case the authors were responsible enough to reply promptly. There were a few exchanges, at the end of which we agreed on a few things, but the main difference did not get resolved. This is fair; disagreement is a normal part of science. But if a difference of opinion remains, it is necessary that the arguments on both sides be made available to the reader.

The authors were also kind enough to make links to their raw data available to us. This entire correspondence is here (https://drive.google.com/file/d/1d91UzBCMAY9Q3Nu5Yc5_Iqm7ycWPmTWR/view?usp=sharing ). We analyzed their data using a somewhat different but statistically sound approach and did not reach the same inference. It is normal for a given question to be addressable by more than one equally valid analytical approach; robust patterns give the same inference by any of them. If a result is significant with one approach but not with another, it is an inherently fragile inference, and that is what this one turned out to be. We tried again with AER. They asked us to pay 200 dollars as a “submission fee”, and that is the most interesting part of the story. The original peer review of the paper was obviously weak, because none of the issues we raised seems to have surfaced in it. I am sure the journal did not pay those reviewers. What we did was itself an independent peer review, and for this we were charged $200!! We paid in the interest of science, although we knew the journal would not publish our comments; it was necessary to see on what basis it would reject them. Apparently one reviewer commented on our letter. We had raised about 11 issues, of which the reviewer mentions only 3, saying they are not convincing, without giving reasons why. There is no mention of the rest of the issues raised, obviously because they had no answers to them. This is clearly cherry-picking. Where the reviewers themselves cherry-pick, what else can we expect from the published articles? For interested readers, the entire correspondence related to the rejection is available here (https://drive.google.com/file/d/1mq2e8sKiYKUMNtUjQKnleaAPm3a8oDZO/view?usp=sharing ).

The authors seem to have incorporated some corrections in response to our correspondence (without acknowledging us, but that is a minor issue), yet the main problem remained unresolved. We have now independently made our comments public on a platform that is open to responses from anyone. This openness is the primary requirement of science, and if it is blocked the entire purpose of publishing becomes questionable.

The importance of this incident goes far beyond the vulture argument. It is about the way science is practiced in academia. In all the stories mentioned above I am talking only about statistical mistakes that seriously challenge the inference; there can be many mistakes that do not change the major conclusions, and I am not talking about those. I am a small man with a very small reach. If in my casual exploration I came across so many cases of misleading science, how large must the real problem be?

Last year over 10,000 papers were retracted, and 2024 is likely to see a much larger number. The most commonly detected problem is image manipulation, and that is because methods to detect image manipulation are more or less standardized. Detecting and exposing fraud has become one of the most important activities for science. The Einstein Foundation Award going to Elisabeth Bik (https://www.einsteinfoundation.de/en/award ) for her relentless whistle-blowing endorses this fact: scientists exposing bad science are most valuable to science. But the number of retractions is still the tip of the iceberg, and only for certain types of fraud. How many papers should be retracted for misleading statistics? How many for claiming causal inference without actually having causal evidence? Nobody has any idea. A paper that appeared just last week shows that as many as 30% of papers published in Nature, Science and PNAS rest on statistical significance that is marginal and fragile (https://www.biorxiv.org/content/10.1101/2024.10.30.621016v1). Many others use unreliable or cherry-picked data, use indices or protocols inappropriate for the context, or ignore violations of the assumptions behind a statistical test. Adding it all together, my wild guess is that two-thirds to three-fourths of the science published in top-ranking journals (perhaps more) will turn out to be useless. I am not the first to say this; it has been said by influential and well-placed scientists multiple times (for example https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/ ).
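To see what "marginal and fragile" significance means in concrete terms, here is a small Python sketch on data I simulated myself (nothing to do with the paper cited above): a weak effect is tested once on the full sample, and then re-tested after dropping one observation at a time, to show how sensitive a verdict of "significant at 0.05" can be to single data points.

    # Minimal sketch, on simulated data, of how fragile a borderline significance
    # verdict can be: re-run the same test leaving out one observation at a time
    # and count how often the p-value stays below 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.normal(0.25, 1.0, 60)        # a weak effect, likely near the 0.05 border

    p_full = stats.ttest_1samp(x, 0).pvalue
    print(f"p-value on the full sample: {p_full:.3f}")

    survives = sum(
        stats.ttest_1samp(np.delete(x, i), 0).pvalue < 0.05
        for i in range(len(x))
    )
    print(f"significant at 0.05 in {survives} of {len(x)} leave-one-out re-analyses")

A robust finding would survive nearly all such re-analyses; a marginal one often does not, which is exactly why a p-value just under 0.05 should not be treated as a final verdict.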

In spite of this dire state of affairs, bad science continues to get published in big journals, coming from elite institutions. Why does this happen? I think the reason is that career and personal success overpower the interest in science. Scientists are more interested in having high-impact publications to their names than in revealing the nature of reality, addressing interesting questions, ensuring robust and reproducible outcomes, and seeking genuinely useful solutions. Headline hunting takes priority over gaining true insights, and it is pursued at any cost and by any means. Even flagship journals like Science seem more interested in making headlines than in supporting real science.

The other question is why readers are fooled so easily. I think two factors are involved. First, academics themselves have no incentive to check reproducibility, access raw data, or do independent analyses. This type of work does not yield high-impact, prestigious publications, and is therefore a waste of time for them. Whether the quality of science is affected is no longer a concern for academics. What academics will never do can be done by citizen scientists. But academic publishers do anything and everything to deter them from being vigilant. Look at the behaviour of AER, which charged us $200 for a futile attempt to cross-question misleading science. Citizen scientists are by default treated badly, as an inferior or untouchable caste. The individual cost-benefit calculus of initiating a healthy debate is completely discouraging. As a result science is viewed as a monopoly of academics and others are effectively kept away. The net result is that science is losing trust. I no longer consider science published in big journals and coming from elite institutions reliable without looking at the raw data. But it is not possible to look at raw data every time, so I will advise my readers to simply stop believing published science, giving it only as much importance as social media posts. Academia treats peer-reviewed articles as validated, however biased and irresponsible the peer reviews may be. This is a matter of faith, just like believing the water of the Ganges to be pious, however polluted it may be.

I will end this article with a note of gratitude. I am extremely thankful to those who published bad science, especially all the examples above, which I could study in detail. I am primarily a science teacher, and teaching the fundamental principles and methods of science is my primary job. Illustrating with real-life examples is the most effective way of teaching, and all the above-mentioned papers have given me live examples of how not to do science. I share these papers with my students to make them realize this. I hope that at least the next generation of researchers receives this training at an early stage.

“Nature” on citizen science vs the nature of citizen science:

The 3rd October 2024 issue of Nature has an editorial on citizen science (https://www.nature.com/articles/d41586-024-03182-y). It has some brilliant and successful examples of involving volunteers outside formal academia in doing exciting science. But left unwritten in the article are the limits of citizen science as perceived by academia. I, on the other hand, have examples that go much beyond what the Nature editors see. Whether to call them successful or not, readers can decide by the end of this article.

In all the examples that the Nature editorial describes, volunteers have been used as free or cheap skilled labour in studies that mainstream academics designed: for work that needed large manual inputs, where AI was not yet reliable, where hiring full-time researchers was unaffordable, and where time and money could be saved by involving volunteers.

In contrast, I have examples where citizens' thinking has contributed to concept development and to the design and conduct of experiments; where the problem identification itself was done by citizens; where novel solutions were conceived, worked out and experimentally implemented by people formally unqualified for research; and where citizens detected serious errors by academics or even exposed deliberate fraud by scientists. I would certainly say that this is a far superior and the right kind of use of the collective intelligence of people. What citizens cannot do is the formal work of articulating, writing and undergoing the rituals needed to publish papers, and there academics may need to help. But in several respects citizens are better than academics at pursuing science.

I have described in an earlier blog article the work that we did with a group of farmers during 2017-2020 (https://milindwatve.in/2020/05/19/need-to-liberate-science-my-reflections-on-the-scb-award/). It started with a problem faced by the farmer community itself, to which some of us could think of a possible solution. Thereafter the farmers themselves needed to understand the concept, design a working protocol based on it, take it to an experimental implementation stage and maintain their own data honestly. Then it came back to trained researchers who analyzed the data, developed the necessary mathematics and so on. By the time this was done I had decided to quit academia, and my students involved in this work had also quit for different reasons. The entire team was outside academia when the major chunk of the work was done, and we could do it better because we were free from institutional rituals. This piece of work ultimately received an international award. Here, right from problem identification, farmers, including illiterate ones, were involved in every step except the formal mathematics, statistical analysis and the publication rituals.

In January 2024, I posted on social media that anyone interested in working with me on a range of questions (including ones of their own) could contact me. The response was so large that I could not handle so many people. I requested that someone from the group take the responsibility of coordinating it so that maximum use could be made of so many interested minds. This system did not take shape as desired because of unfortunate problems coincidentally faced by all the volunteering coordinators themselves. But a few volunteers continued to work and a number of interesting themes progressed, ranging from problems in the philosophy and methods of science to identifying, studying and handling problems faced by people.

One of the major patterns in this model of citizen science involves correcting the mistakes of scientists writing in big journals, some of which we suspect were intentional attempts to mislead. For example, we came across a paper in The Lancet Diabetes and Endocrinology (TLDE) that was a follow-up of an interesting clinical trial which, using diet alone, had claimed substantial remission of type 2 diabetes within one year. Their definition of diabetes remission was glucose control and freedom from glucose-lowering medicines. After a 5-year follow-up they claimed that the diet-intervention group that achieved remission by the above definition had a significantly lower frequency of diabetic complications. When we looked at their raw data, it certainly did not support this conclusion; they had reached it by twisting the data and cherry-picking the results. Peer reviewers never look at such things when the paper comes from a mainstream university. This is not a baseless accusation; there is published data showing the lop-sided behaviour of peer reviewers.

The true peer reviewers need to be the readers. But in academia nobody has time to read beyond the name of the journal, the title and, at most, the abstract. The conclusions written at the end of the abstract are taken as final by everyone, even when they are inconsistent with the data inside. This is quite common in the bigger journals of medicine. The reason academics are not interested in challenging such things is that it takes a long time and painstaking effort, at the end of which they do not get a good original publication. The goal of science in academia has completely changed: the individual value of publishing papers in big journals has replaced the value of developing insights in the field. Since nobody in academia will do the job of preventing misleading inferences, citizens have to do it. Citizens can do what academics cannot, because the number of papers and journal impact factors do not shape their careers anyway. Citizen science should focus on doing the things that people in academia cannot or will not do; that is its true strength. Since people in academia seem to be least bothered about the growth of fraudulent science, citizens outside academia will have to take it on.

In this case, after redoing the statistical analysis ourselves, we wrote a letter to the editor of TLDE, who responded after a long time saying that the issues we had raised appeared to be important and that she would send the letter to the authors to respond. Then nothing happened for a long time again. On sending reminders, the editor replied that our letter had been sent to a reviewer (with no mention of what the authors' response was) and, based on the reviewer's views, had been rejected. The strange thing was that the reviewer's comments were not included in the editor's reply; only after we insisted were they made available. And amazingly (or perhaps not surprisingly) the reviewer had done even more selective cherry picking on our issues, giving some sort of "explanawaytions" to some of them. For example, we had raised the issue that when you do a large number of statistical tests, some are bound to turn out individually significant by chance alone; just showing that you got significance in some of them is therefore not enough. This is a well-known problem in statistics, with well-established corrections available. The reviewer said something to the effect that the suggested solutions are not satisfactory for them, and hence they pretend the problem does not exist!! The reviewer completely ignored the issues for which he or she had no answer. So the reviewer was worse than the authors. We then published our comments on PubPeer (https://pubpeer.com/publications/BB3FA543038FF3DF3F83B449F8E5AA), to which the authors never responded. The entire correspondence with TLDE can be accessed here (https://drive.google.com/file/d/16zjYPeKcz0JEnlrjSXP4p1QUimdBEPFy/view?usp=sharing). The absence of an author response and the thoroughly entertaining reviewer response make it clear that the illogical statistics was intended to mislead and was not an oversight.
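For readers unfamiliar with the multiple-comparisons problem, here is a minimal Python sketch on purely simulated data (nothing to do with the trial itself). It runs many tests in which no true effect exists, counts how many cross p < 0.05 by chance alone, and then applies a Bonferroni correction, one of the standard remedies.

    # Minimal sketch, on simulated data, of the multiple-comparisons problem:
    # with no true effect anywhere, some tests still come out "significant" at
    # 0.05 by chance; a Bonferroni correction guards against exactly this.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_tests = 20                      # e.g., 20 outcomes compared between two groups
    pvals = np.array([
        stats.ttest_ind(rng.normal(0, 1, 50), rng.normal(0, 1, 50)).pvalue
        for _ in range(n_tests)
    ])

    print("nominally significant at 0.05:", int(np.sum(pvals < 0.05)))
    # Bonferroni: each test must clear 0.05 / n_tests before claiming significance
    print("significant after Bonferroni:", int(np.sum(pvals < 0.05 / n_tests)))

The expected number of chance "hits" at 0.05 across 20 independent null tests is about one, which is why picking out the individually significant ones and reporting only those is not evidence of anything.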

Two more fights are underway and I will write about them as soon as they land one way or the other. In each case, either the paper needs to be retracted or corrected, or our comments need to be published along with it. But that would be detrimental to the journal's as well as the authors' reputations, so it is very unlikely. A more likely response is that they will simply reject our comments or do nothing at all. In either case I will make the entire correspondence public. In recent years a large number of papers have been retracted (over 10,000 in 2023, perhaps many more in 2024), a large proportion of them for image manipulation; but that is only because techniques for detecting image manipulation now exist. I suspect a much larger number needs to be retracted for statistics twisted intentionally to mislead, or simply to get the paper accepted. Who will expose this? In my view it is beyond the capacity and motivation of academics, and therefore it should be a major objective of citizen science.

I have no doubt that many people outside academia can acquire the skill-set to do this. All that is needed is common sense about numbers; technical knowledge of statistical tools is optional. Most of the problems in these papers were the kind of misuse of statistics that a teacher like me tells first-year students not to commit. In the quality of their data analysis, the scientists publishing in big journals are inferior to our first-year students. I have seen many more such examples before.

Detecting frauds in statistics is not difficult, but the path beyond detection is, and the system of science publishing has systematically made it so. In a recent case, a paper had fudged data and reached misleading conclusions in very obvious ways. The peer reviewers should have detected this very easily, but they failed. When a group of volunteers pointed out the mistakes and reanalyzed the raw data to show that the inferences were wrong, the editors said: submit your comments through the submission process, and the submission process includes a $200 submission fee. I am sure the journal did not pay the earlier reviewers anything. Yet when someone else did a thorough peer review, they were penalized for doing a thorough job!! This is how science publishing works.

In a nutshell, many in academia are corrupt, and citizen scientists are likely to do much better science. But academics know this, and hurdles are therefore purposely created so that their monopoly can be maintained. The entire system is moving towards a kind of neo-Brahmanism in which the common man is tactfully kept away from contributing to knowledge creation. Multiple rituals are created to keep people out effectively, and the rituals in science publishing are increasing for the same purpose. I am sure this is how brahminical domination gradually took over in India; now the entire world of science is moving in the same direction. Confidential peer review and author charges are the two tools being used most effectively for monopolization. Citizens need to become aware and resist this right at this stage. I see tomorrow's science as safer and sounder in the hands of citizens than in those of academia. This is the true value and true potential of citizen science. Since academia is actively engaged in suppressing this kind of citizen science, we, the science-loving common people, need to make the effort to keep it alive.

Multiple hypocrisies of science publishing

I received a desk rejection from Science yesterday which is worded “We now receive many more interesting papers than we can publish. Therefore, our decision is not necessarily a reflection of the quality of your work but rather of our stringent space limitations.”

There is nothing unusual either in getting rejected or in the stereotyped wording of the rejection letter from Science; this is the typical rejection draft of almost all high-ranking journals. What strikes me is the statement that the rejection does not reflect on the quality of the work. They say this for every desk rejection. Do we take it as a true statement? If we do, then all the importance given to journal prestige, impact factors and other indices becomes meaningless. We believe that some journals represent high quality because their peer reviews are more rigorous and they select only high-quality articles. But these journals are officially admitting that this belief is not true: they certify that acceptance or rejection depends on something other than quality, whatever that may be.

But in academia, people look only at the name of the journal. Hardly anyone ever reads a paper to decide its quality; the name of the journal is assumed to reflect it. This is not only a contradiction, it is hypocrisy. And it is not the only one. Academia rests on a series of such practices that hide an obvious truth and pretend it does not exist, or make official statements that are obviously untrue and then pretend they are true.

The acceptance rate is often considered a marker of the prestige of a journal, but this is not stated openly. In official statements and published articles it is emphasized, repeatedly, that acceptance rate is not a marker of journal quality (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3526365 ). Yet most journals proudly make their acceptance rates public in some way or the other, which serves no other purpose. It is unlikely that authors without high-quality work are persuaded not to submit to a journal by looking at its high rejection rate; by the journals' own official statements, rejection is not linked to quality anyway! A high acceptance rate is equally unlikely to attract authors, because of the impression (right or wrong) that such journals are low-prestige journals. If prestige were not linked to acceptance rates, and authors responded positively to high acceptance rates, an equilibrium would soon be reached in which submissions to all journals were similar. This is not the case. So what purpose do journals achieve by declaring these rates? They are certainly being used as a marker of journal prestige. The rejection letter therefore fails on logical integrity. But does science really need logical integrity? Looking at the publication system, it does not seem so.

Suppose I write in a paper that I have done experiments that lead to such and such inference, but that the experiments will remain confidential: I will not describe them, yet you must accept my inference as tested and found true. Which journal would accept such a paper? Journals expect us to give all the details of experiments, data sources, analytical methods and whatever else. Yet they say they have performed a rigorous peer review and found such and such a paper to be of publishable quality, will keep the peer review confidential, and expect readers to accept their decision! What can be more hypocritical than this?

The dictionary meaning of the word "peer" implies level ground: a peer reviewer is another research worker standing at the same level as the authors, and ideally both should be able to engage in an argument on that level ground. The reality is far from it. The reviewer is the master who rarely gets challenged. Authors are often ready to modify, even completely spin, their statements to "please" the reviewers. I have experienced this as a reviewer as well. Making authors cite a reviewer's references, even if irrelevant, is all too common. So the very use of the term "peer review" is a hypocrisy in itself.

The fact that peer reviews are intrinsically biased has been demonstrated with well-designed randomized controlled experiments, themselves published in a "high-ranking" journal, presumably after rigorous peer review (https://www.pnas.org/doi/abs/10.1073/pnas.2205779119 ). At least those who believe that high-ranking journals have more rigorous peer reviews should accept what this paper says. And this paper says that peer reviews are intrinsically biased.

Peer review has always been a belief-based system. There is no evidence that every paper published in a prestigious journal is peer reviewed, except for those journals that have started publishing the peer reviews along with the paper. The confidential peer review system works on a religion-like belief: you must believe that peer reviews select for good-quality papers, and you are not allowed to ask for any evidence of the review. But the demonstration of deep-rooted biases in peer reviews has shown this to be a false belief. Now the faith has become blind faith: it is not just believing without evidence, it amounts to believing in spite of clear evidence to the contrary. Science publishing runs entirely on this blind faith, worse than a religion!!

What have journals done to recognize and minimize the biases? Most have done nothing, even after the demonstration of strong biases and flaws. Some have adopted "double-blind" peer review. Is this a good solution? Certainly not in the era of preprint publishing: if a preprint is already publicly available, the authors cannot be kept anonymous, and the pretense of double-blind peer review becomes another hypocritical stance.

Open access publication is a big trend today, but the true reasons for the trend are quite different from the ones propagated. Since restricting access to online articles is becoming harder by the day, hacking has become easy and debatable initiatives such as Sci-Hub are making everything available free of cost. As a result, profit from the reader side is becoming increasingly difficult, so the new business model is to make profits from the author side. This is easier, since the payment comes even before publishing. The ethical-looking mask of this business model is called "open access publishing". If making science open access were really the objective, there are already many journals run by societies and academies that are free at both ends; Indian academies have a particularly successful model. This has existed for decades, but now the old wine is being served in a new bottle called "diamond open access", as if a new model of publishing had been invented.

But what about journals that charge at both ends? Interestingly, they also glorify open access. They already extort the authors exorbitantly, and if the authors want their article to be open access, they charge even more. These are perhaps the best business models, but they call themselves science journals, and scientists foolishly strive to get published in them because they are the highest-impact-factor journals and are believed to have the most rigorous selection of articles. Yet these very journals deny that their acceptance or rejection reflects the quality of the paper. Researchers are keen to publish there because it brings funding, fame, promotions and what not. There is an experimental demonstration that the details of a funding proposal are irrelevant to getting funded; only a CV with big-journal publications matters. Hiding or changing the full proposal text did not affect funding decisions in the experimental test, which means funding decisions are taken without reading the proposals (https://link.springer.com/article/10.1007/s11192-024-04968-7 ), a clear demonstration that only the journal names bring funding. In spite of this, these journals say in their official letters that rejection does not reflect on the quality of the article!

The height of hypocrisy is the new trend seen in journals including Nature. Nature publishes article after article on the multiple types of misconduct in science publishing, but the tone is always that other journals are doing it. There is no article challenging the illogical practice of charging at both ends, which Nature itself does as a norm, and it is still not ready for any fundamental change of its own. This simply amounts to throwing stones at others' glass houses. A good case is that of the retracted paper on superconductivity (https://www.nature.com/articles/d41586-023-03398-4): in spite of severe criticism of the peer review process, Nature did not make the peer reviews public.

One more example of hypocritical behaviour comes not from science publishing but from academia itself. Over the last couple of decades there has been substantial research on behaviour-based system design by many researchers. They talk about regulatory policy, governance systems and business. Interestingly, none of them talks about behaviour-based design of academic systems, even though we know these are deeply flawed systems. They do not want to mend their own house, but keep supplying architectural advice for others' houses.

My sample of academia, reflecting my limited exposure, might be small. But if this small sample reveals so many contradictions and illogical systems, what the total would be is left to the reader's imagination.

On redesigning academia

I am not only a critic of academia; I have also been working constructively to design an alternative system based on the foundations of human behaviour. Behaviour-based policy and system design is a relatively novel concept, but many academics have started talking about it, and some behaviour-based system designs have been implemented on a pilot scale, a few even in real life. Interestingly, no one seems to have thought about behaviour-based design of academic systems. I made an attempt in a document that I have opened up for everyone here (https://drive.google.com/file/d/1G7Ugv0Wo4gONBQsoTaX_-ggUgTH4ju6A/view?usp=sharing ). It is for sharing, with or without credit; plagiarism or any version of it is also welcome. All that matters to me is that it is shared widely and read with interest. It is not necessary to agree with everything; in fact a wide and open-minded debate on every possible platform is most welcome. I only expect that the debate is not based merely on opinions and anecdotes. I have tried to support my arguments with data wherever possible, and the debate needs to go along the same lines. Fortunately there are many published studies useful for this purpose today. Wherever there are gaps in the data, let those also surface, so that someone may be stimulated to collect data and provide better evidence.

My perspective is mainly from India, so the document addresses the problems of Indian academia first. But most of it is applicable to any country outside mainstream science, and much of it applies to the mainstream as well.

The document first describes the serious flaws, malpractices, misconduct, bad incentives, imbalances and unfairness in academia as it stands today. Science appears to have been monopolized by a handful of power centres, and its dissemination throughout the world is prevented by the very design of the science support systems. The ideal structure of science support systems should be such that good science can be done and published from any corner of the world; the prevalent structure of academia is far removed from this ideal. You have to be a part of the publication mafia (not my words) in order to get published in a prestigious journal. There is published evidence that in academia most decisions are made without reading the contents of scientific papers or proposals, and that peer reviews are inherently biased, flawed and favour the existing imbalance of power. This is taking the field of science rapidly away from diversity, towards an increasingly stereotyped system and career path.

The document then tries to go to the behavioural roots of these problems. This is not a conspiracy; it is the effect of having a system in place that easily drifts from the collective goal towards personal, selfish goals. There is an underappreciated but clear and direct conflict between what is good for science and what is good for a successful career.

Having diagnosed the causes, the document then suggests an alternative system based on the principles of human behaviour. If a system is designed around some ideology and expects people to mold themselves to that ideology against their nature, the system is bound to fail. A system that eliminates or minimizes the difference between the individual optimum and the collective optimum is a robust system. A system that coerces individuals into accepting ethical norms that conflict with their personal gains is a badly designed system; a system that works smoothly towards the intended goal even when every individual behaves selfishly is a well-designed system. The system I suggest would minimize, if not completely eliminate, the biases, imbalances and defects, and facilitate a good and equitable science culture globally.

Why did I write this, being under no delusion that it will bring about any change? To quote from the document itself, "But I cannot imagine myself not writing this when I can clearly see a flawed system, when I know I can diagnose what is wrong and can also see alternative design that is behaviourally sound and correctly incentivised. I have nothing to achieve by spending time and energy on something that will not even be noticed by the mainstream. But I made a statement earlier that there is a mindset that will study, investigate and innovate without any incentive, without any output, returns or rewards. This effort is a demonstration that yes, such a mindset exists and academia need to take efforts to select such minds rather than select "intelligence" and incentivise it with rewards for proxies of success which is bound to corrupt the entire system." Read and debate on any platform you like. Feel free to criticize, but only after reading it carefully.