“Nature” on citizen science vs the nature of citizen science

The 3rd October 2024 issue of Nature has an editorial on citizen science (https://www.nature.com/articles/d41586-024-03182-y). It gives some brilliant and successful examples of involving volunteers outside formal academia in doing exciting science. What remains unwritten in the article, however, are the limits of citizen science as perceived by academia. I, on the other hand, have examples that go much beyond what the Nature editors see. Whether to call them successful or not, readers can decide by the end of this article.

In all the examples that the Nature editorial describes, volunteers have been used as free or cheap skilled labor in studies that mainstream academics designed: for the kind of work that needed more manual input, where AI was not yet reliable, where hiring full-time researchers was unaffordable, and where involving volunteers could save time and money.

In contrast, I have examples where citizens’ thinking has contributed to concept development and to the design and conduct of experiments; where the problem identification itself was done by citizens; where novel solutions were perceived, worked out, and experimentally implemented by people formally unqualified for research; and where citizens detected serious errors by academics or even exposed deliberate fraud by scientists. I would certainly say that this is the far superior and right kind of use of the collective intelligence of people. What citizens cannot do is the formalism of articulating, writing, and undergoing the rituals needed to publish papers; there, academics may need to help. But in several respects citizens are better than academics at pursuing science.

I have described in an earlier blog article the work that we did with a group of farmers during 2017-2020 (https://milindwatve.in/2020/05/19/need-to-liberate-science-my-reflections-on-the-scb-award/). It started with a problem faced by the farming community itself, to which some of us could think of a possible solution. Thereafter the farmers themselves needed to understand the concept, design a working protocol based on it, take it to the experimental implementation stage, and maintain their own data honestly. The work then went back to trained researchers, who analyzed the data, developed the necessary mathematics, and so on. By the time this was done, I had decided to quit academia, and the other students involved in the work had also quit for different reasons. The entire team was outside academia when the major chunk of the work was done, and we could do it better because we were free of institutional rituals. This piece of work ultimately received an international award. Here, right from problem identification, farmers, including illiterate ones, were involved in every step except the formal mathematics, the statistical analysis, and the publication rituals.

In January 2024, I posted on social media that anyone interested in working with me on a range of questions (including ones of their own) could contact me. The response was so large that I could not handle so many people. I requested that someone from the group take responsibility for coordinating it so that the best use could be made of so many interested minds. This system did not take shape as desired because of unfortunate problems coincidentally faced by all the volunteer coordinators themselves. But a few volunteers continued to work, and a number of interesting themes progressed, ranging from problems in the philosophy and methods of science to identifying, studying, and handling problems faced by people.

One of the major patterns in this model of citizen science involves correcting the mistakes of scientists publishing in big journals, some of which we suspect were intentional attempts to mislead. For example, we came across a paper in The Lancet Diabetes and Endocrinology (TLDE) that was a follow-up of an interesting clinical trial in which, using diet alone, the authors had claimed substantial remission of type 2 diabetes within one year. Their definition of diabetes remission was glucose control together with freedom from glucose-lowering medicines. After a five-year follow-up, they claimed that the participants under the diet intervention who achieved remission by the above definition had a significantly lower frequency of diabetic complications. When we looked at their raw data, it certainly did not support their conclusion. They had reached it by twisting the data and cherry-picking the results. Peer reviewers never look at such things if the paper comes from one of the mainstream universities. This is not a baseless accusation; there is published data showing the lop-sided behaviour of peer reviewers.

The true peer reviewers need to be the readers. But in academia nobody has time to read beyond the name of the journal, the title, and at most the abstract. The conclusions written at the end of the abstract are taken as final by everyone, even when they are inconsistent with the data inside. This is quite common in the bigger medical journals. The reason academics are not interested in challenging such things is that it takes a long time and painstaking effort, at the end of which they are not going to get a good original publication. The goal of science in academia has completely changed: the personal value of publishing papers in big journals has completely replaced the value of developing insights into the field. Since nobody in academia can do the job of preventing misleading inferences, citizens have to do it. Citizens can do what academics cannot, because the number of papers and journal impact factors do not shape their careers anyway. Citizen science should focus on doing things that people in academia cannot or will not do; that is its true strength. Since people in academia seem least bothered about the increase in fraudulent science, citizens outside academia will have to take up this task.

In this case, after redoing the statistical analysis ourselves, we wrote a letter to the editor of TLDE, who responded after a long time saying that the issues we had raised appeared to be important and that she would send the letter to the authors for a response. Then nothing happened for a long time again. On sending reminders, the editor replied that our letter had been sent to a reviewer (with no mention of what the authors’ response was) and that, based on the reviewer’s views, it had been rejected. The strange thing was that the reviewer’s comments were not included in the editor’s reply. Only after we insisted on seeing them were they made available. And amazingly (or perhaps not surprisingly), the reviewer had done even more selective cherry-picking on our points, giving some sort of “explanawaytions” for some of them. For example, we had raised the issue that when you run a large number of statistical tests, some are bound to turn out individually significant by chance alone; therefore, merely showing that some of them reached significance is not enough. This is a well-known problem in statistics, the problem of multiple comparisons, and standard corrections for it exist. The reviewer said something to the effect that the suggested solutions were not satisfactory, and hence we should pretend the problem does not exist!! The reviewer completely ignored the issues for which he/she had no answer. So the reviewer was worse than the authors. We then published our comments on PubPeer (https://pubpeer.com/publications/BB3FA543038FF3DF3F83B449F8E5AA), to which the authors never responded. The entire correspondence with TLDE can be accessed here (https://drive.google.com/file/d/16zjYPeKcz0JEnlrjSXP4p1QUimdBEPFy/view?usp=sharing). The absence of an author response, together with the thoroughly entertaining reviewer response, makes it clear that the illogical statistics was intended to mislead and was not an oversight.
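To see why multiple testing matters, here is a minimal sketch in Python (my own illustration with simulated data, not the TLDE trial data; the group sizes and the number of outcomes are arbitrary assumptions). It runs twenty comparisons in which the null hypothesis is true in every case, counts how many come out “significant” at p < 0.05 anyway, and then applies the simple Bonferroni correction.

```python
# Minimal illustration of the multiple-comparisons problem
# (simulated data, NOT the TLDE trial data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests = 20   # e.g., 20 different complication outcomes
alpha = 0.05

p_values = []
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution, so the
    # null hypothesis is true for every single comparison.
    group_a = rng.normal(loc=0.0, scale=1.0, size=50)
    group_b = rng.normal(loc=0.0, scale=1.0, size=50)
    _, p = stats.ttest_ind(group_a, group_b)
    p_values.append(p)

naive_hits = sum(p < alpha for p in p_values)
# Bonferroni correction: compare each p-value against alpha / n_tests.
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)

print(f"'Significant' at p < {alpha} without correction: {naive_hits}")
print(f"Significant after Bonferroni correction: {bonferroni_hits}")
# With 20 independent tests, the chance of at least one false
# positive at alpha = 0.05 is 1 - 0.95**20, i.e. about 64%.
```

An individually “significant” result among many uncorrected tests is therefore exactly the kind of chance finding we objected to; significance that survives a correction of this sort is the minimum one should demand.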

Two more fights are underway, and I will write about them as soon as they conclude one way or the other. Either the papers need to be retracted or corrected, or our comments need to be published along with them. But this would be detrimental to the journals’ as well as the authors’ reputations, so it is very unlikely. A more likely response is that they will simply reject our comments or do nothing at all. In either case, I will make the entire correspondence public. In recent years a large number of papers have been retracted (over 10,000 in 2023, perhaps many more in 2024). A large number of these retractions are for image manipulation, but that is only because techniques for detecting image manipulation now exist. I suspect a much greater number of papers need to be retracted for statistics twisted with the intention to mislead, or simply twisted to get the paper accepted. Who will expose this? In my view it is beyond the capacity and motivation of academics, and therefore it should be a major objective of citizen science.

I have no doubt that many people outside academia can acquire the skill set to do so. All that is needed is common sense about numbers; technical knowledge of statistical tools is optional. Most of the problems in these papers were the kind of misuse of statistics that a teacher like me tells first-year students not to commit. In the quality of their data analysis, the scientists publishing in big journals are inferior to our first-year students. I have seen many more examples of this before.

Detecting statistical fraud is not difficult, but the path beyond detection is, and the system of science publishing has systematically made it so. In a recent case, a paper had fudged data and reached misleading conclusions in very obvious ways. The peer reviewers should have detected this easily, but they failed. When a group of volunteers pointed out the mistakes and reanalyzed the raw data, showing that the inferences were wrong, the editors said: submit your comments through the submission process, and the submission process includes a $200 submission fee. I am sure the journal did not pay the earlier reviewers anything. So when someone did a thorough peer review, he/she was penalized for doing a thorough job!! This is how science publishing works.

In a nutshell, many in academia are corrupt, and citizen scientists are likely to do much better science. But academics know this, and hurdles are therefore being purposely created so that their monopoly can be maintained. The entire system is moving towards a kind of neo-Brahmanism in which the common man is tactfully kept away from contributing to knowledge creation. Multiple rituals are created to keep people away effectively, and the rituals in science publishing are multiplying for the same purpose. I am sure this is how brahminical domination gradually took over in India; now the entire world of science is moving in the same direction. Confidential peer review and author charges are the two tools being effectively used for monopolization. Citizens need to become aware of this and prevent it right at this stage. I see tomorrow’s science as safer and sounder in the hands of citizens than in those of academia. This is the true value and true potential of citizen science. Since academia is actively engaged in suppressing this kind of citizen science, we science-loving common people need to make the effort to keep it alive.

A solution to the reviewer problem

Not finding anyone who agrees to review is a growing concern for journal editors. Often editors have to send requests to 10-20 potential reviewers before one or two agree, leaving aside the time commitment and the repeated reminders required. Editors therefore have a tough job. They ease it partly by automating the invitation and reminder process, but that does not address the root cause of the problem.

We had an interesting experience over the last couple of years. For one of our papers, three journals, one after the other, each took substantial time with no progress shown in the online status updates. After a substantial delay (six months in one case), they returned the MS saying that they could not find any editor or reviewer agreeing to handle or review it. The six-month delay was with PLOS One, which actually has a large network of editors and reviewers from different fields. I am sure they tried their best to find an appropriate person, but with no result. This is just one example; experiences across the board show that the problem of not getting reviewers is genuine and widespread.

Then something unexpected (though not unexpected for me!!) happened to our paper. After spending almost two years trying out many journals and not getting a single review, we decided to publish it in the open-peer-review journal Qeios. The journal has an AI system that sends requests to reviewers in the field. In addition, any reader is welcome to post comments, and all comments are posted publicly and immediately. The author responses and revisions are also open in the public domain. On posting the paper on this platform, something miraculous happened: reviews started flowing in within a couple of weeks. Today, after about seven weeks, 16 reviews have been received. I have just finished replying to the reviewers and posting a revised MS. Contrast this with the prior experience of not getting a single review in two years across multiple journals under the prevalent system of confidential peer review: zero reviews in two years versus 16 reviews in seven weeks.

How does the quality of reviews compare? No comparison specific to this paper is possible because there were no reviews in the traditional system. A systematic comparative analysis across the two systems, with a sufficient sample size, is not possible unless some traditional journals make their reviews public; since they just want to avoid getting exposed, they will never do this. So I can only speak anecdotally. In my experience with prior publications, there is no difference between the two in the average quality of reviews; on average, both are equally bad. A minority of reviews are really thoughtful, rigorous, appreciative, and critical on appropriate issues, and therefore useful for improving the quality of the paper. For this paper, I would say a greater proportion of the reviews turned out to be insightful and useful for revision, but there were poor-quality reviews as well. Here is the link to the revised paper, with all the reviews and replies (https://www.qeios.com/read/GL52DB.2). Readers are welcome to form their own opinion.

Across my sample of a few hundred reviews, the majority have been very superficial, saying nothing critical about the central content and making a few suggestions like improving figure 3b or making the introduction more concise. Comments such as “more clarity is needed” are universally true and therefore completely useless unless specifics are pointed out. A comment about language correction is extremely common: when the author names sound non-British, a suggestion to get the manuscript refined by a native English speaker has to be there. I tried taking help from a professional language service, and even after that this comment had to appear. Some reviews are entirely stupid and irresponsible. Precisely this entire range is seen in the reviews received under the open peer review system, but the proportion of sensible reviews appears to be slightly greater there.

Why is it that conventional journals found it impossible to get reviewers, while for the same paper the open peer review system drew so many reviews in such a short time? I can see two distinct but interdependent reasons. One is the accept-reject dichotomy. Any kind of debate, difference of opinion, or suggestion is useful and important for science, but a reject decision stops this process. The accept-reject dichotomy has actually defeated the purpose of peer review; the Qeios system takes this dichotomy away. One of the beliefs behind confidential peer review has been that anonymity lets reviewers avoid the rage of authors and the resulting personal bitterness. But the scientific community actually welcomes debates of any kind; what irks authors is rejection, which takes away the opportunity to debate. Here the authors always have the freedom to respond, so reviews do not end up irritating them despite critical comments. Reviewers are not afraid of spoiling relations, and a healthy debate is possible. I suspect that once the social fear is removed, reviewers actually like to publish their comments with their identity disclosed. The second factor contributing to reviewers’ willingness is that they get credit for their review and a feeling of participating in the refinement of the paper, and thereby in the progress of the field. This is the true and healthy motivating factor; other suggestions, such as paying the reviewers, are unlikely to have the same motivational effect. Reviewers genuinely seem to like the idea of their comments getting published. This, I think, is why we received 16 reviews for a paper that did not get a single reviewer under the confidential review system.

A promising inference from this experience is that there is a simple solution to the problem of reviewer reluctance: remove confidentiality, discourage anonymity, and make peer reviews public. If accept-reject decisions are necessary at all, let a discussion between the editor and the authors decide them. Reviewers need not give any accept-reject recommendation; they only write their critical views. If the reviews expose fundamental flaws in the manuscript, the authors themselves will want either to remove the flaws or to withdraw the paper. If they don’t, their reputation suffers, because the flaws pointed out by the reviewers are published along with the paper.

All this can work only on the assumption that there are readers who actually read the papers and the comments as well. About this I am not so sure or hopeful. The culture of making judgments without reading has gripped the entire field very tightly. I can only hope that when reviews become open, readers will stop confusing peer review with validation. Readers will stop relying on the blind faith that the reviewers have done a good job and that what they are reading is reliable and true. Instead, they will use their own judgment, aided by the peer comments, and as a result the reading culture will have to improve. If the entire community continues to make quick judgments based only on the name of the journal, reads at most the title and abstract, and feels no need to read further, then only God can save science.