Logic versus number: what do scientists value more?

Science has a set of principles, and the attempt is to satisfy as many of them as possible, although not every study can comply with all of them. For example, a famous Lord Kelvin quote says, “When you can measure what you are speaking about, and express it in numbers, you know something about it.” At times some important factors are not objectively measurable, and then one has to settle for a qualitative analysis, or use some surrogate marker. This is an inevitable limitation at times, but I don’t consider it a major problem for the scientific method.

A trickier situation is when two principles are in direct conflict: to achieve one, you need to compromise on the other. The interesting question is what we do when there is such a conflict. The question is not restricted to an individual researcher’s decision. In today’s scientific publishing system, the choice must also be acceptable to the reviewer; only then can you publish. How frequently can a conflict of principles arise? I don’t know, because the question was not on my mind until recently. I now have at least one example of such a conflict, and it opens up a subtle but important question in the philosophy and methods of science.

Over the last few years I have become increasingly interested in how common people use innate statistical analysis to make inferences from their own sampling and observations. To some extent this question is addressed by Bayesian statistics, but my question is much broader, and many of its aspects are not covered by Bayesian statistics. One such question we attempted to answer with a small experiment, which is now uploaded as a preprint but not yet published (doi: 10.20944/preprints202012.0200.v1).

Another thing I wonder about is what people do to make a dichotomous decision based upon their own sampling and observations. In formal statistics we have the concept of statistical significance, which is used to make a dichotomous inference by taking an arbitrary cut-off level of significance. The most commonly used significance level, 0.05, is arbitrary but well accepted. I am trying to observe at what level people make an inference. For example, when do I decide that this cap is lucky for me? How many times does a favourable event need to be associated with that cap before my mind considers the cap lucky? What I currently feel is that we keep this level variable depending upon the cost of making a wrong inference versus the benefit of being correct. In this case, being particular about wearing a certain cap has a small cost, while being successful in an exam, for example, is a big gain. So if I make a wrong inference of a significant association, or actually causation, I have little to lose; if the association or causation turns out to be true, I have a big gain. So the cut-off for making this association will be kept very liberal. This is how and why our minds keep generating many minor superstitions. It is a very logical and clearly adaptive tendency of the mind.
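The logic above can be put in a toy expected-value form. The sketch below is purely illustrative (all numbers and the function name are my own invention, not a model from any literature): acting on a suspected association is worthwhile once the believed probability of it being real exceeds the ratio of the cost of acting to the benefit if it is true, which is why a cheap superstition needs almost no evidence.

```python
# Illustrative sketch only: all costs, benefits, and numbers are invented.

def evidence_threshold(cost_of_acting: float, benefit_if_true: float) -> float:
    """Minimum believed probability p of a real association that makes
    acting worthwhile: act when p * benefit > cost, i.e. p > cost / benefit."""
    return min(1.0, cost_of_acting / benefit_if_true)

# Wearing a "lucky" cap: tiny cost, large perceived benefit,
# so even the flimsiest evidence justifies the superstition.
cap = evidence_threshold(cost_of_acting=0.01, benefit_if_true=10.0)

# A costly ritual with the same benefit demands much stronger evidence.
ritual = evidence_threshold(cost_of_acting=5.0, benefit_if_true=10.0)

print(cap, ritual)  # the cap's threshold is far more liberal than the ritual's
```

The point is only the direction of the inequality: as the cost of acting shrinks relative to the benefit, the evidence required before the mind commits to the association shrinks with it.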

My real question begins here. It is most logical to keep the significance level variable depending upon the costs and benefits of right versus wrong decisions. Why, then, do we have an almost universal significance cut-off of 0.05? In principle, statistical theory allows you to make the cut-off more stringent or more liberal depending upon the context. But hardly any researcher uses this freedom; 0.05 has an almost religious sanctity. Why don’t we treat the significance level as a function of the costs and benefits of an inference?
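Nothing in the machinery of significance testing prevents this. As a toy sketch (the `context_alpha` rule and every number below are my own assumptions, not standard statistical practice), one could let alpha depend on the relative cost of the two kinds of error and then compare an ordinary one-sided binomial p-value against that context-dependent cut-off:

```python
# Toy sketch: a context-dependent alpha. The weighting rule and all
# numbers are invented for illustration, not standard practice.
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def context_alpha(cost_false_positive: float, cost_false_negative: float) -> float:
    """Assumed rule: tolerate more false positives when missing a real
    effect is the costlier error, and fewer when a false alarm is."""
    return cost_false_negative / (cost_false_positive + cost_false_negative)

# 8 favourable outcomes in 10 trials, tested against a fair 50:50 baseline.
p = binom_tail(8, 10)  # = 56/1024, about 0.055, so it fails the ritual 0.05

# Cheap superstition: missing a real edge costs more than a false alarm,
# so the cut-off is liberal and the association is accepted.
alpha_cap = context_alpha(cost_false_positive=1, cost_false_negative=9)

# High-stakes decision: a false positive is the expensive error,
# so the cut-off becomes stringent and the same data fail it.
alpha_stakes = context_alpha(cost_false_positive=99, cost_false_negative=1)

print(p < alpha_cap, p < alpha_stakes)
```

The same evidence crosses the liberal threshold and fails the stringent one; the inference flips with the stakes, which is exactly what a fixed 0.05 forbids.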

I don’t have a final answer, but I am tempted to believe the following. The costs and benefits are seldom objectively measurable. Most often they are value-laden judgments, and two persons’ value judgments need not be identical. So it is difficult to arrive at a newly agreed alpha appropriate for every context. People might often agree that going wrong would be costly in a given context, but exactly how costly can only be a subjective judgment; they may have no precise numbers to agree upon.

We tend to avoid this loss of objectivity by compromising on the logically sound principle that the significance level should be a function of the cost-benefit of a decision. The level 0.05 is in no way sounder than anyone’s cost-benefit or value judgment. But it is precise, it is a number, it has precedent, and others, particularly your reviewers, are unlikely to object to it. So we prefer being precise over being logical. This is a clear case of conflict between two scientific principles, and almost universally we have preferred to be precise and numerical at the cost of being more logical. Are there more examples of such a conflict or trade-off between scientific principles? I feel we are quite likely to find more. I have absolutely no idea whether philosophers of science have elaborated on any such trade-offs.

At least on this issue, I feel the innate statistics of illiterate people may be fuzzier, but it is sounder than the scholarly statistics found in the big textbooks and expensive statistical software. But they are illiterate, after all, and are not supposed to know the scientific method!! To me this is one more example demonstrating that science has much to learn from illiterate people. They are more logical in adjusting the significance level by a cost-benefit judgment. Mainstream science is more stupid for insisting on precision, on numbers, and on consensus, which results in a religiously observed significance cut-off.
