Let us do some simple maths. Readers who dislike maths can jump straight to the section “What does the maths tell us?”, where the inferences are explained by common-sense logic. Doing maths, of course, needs a set of assumptions. The assumption here is that infection is a probabilistic process, a fair, logical and often-used assumption. The control strategy followed so far is that whenever an infectious individual is found, he/she is quarantined. The contacts are traced as far as possible, tested, and quarantined if positive. This practice reduces the probability of the infection spreading from this infectious unit. For every infectious unit found, you follow the same procedure. There is some probability that some virus leaks out of the quarantine room, or that some contact remains untraced, and so the infection spreads.

Let *p* be the probability that you successfully prevent spread from one such unit. There are *n* such units, so the probability that you successfully prevent spread from all the units is *p^{n}*. If you achieve this, the epidemic ends there. If you don’t, the virus escaping from the *n.(1-p)* leaky units creates *X* new units each. So the most probable number of new units created is *n.(1-p).X*. If this is less than *n*, the epidemic slowly vanishes; if greater than *n*, it spreads.
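The arithmetic above can be sketched in a few lines of code. This is only an illustration; the values of *p* and *X* below are hypothetical, chosen to make the numbers easy to follow:

```python
def prevent_all_probability(n, p):
    """Probability of preventing spread from all n units at once: p^n."""
    return p ** n

def expected_new_units(n, p, X):
    """Most probable number of new units: n*(1-p) leaky units, each creating X new ones."""
    return n * (1 - p) * X

# Hypothetical values: p = 0.9, X = 5. Each unit leaks with probability 0.1
# and seeds 5 new units, so the per-step multiplication factor is
# (1 - p) * X = 0.5 < 1, and the caseload shrinks.
print(expected_new_units(100, 0.9, 5))  # roughly 50 new units from 100 current ones
```

Note that even with *p* as high as 0.9, the chance of containing *all* units at once, *p^{n}*, becomes vanishingly small as *n* grows, which is why the expected number of new units is the more useful quantity to track.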

So far it is simple. But there is one more twist. You have a system that does the contact tracing, and the system has a finite capacity. So if *n* is very large, the efficiency of contact tracing suffers. If the epidemic is spreading, *n* increases exponentially, whereas the system can grow at most linearly. So contact-tracing efficiency will almost inevitably diminish as *n* increases. As a result, *p* is not independent of *n*. But we can make *p* fairly robust to changes in *n* by using the equation *p = k/(k+n)*, where *k* is an index of the efficiency of the system. If you want to slow down the effect of *n* on *p* further, a slightly fancier equation *p = k^{m}/(k^{m}+n^{m})* can be written, where the greater the power *m*, the slower the decrease in *p* with *n*, until *p* drops suddenly when *n* gets close to *k*. With this form, the intuitive meaning of *k* is the number of cases the system can handle without much loss of contact-tracing efficiency. If *k* is large, a substantial increase in *n* is tolerated without a proportionate reduction in the efficiency of contact tracing. But at very large *n* the efficiency per infectious unit suffers, thereby decreasing *p*. I simulated with a starting *n* at *t = 0* and a set of assumed *X* and *k* to calculate *p*, and from that the expected new *n* at *t + 1*. The new *n* changes *p* again in the next time unit, and so on.
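The iteration just described can be sketched as follows. This is a deterministic simplification, and the parameter values (*X* = 5, *k* = 1000) are hypothetical, chosen only for illustration:

```python
def tracing_success_prob(n, k, m=1):
    """p = k^m / (k^m + n^m): tracing efficiency saturates as caseload n approaches capacity k."""
    return k ** m / (k ** m + n ** m)

def simulate(n0, X, k, m=1, steps=20):
    """Iterate n(t+1) = n(t) * (1 - p(n(t))) * X from a starting caseload n0."""
    n = float(n0)
    trajectory = [n]
    for _ in range(steps):
        p = tracing_success_prob(n, k, m)
        n = n * (1 - p) * X
        trajectory.append(n)
    return trajectory

# Hypothetical numbers: X = 5 new units per leaked infection, capacity index k = 1000.
low = simulate(100, X=5, k=1000)   # small starting caseload: collapses toward zero
high = simulate(400, X=5, k=1000)  # large starting caseload: grows without bound
```

Running the two trajectories side by side already hints at the threshold behaviour discussed next: the same *X* and *k* produce extinction from one starting *n* and explosive growth from another.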

One can increase *k* by imposing strict quarantine aided by a general lockdown and other restrictions, so as to keep *p* high. So when, and with what probability, can we drive the virus to extinction by these measures?

Simulations show that success is a threshold phenomenon. At a given *k*, you can arrest the infection effectively and drive the virus to extinction up to a threshold value of *n*. If the starting *n* is higher than the threshold, arresting the infection becomes impossible. You may still stop the infection by increasing *k*, that is, by making the restrictions more and more rigid. But the required increase is not linear: for a desired increase in the threshold *n*, *k* has to increase in a power relation. At *m = 1* it increases in square proportion, the power being smaller at higher *m*.
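The threshold can be located numerically in a sketch like the one above. Again, this is a deterministic simplification with hypothetical parameter values, not the author's exact simulation; it only demonstrates that a sharp tipping point in the starting *n* exists:

```python
def tracing_success_prob(n, k, m=1):
    """p = k^m / (k^m + n^m)."""
    return k ** m / (k ** m + n ** m)

def dies_out(n0, X, k, m=1, steps=200):
    """True if the iterated caseload collapses toward zero, False if it blows up."""
    n = float(n0)
    for _ in range(steps):
        n = n * (1 - tracing_success_prob(n, k, m)) * X
        if n < 1e-6:
            return True
        if n > 1e12:
            return False
    return n < n0

def threshold_n(X, k, m=1):
    """Bisect for the largest starting caseload that can still be driven to extinction."""
    lo, hi = 1.0, 1e9
    for _ in range(60):
        mid = (lo + hi) / 2
        if dies_out(mid, X, k, m):
            lo = mid
        else:
            hi = mid
    return lo
```

For the illustrative values *X* = 5 and *k* = 1000, the bisection finds a starting caseload below which extinction is certain and above which growth is unstoppable, however long the iteration runs.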

**What does the maths tell us?**

So whether one can successfully drive a virus to extinction depends upon the parameters *X*, i.e. if the virus leaks from an infectious unit, how many new units it can create; *k*, the efficiency of quarantine and contact tracing; and thirdly the starting *n*, the number of infected. The two most important inferences of the maths are (i) there is a threshold *n* above which control of the epidemic by quarantine and lockdown becomes impossible in any given context; and (ii) this threshold *n* can be increased by increasing the strictness of quarantine and lockdown measures, but for that the efficiency needs to increase as the square, or some such power, of *n*.

Now even if we admit that the first lockdown in India was not executed very efficiently, for whatever reasons, we cannot say that a more efficient lockdown will work better: if *n* has increased 1000-fold, the efficiency will have to increase several thousand-fold, or a million-fold, to drive the virus to extinction.

Forget about the actual parameter values. It is necessary to appreciate, at least qualitatively, that the efficiency of lockdown required to drive the virus to extinction increases disproportionately faster than the target *n*. In India, owing to the high population density, *X* is expected to be high. If the first lockdown did not drive the virus away even at a lower *n*, then at an *n* several orders of magnitude higher it is impossible to stop the transmission. So the focus needs to shift from stopping the spread in the general population to specific care of the high-risk group, at both the preventive and the treatment level. Fortunately for us, the death rate has been coming down consistently and rapidly. Better medical facilities and better patient care can bring it down further. Disinvesting from strategies that are not going to work will allow us to invest more in the more promising ones.