Understanding the rate of infection in SIR models


I'm having a difficult time understanding the intuition behind the rate that susceptible individuals become infected in SIR-type models.

The infection rate (number of susceptibles infected per time period) is often written as $a c S I$, where $c$ is the probability that susceptibles ($S$) interact with infected individuals ($I$) per infected individual per day, and $a$ is the probability that an interaction results in infection.

However, consider $a = 0.9$ and $c = 0.1$, and a population of $900$ susceptible people and $100$ infected people. Then the number of people infected per day is $a c S I = 0.9 \cdot 0.1 \cdot 900 \cdot 100 = 8100$, which doesn't make sense (to me), because the number of new infections per day is far larger than the number of susceptible people. I realise that these parameter values are unrealistically high, but even if $a$ and $c$ were smaller and the populations were bigger, we could run into the same problem.
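Just to make the arithmetic concrete, here is a minimal check (values taken from the example above) showing that the raw $acSI$ rate exceeds the entire susceptible population:

```python
# Numeric check of the example above: with a = 0.9, c = 0.1,
# S = 900, I = 100, the raw a*c*S*I rate exceeds the whole
# susceptible population.
a, c = 0.9, 0.1    # infection probability per contact; contact probability
S, I = 900, 100    # susceptible and infected counts

rate = a * c * S * I
print(rate)        # 8100.0 "new infections" per day
print(rate > S)    # True: more new infections than susceptibles exist
```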

I originally thought that this problem was answered by continuous time models, where the interactions take place in a tiny interval of time, but even discrete-time SIR models include this type of interaction for the infection rate.

Can anyone help me understand why the $a c S I$ formulation makes sense enough for it to be used?

1 Answer


This is the difference between models formulated in population densities and models formulated in population counts. What you have in mind is the density equation $$ \dot s = -acs\imath, $$ where $s=\frac SN$ and $\imath=\frac IN$ are the densities of susceptible and infected individuals in a total population of size $N$. Substituting these and multiplying through by $N$ gives the equation for the counts, $$ \dot S = -ac\frac{SI}N. $$ With this normalisation, your example gives $ac\,SI/N = 0.9 \cdot 0.1 \cdot 900 \cdot 100 / 1000 = 8.1$ new infections per day, which is perfectly sensible.
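The effect of the $1/N$ normalisation can be sketched in a short discrete-time step. This is an illustrative sketch, not code from the question; the recovery probability is an assumed extra parameter added so the step is a complete SIR update:

```python
# Discrete-time SIR sketch comparing the unnormalized a*c*S*I rate
# with the density-consistent a*c*S*I/N rate.
a, c = 0.9, 0.1               # infection prob. per contact; contact prob.
S, I, R = 900.0, 100.0, 0.0   # initial counts
N = S + I + R                 # total population, constant

unnormalized = a * c * S * I      # 8100 "infections" per day: impossible
normalized = a * c * S * I / N    # 8.1 infections per day: plausible
print(unnormalized, normalized)

# One Euler step of the count equation dS/dt = -a*c*S*I/N, with an
# assumed recovery probability per day (hypothetical value):
recovery = 0.05
new_infections = a * c * S * I / N
new_recoveries = recovery * I
S -= new_infections
I += new_infections - new_recoveries
R += new_recoveries
print(S, I, R)    # counts stay non-negative and still sum to N
```

Note that the unnormalized rate would wipe out the susceptible pool nine times over in a single step, while the normalized rate moves only a small, realistic fraction of the population per day.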