Average delay in a Processor Sharing M/GI/1 queue


I'm following the derivation of the average delay in a Processor Sharing M/GI/1 queue from https://www.netlab.tkk.fi/opetus/s383141/kalvot/E_psjono.pdf (slide 6).

They end up with the following expression:

$$E(T) = \frac{1/\mu}{1-\rho}$$

where $\mu = C/E(X)$ is the service rate ($C$ is the capacity of the server) and $\rho$ is the parameter of the geometric queue length distribution. I assume that $\rho$ can be interpreted as $\rho = E(X) / \mu$.

Then, the above equation can be simplified to:

$$E(T) = \frac{E(X)}{C(1-E(X)^2/C)} = \frac{E(X)}{C - E(X)^2} = \frac{1}{C/E(X) - E(X)} = \frac{1}{\mu - E(X)}.$$

I am a little unsure how to interpret the service rate. Consider the following server: it has capacity $C = 1$, and the average number of arriving jobs is $E(X) = 1$. Then the service rate is $\mu = 1$, but $E(T)$ is undefined because we divide by $0$. What does this mean?

Moreover, choosing $C = 5$, $E(X) = 4$ results in the negative average delay $-\frac{4}{11}$. What does this behaviour mean, and how should a scenario with $E(X) > \mu$ (equivalently $E(X)^2 > C$) be interpreted?
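To make the two surprising cases concrete, here is a small script that evaluates $E(T) = \frac{1/\mu}{1-\rho}$ under my assumption $\rho = E(X)/\mu$ (the function name `avg_delay` is just for illustration, not from the slides):

```python
def avg_delay(C, EX):
    """E(T) with mu = C / E(X) and my (questionable) choice rho = E(X) / mu."""
    mu = C / EX        # service rate
    rho = EX / mu      # = E(X)^2 / C under this assumption
    return (1 / mu) / (1 - rho)

print(avg_delay(5, 4))   # -0.3636... = -4/11, a negative average delay
# avg_delay(1, 1) raises ZeroDivisionError, since rho = 1 there
```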

Is there anything wrong with how I am interpreting this model? I find the behavior I described above very surprising.

EDIT: I just noticed that my choice of $\rho = E(X)/\mu = E(X)^2/C$ seems wrong. I was assuming that one could set $\lambda = E(X)$ ($\lambda$ is the arrival rate), which appears to be incorrect. How would you choose $\rho$ (and $\lambda$) so that this model of a server makes sense?
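For completeness, here is a sketch of what I now suspect is the intended parameterisation: $\lambda$ is a free parameter (independent of $E(X)$) and $\rho = \lambda/\mu = \lambda E(X)/C$, with $\rho < 1$ required for stability. This is my guess, not something taken from the slides:

```python
def avg_delay(C, EX, lam):
    """E(T) with mu = C / E(X) and rho = lam / mu, lam an independent arrival rate."""
    mu = C / EX        # service rate
    rho = lam / mu     # utilisation; must satisfy rho < 1 for stability
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    return (1 / mu) / (1 - rho)

print(avg_delay(5, 4, 1))   # rho = 0.8, so E(T) = 0.8 / 0.2 = 4.0
```

With this choice, $E(T)$ stays positive and finite for every $\rho < 1$, and the problematic cases above correspond to an unstable (or critically loaded) queue rather than a negative delay.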