Random Early Discard Markov Chain


I'm trying to sketch the Markov chain for a Random Early Discard queueing policy where customers arrive at a queue of infinite size according to a Poisson process with rate $\lambda$. An arriving customer joins the queue with probability $\alpha_{n} = \frac{1}{n+1}$, where $n$ is the current queue length, and is blocked with probability $1-\alpha_{n}$. Service times are exponentially distributed with mean $\frac{1}{\mu}$.

I think the forward transitions $0 \to 1 \to 2 \to \cdots \to n \to n+1$ should have rate $\alpha_{n}\lambda$, arrows returning to the same state (say, $0$ to $0$) should have rate $(1-\alpha_{n})\lambda$, and backward transitions from each state to the one below should have rate $\mu$. Is this correct, or am I misunderstanding how this works? I learned some introductory material about Markov chains in probability, but there the arrows were always labeled with probabilities that summed to $1$, not with rates of arrivals per second.

Best answer:

Your thoughts are correct, although the notion of an $n \to n$ transition rate is superfluous for a continuous-time Markov process. One could instead set up a discrete-time Markov chain, in which case the "arrow" probabilities do need to sum to $1$; but for this continuous-time model, transition rates are the applicable concept.
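To make this concrete, the rates above can be assembled into a (truncated) generator matrix. This is only a sketch: the values of $\lambda$, $\mu$, and the truncation level $N$ are illustrative choices, not from the problem.

```python
import numpy as np

lam, mu, N = 5.0, 1.0, 50   # illustrative parameters and truncation level

Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    alpha = 1.0 / (n + 1)          # admission probability in state n
    if n < N:
        Q[n, n + 1] = alpha * lam  # accepted arrival: n -> n+1
    if n > 0:
        Q[n, n - 1] = mu           # service completion: n -> n-1
    Q[n, n] = -Q[n].sum()          # rows of a generator sum to zero
# Blocked arrivals, at rate (1 - alpha) * lam, leave the state unchanged,
# so they never appear in the generator: the n -> n "arrow" is superfluous.
```

Note that the diagonal entries are determined by the off-diagonal rates, which is exactly why the self-loop rate from the question has no role in the continuous-time model.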

One thing to point out: since the rates depend on the size of the queue, the usual queueing theory results do not hold. In particular, for $\lambda > \mu$ an ordinary queue ($\forall n : \alpha_n = 1$) almost surely grows to infinity; but with your $\alpha_n$ the queue will not grow indefinitely. Its asymptotic distribution for large $t$ is largest near $$ \frac{1}{n+1}\lambda = \mu \implies n = \frac{\lambda}{\mu}-1. $$ The actual asymptotic distribution is interesting and non-trivial.