Conditions under which the repair shop process is recurrent (null/positive) or transient


Here's the story:

Let $\epsilon_1, \epsilon_2, \dots$ be i.i.d. numbers of machines arriving for repair at the repair shop on the mornings of days $1, 2, \dots$. Assume that the shop is capable of repairing exactly $K$ machines per day. Let $X_0$ be a (random) number of machines awaiting repair at the end of day 0, which we assume to be independent of the $\epsilon_i$'s. Define $X_n$ as the number of machines awaiting repair at the end of day $n$. Then, for any $n$,

$$X_{n+1} = \max(X_n+\epsilon_{n+1}-K,\,0).$$

I am trying to find conditions on $E[\epsilon]=\mu$ under which this process is transient, null recurrent, or positive recurrent.
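For intuition, the dynamics can be simulated directly. Here is a minimal sketch (the values of $K$, $X_0$, and the arrival sequence are arbitrary choices for illustration):

```python
def simulate(x0, eps, K):
    """Iterate X_{n+1} = max(X_n + eps_{n+1} - K, 0) over a given arrival sequence."""
    xs = [x0]
    for e in eps:
        xs.append(max(xs[-1] + e - K, 0))
    return xs

# Example: K = 2 repairs per day, X_0 = 1, arrivals 3, 0, 1, 5
print(simulate(1, [3, 0, 1, 5], K=2))  # [1, 2, 0, 0, 3]
```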

Edit:

Work done so far:

I have already proved that for $\mu\leq K$ the process is recurrent (using a sufficient condition for recurrence). Now I need to show when it is positive recurrent (I guess I should somehow use a sufficient criterion for positive recurrence).

Thanks




Your recurrence for $X_{n+1}$ is a bit off; it should be
$$X_{n+1} = (X_n-1)^+ + \varepsilon_{n+1}\tag 1 $$
(where $a^+:=\max\{a,0\}$). Let $a_k=\mathbb P(\varepsilon_1=k)$; then the transition probabilities are given by
\begin{align} P_{ij} &= \mathbb P(X_{n+1}=j\mid X_n=i)\\ &= \mathbb P((i-1)^++\varepsilon_1=j)\\ &= \mathbb P(\varepsilon_1 = j - (i-1)^+)\\ &= a_{j-(i-1)^+}. \end{align}

Let $\pi$ be the stationary distribution of $\{X_n\}$, assuming one exists, and let
\begin{align} g_{\varepsilon_1}(s) &:= \mathbb E[s^{\varepsilon_1}]=\sum_{n=0}^\infty a_n s^n,\\ g_{\pi}(s) &:= \mathbb E[s^{\pi}] = \sum_{n=0}^\infty \pi_n s^n \end{align}
be the generating functions of $\varepsilon_1$ and $\pi$. By a somewhat tedious computation detailed in Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues (2008) by Pierre Brémaud (Example 5.5), we may show that $\mathbb E[\varepsilon_1]<1$ is a necessary and sufficient condition for $\pi$ to exist, and further that
$$g_\pi(s) = (1-\mathbb E[\varepsilon_1])\frac{(s-1)g_{\varepsilon_1}(s)}{s - g_{\varepsilon_1}(s)}. $$
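As a numerical sanity check of this formula, with an arrival distribution of my own choosing (not from Brémaud): take $\mathbb P(\varepsilon=0)=0.6$, $\mathbb P(\varepsilon=2)=0.4$, so $\mathbb E[\varepsilon]=0.8<1$. Then $s-g_{\varepsilon_1}(s) = -0.4(s-1)(s-1.5)$, the factor $(s-1)$ cancels, and $g_\pi(s)=0.5\,g_{\varepsilon_1}(s)/(1.5-s)$, which gives stationary mean $g_\pi'(1)=2.8$. A long simulation of the recursion $(1)$ should land near that value:

```python
import random

random.seed(0)

K = 1
N = 1_000_000
x, total = 0, 0
for _ in range(N):
    eps = 0 if random.random() < 0.6 else 2   # P(eps=0)=0.6, P(eps=2)=0.4, E[eps]=0.8
    x = max(x - K, 0) + eps                    # X_{n+1} = (X_n - 1)^+ + eps_{n+1}
    total += x

mean = total / N
print(mean)  # should be close to the predicted stationary mean g_pi'(1) = 2.8
```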


Note that Math1000's answer shows that exact expressions for generating functions of the steady-state queue process are available in the special case $K=1$ (which is what your original question used), with $\epsilon_n$ placed outside the max. You can also get an exact mean and (I think) similar generating-function expressions when $\epsilon_n$ is inside the max.

Let's now assume the iteration $X_{n+1} = \max[X_n+ \epsilon_n - K,\,0]$, where $K$ is a positive integer and $\{\epsilon_n\}_{n=0}^{\infty}$ are i.i.d. non-negative integer-valued random variables.


Drift theorems for discrete time Markov chains

Define $d_k = E[X_{n+1}-X_n \mid X_n=k]$. This is the "drift" in state $k$ and can be used to determine positive recurrence via standard drift theorems. For example, see the "Pakes Stability Lemma" (a sufficient condition for positive recurrence) and the "Instability Lemma" in Appendix 3A.5 of Chapter 3 of the Bertsekas and Gallager Data Networks book, available online here:

http://web.mit.edu/dimitrib/www/datanets.html

Note that there are different forms of drift theorems used in different books. (There are likely drift theorems in the book mentioned by Math1000, and likely in your book too, if you are using one.) You mentioned a "sufficient condition for recurrence" but did not say what it was; perhaps it is similar to the Pakes Lemma in the above reference.
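To make the drift concrete: with the iteration above, $d_k = \sum_j a_j \max(k+j-K,0) - k$, and for $k \geq K$ the max never binds, so $d_k = \mu - K$. A quick numerical check (the arrival distribution below is a made-up example):

```python
def drift(k, a, K):
    """d_k = E[X_{n+1} - X_n | X_n = k] for X_{n+1} = max(X_n + eps - K, 0).

    a: dict mapping arrival counts j to probabilities a_j.
    """
    return sum(p * max(k + j - K, 0) for j, p in a.items()) - k

# Hypothetical arrival distribution: P(eps=0)=0.5, P(eps=1)=0.3, P(eps=4)=0.2
a = {0: 0.5, 1: 0.3, 4: 0.2}
mu = sum(j * p for j, p in a.items())   # E[eps] = 1.1
K = 2
for k in range(5):
    print(k, drift(k, a, K))
# For k >= K the drift is constant and equal to mu - K = -0.9,
# so mu < K gives negative drift outside a finite set (positive recurrence).
```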

Using Law of Large Numbers (LLN)

Your state space is $\{0, 1, 2, \dots\}$, and you want to know about recurrence to the state 0. You have
$$ X_{n+1} = \max[X_n + \epsilon_n - K,\, 0] \geq X_n + \epsilon_n - K. $$
Define $Y_n = \epsilon_n - K$. Then $\{Y_n\}_{n=0}^{\infty}$ are independent and identically distributed, and
$$ X_{n+1} - X_n \geq Y_n. $$
Summing over $n \in \{0, 1, \dots, N-1\}$ gives
$$ X_N - X_0 \geq \sum_{n=0}^{N-1} Y_n, $$
and thus
$$ \frac{X_N-X_0}{N} \geq \frac{1}{N}\sum_{n=0}^{N-1}Y_n. $$
Taking a limit as $N\rightarrow \infty$ and using the LLN gives
$$ \liminf_{N\rightarrow\infty} \left[\frac{X_N-X_0}{N}\right] \geq E[Y] \quad \mbox{with prob 1}. $$
If you know $E[Y]>0$, then the number in the queue goes to infinity with prob 1 (so it returns to 0 only a finite number of times).
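A simulation of this transient case (parameters mine: $K=1$ and $\epsilon_n \in \{0, 3\}$ with equal probability, so $E[Y] = 0.5 > 0$) shows $X_N/N$ settling near $E[Y]$, i.e., linear growth of the queue:

```python
import random

random.seed(1)

K, N = 1, 100_000
x = 0
for _ in range(N):
    eps = 0 if random.random() < 0.5 else 3   # E[eps] = 1.5, so E[Y] = E[eps] - K = 0.5
    x = max(x + eps - K, 0)

print(x / N)  # close to E[Y] = 0.5: the queue grows linearly with prob 1
```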

Central limit theorem and null recurrence (Suppose $E[Y]=0$)

Notice that the equation $X_{n+1}=X_n+ \epsilon_n - K$ holds whenever $X_n \geq K$ (since $\epsilon_n \geq 0$), and this equation is the same as $X_{n+1} = X_n + Y_n$ (for $Y_n = \epsilon_n-K$). Suppose $X_0\geq K$ and you want to know whether we ever return to the states $X_n \in \{0, \dots, K-1\}$. Suppose $E[Y]=0$. You can use a central limit theorem argument in this case to analyze $\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1} Y_n$ and understand whether $\sum_{n=0}^{N-1}Y_n$ can get arbitrarily large or small.
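The CLT scaling is easy to observe numerically. A sketch with parameters of my choosing ($K=1$, $\epsilon_n \in \{0, 2\}$ with equal probability, so $Y_n = \pm 1$, $E[Y]=0$, $\mathrm{Var}(Y)=1$): the normalized sums $\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1} Y_n$ have mean near 0 and standard deviation near $\sigma_Y = 1$, so the unreflected walk fluctuates on the order of $\pm\sqrt{N}$ and keeps crossing any fixed level:

```python
import math
import random

random.seed(3)

# Y_n = eps_n - K with eps in {0, 2} equally likely and K = 1, so Y = +/-1.
M, N = 2000, 1000          # M independent trials, each summing N steps
vals = []
for _ in range(M):
    s = sum((0 if random.random() < 0.5 else 2) - 1 for _ in range(N))
    vals.append(s / math.sqrt(N))

mean = sum(vals) / M
std = math.sqrt(sum((v - mean) ** 2 for v in vals) / M)
print(mean, std)  # mean near 0, std near sigma_Y = 1 (CLT scale)
```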

Another way to treat the case $E[Y]=0$

Again define $Y_n = \epsilon_n - K$. Notice that $X_{n+1} \geq X_n + Y_n + R_n$, where
$$ R_n = \left\{ \begin{array}{ll} 1 &\mbox{ if $X_n=0$ and $\epsilon_n<K$} \\ 0 & \mbox{ otherwise.} \end{array} \right.$$
Assume that $E[Y]=0$ and $Var(Y)>0$; then $P[\epsilon_n<K]>0$. Summing the inequality $X_{n+1}-X_n \geq Y_n + R_n$ over $n \in \{0, \dots, N-1\}$ gives
$$ \frac{X_N-X_0}{N} \geq \frac{1}{N}\sum_{n=0}^{N-1}Y_n + \frac{1}{N}\sum_{n=0}^{N-1} R_n. $$
Taking a limit and using the LLN with $E[Y]=0$ gives, with prob 1,
\begin{align} \liminf_{N\rightarrow\infty} \left[\frac{X_N-X_0}{N}\right] &\geq 0 + \liminf_{N\rightarrow\infty} \frac{1}{N}\sum_{n=0}^{N-1}R_n\\ &= P[\epsilon_n<K]\liminf_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N-1}1\{X_n=0\}. \end{align}
It follows that if the chain spent a positive fraction of time in state 0, then $X_N\rightarrow\infty$, which would force the fraction of time in state 0 to be 0 (a contradiction). So the fraction of time spent in state 0 must converge to 0 with prob 1.
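This conclusion can be observed numerically (again with parameters of my own choosing: $K=1$ and $\epsilon_n \in \{0, 2\}$ with equal probability, so $E[Y]=0$ and $\mathrm{Var}(Y)=1$): over a long run the chain keeps returning to 0, but the fraction of time spent there is tiny, consistent with null recurrence:

```python
import random

random.seed(2)

K, N = 1, 1_000_000
x = 0
visits = 0
for _ in range(N):
    eps = 0 if random.random() < 0.5 else 2   # E[eps] = 1 = K, so E[Y] = 0
    x = max(x + eps - K, 0)
    visits += (x == 0)

print(visits, visits / N)  # many returns to 0, but a vanishing fraction of the time
```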