Professor gets wet


Problem: A professor has $N$ umbrellas. He walks to the office in the morning and walks home in the evening. If it's raining he likes to carry an umbrella, and if it's fine he doesn't. Suppose that it rains on each journey with probability $p$, independently of past weather. What's the long-run proportion of journeys on which the professor gets wet?

Question: I want to see whether my plan below is viable. Alternatively, feel free to share your own approach.

(The problem is quoted directly from Exercise 1.10.2 of *Markov Chains* by James R. Norris.)


My interpretation: Suppose the professor is about to leave home. If he sees that it's raining, he picks up an umbrella if one is available and stays dry; otherwise he gets wet. The umbrella goes with him to the office, and he may or may not bring it back later. If he sees that it's not raining, he just walks out and stays dry, because I assume it cannot start raining in the middle of a journey. By "journey" I mean each commute between home and office, so there are 2 journeys every day. "The long-run proportion of journeys ..." means $\frac{\text{no. of journeys on which he gets wet}}{\text{total no. of journeys}}$ as the total number of journeys tends to infinity.


My plan:

Assume $(1-p)p \neq 0$ (i.e. $0 < p < 1$) and $1 \le N < \infty$.

Let $(X_n)_{n\ge 0}$ be the number of umbrellas at his home at night, and suppose $X_0 = N$. The state space is $\{0,\dots,N\}$ and the transition probabilities are $$ \left\{ \begin{array}{cc} p_{i,i+1} = p_{i,i-1} = (1-p)p , & p_{ii} = 1 - 2(1-p)p , \; i= 1,\dots,N-1 \\ p_{01} = p , &p_{00} = 1 - p \\ p_{N,N-1} = (1-p)p , & p_{NN} = 1 - (1-p)p \end{array}\right. $$ This time-homogeneous Markov chain is irreducible and recurrent.
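Before solving anything by hand, here is a quick numerical sanity check of the $(X_n)$ chain above (only a sketch; the choices $N = 5$ and $p = 0.3$ are arbitrary, not part of the problem). Detailed balance for this birth-and-death chain suggests the stationary distribution is proportional to $((1-p), 1, \dots, 1)$, which the printout can be compared against.

```python
import numpy as np

def x_chain(N, p):
    """Transition matrix of (X_n), the number of umbrellas at home at night."""
    q = 1 - p
    P = np.zeros((N + 1, N + 1))
    for i in range(1, N):                    # interior states 1, ..., N-1
        P[i, i + 1] = P[i, i - 1] = q * p
        P[i, i] = 1 - 2 * q * p
    P[0, 1], P[0, 0] = p, q                  # boundary state 0
    P[N, N - 1], P[N, N] = q * p, 1 - q * p  # boundary state N
    return P

N, p = 5, 0.3
P = x_chain(N, p)
assert np.allclose(P.sum(axis=1), 1.0)       # each row is a distribution

pi = np.full(N + 1, 1.0 / (N + 1))           # stationary distribution by
for _ in range(10_000):                      # power iteration (chain is
    pi = pi @ P                              # aperiodic since p_{ii} > 0)
print(pi)
```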

Let $$ Y_n = \left\{ \begin{array}{cc} 1 & \text{ if } \{\text{professor gets wet tomorrow}\}\\ 0 & \text{otherwise} \end{array}\right. $$ where "tomorrow" means day $n+1$ and $$ \{\text{professor gets wet tomorrow}\} = (\{ X_n = 0 \} \cap \{ \text{rains tomorrow morning} \} )\\ \cup (\{ X_n = N \} \cap \{ \text{doesn't rain tomorrow morning} \} \cap \{ \text{rains tomorrow evening} \} ) $$

Note that he can get wet on at most 1 journey each day, since the two wet events above require $X_n = 0$ and $X_n = N$ respectively. Let $Z_n = (X_n, Y_n)$; the state space is $$ \{(0,1) , (N,1)\} \cup \{(0,0) , \dots , (N,0) \} $$

The transition probabilities are (please scroll; the first row presumes $N \ge 4$, and for smaller $N$ those generic rows are simply absent) $$ \left\{ \begin{array}{cccc} p_{(i,0)(i+1,0)} = p_{(i,0)(i-1,0)} = (1-p)p , & p_{(i,0)(i,0)} = 1 - 2(1-p)p , \; i= 2,..,N-2 \\ p_{(1,0)(0,0)} = (1-p)^2p , & p_{(1,0)(0,1)} = (1-p)p^2 , & p_{(1,0)(2,0)} = (1-p)p , & p_{(1,0)(1,0)} = 1 - 2(1-p)p \\ p_{(N-1,0)(N,0)} = (1-p)p(p+(1-p)^2) , & p_{(N-1,0)(N,1)} = ((1-p)p)^2 , & p_{(N-1,0)(N-2,0)} = (1-p)p , & p_{(N-1,0)(N-1,0)} = 1 - 2(1-p)p \\ p_{(0,0)(0,1)} = (1-p)p , & p_{(0,0)(1,0)} = p , & p_{(0,0)(0,0)} = 1 - (1-p)p - p \\ p_{(N,0)(N,1)} = \left(1 - \frac{(1-p)p}{p+(1-p)^2}\right)(1-p)p , & p_{(N,0)(N-1,0)} = \frac{(1-p)p}{p+(1-p)^2} , & p_{(N,0)(N,0)} = 1 - \left(1 - \frac{(1-p)p}{p+(1-p)^2}\right)(1-p)p - \frac{(1-p)p}{p+(1-p)^2} \\ p_{(0,1)(0,0)} = (1-p)^2 , & p_{(0,1)(1,0)} = p , & p_{(0,1)(0,1)} = 1 - (1-p)^2 - p \\ p_{(N,1)(N,0)} = p+(1-p)^2 , & p_{(N,1)(N-1,0)} = 0 , & p_{(N,1)(N,1)} = 1 - p-(1-p)^2 \end{array}\right. $$ $(Z_n)_{n\ge 0}$ is also a time-homogeneous, irreducible, recurrent Markov chain. Informally, its transition graph looks like two triangles attached to the two ends of a bar.
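As a sanity check on the table above, the following sketch encodes exactly the displayed transition probabilities (for the arbitrary choice $N = 5 \ge 4$, $p = 0.3$), verifies that each row sums to $1$, and computes the stationary probability of the two "wet" states numerically. By the definition of $Y_n$, in stationarity this should agree with $\mathbb{P}(X=0)\,p + \mathbb{P}(X=N)(1-p)p$ taken from the simpler $(X_n)$ chain.

```python
import numpy as np

def z_chain(N, p):
    """Transition matrix of (Z_n) = (X_n, Y_n), following the table above."""
    q = 1 - p
    S = [(0, 1), (N, 1)] + [(i, 0) for i in range(N + 1)]   # state space
    idx = {s: k for k, s in enumerate(S)}
    P = np.zeros((len(S), len(S)))

    def put(a, b, pr):
        P[idx[a], idx[b]] = pr

    for i in range(2, N - 1):                                # i = 2, ..., N-2
        put((i, 0), (i + 1, 0), q * p)
        put((i, 0), (i - 1, 0), q * p)
        put((i, 0), (i, 0), 1 - 2 * q * p)
    put((1, 0), (0, 0), q * q * p);  put((1, 0), (0, 1), q * p * p)
    put((1, 0), (2, 0), q * p);      put((1, 0), (1, 0), 1 - 2 * q * p)
    put((N - 1, 0), (N, 0), q * p * (p + q * q))
    put((N - 1, 0), (N, 1), (q * p) ** 2)
    put((N - 1, 0), (N - 2, 0), q * p)
    put((N - 1, 0), (N - 1, 0), 1 - 2 * q * p)
    put((0, 0), (0, 1), q * p);      put((0, 0), (1, 0), p)
    put((0, 0), (0, 0), 1 - q * p - p)
    r = q * p / (p + q * q)          # P(X drops to N-1 | X = N, Y_n = 0)
    put((N, 0), (N, 1), (1 - r) * q * p);  put((N, 0), (N - 1, 0), r)
    put((N, 0), (N, 0), 1 - (1 - r) * q * p - r)
    put((0, 1), (0, 0), q * q);      put((0, 1), (1, 0), p)
    put((0, 1), (0, 1), 1 - q * q - p)
    put((N, 1), (N, 0), p + q * q);  put((N, 1), (N, 1), 1 - p - q * q)
    return S, P

N, p = 5, 0.3
S, P = z_chain(N, p)
assert np.allclose(P.sum(axis=1), 1.0)       # every row is a distribution

pi = np.full(len(S), 1.0 / len(S))           # stationary distribution by
for _ in range(10_000):                      # power iteration
    pi = pi @ P
wet_days = pi[S.index((0, 1))] + pi[S.index((N, 1))]
print(wet_days)    # long-run proportion of *days* with a wet journey
```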

My plan is to find the expected return times $m_{(0,1)} , m_{(N,1)}$ to the states $(0,1) , (N,1)$ respectively, by solving the linear equations obtained from the recurrence relations (taking the minimal solution), and then apply Theorem 1.10.2. One caveat: each step of $(Z_n)$ corresponds to one day, i.e. two journeys, and each visit to $(0,1)$ or $(N,1)$ accounts for exactly one wet journey, so the desired quantity should be $\frac{1}{2}\left(\frac{1}{m_{(0,1)}} + \frac{1}{m_{(N,1)}}\right)$ rather than the bare sum.
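Independently of either chain construction, the physical process is easy to simulate directly, which gives a benchmark for whatever the linear equations yield. A minimal sketch (the parameters $N = 2$, $p = 0.3$ and the number of days are arbitrary; the closed form $p(1-p)/(N+1-p)$ usually quoted for this exercise appears only as a comparison in the comment):

```python
import random

def wet_proportion(N, p, days, seed=0):
    """Simulate the professor's journeys and return the wet fraction."""
    rng = random.Random(seed)
    here, there = N, 0                   # umbrellas at his current location
    wet = 0                              # and at the other location
    for _ in range(2 * days):            # two journeys per day
        if rng.random() < p:             # it rains on this journey
            if here > 0:
                here -= 1                # he carries one umbrella across
                there += 1
            else:
                wet += 1                 # no umbrella available: he gets wet
        here, there = there, here        # he is now at the other end
    return wet / (2 * days)

est = wet_proportion(N=2, p=0.3, days=200_000)
print(est)   # should be close to p(1-p)/(N+1-p) = 0.21/2.7
```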


Theorem 1.10.2. Let $P$ be irreducible and let $\lambda$ be any distribution. If $(X_n)_{n\ge 0}$ is Markov$(\lambda,P)$, then $$ \mathbb{P}\left( \frac{V_i(n)}{n} \to \frac{1}{m_i} \text{ as } n \to \infty \right) = 1 $$ where $V_i(n) = \sum_{k=0}^{n-1} 1_{\{X_k = i\}}$ is the number of visits to $i$ before time $n$ and $m_i$ is the expected return time to state $i$.

(From Markov Chains by James R. Norris)
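As an aside, Theorem 1.10.2 is easy to illustrate on a two-state chain where the return-time equations are solvable by hand: with $p_{01} = a$ and $p_{10} = b$, first-step analysis gives $m_0 = 1 + a/b$, so $V_0(n)/n$ should approach $b/(a+b)$. A minimal sketch (the values of $a$, $b$ and the run length are arbitrary):

```python
import random

def visit_fraction(a, b, n, seed=1):
    """V_0(n)/n for the two-state chain with p_{01} = a, p_{10} = b."""
    rng = random.Random(seed)
    x, visits = 0, 0
    for _ in range(n):
        visits += (x == 0)               # count visits before time n
        if x == 0:
            x = 1 if rng.random() < a else 0
        else:
            x = 0 if rng.random() < b else 1
    return visits / n

a, b = 0.2, 0.5
# m_0 = 1 + a/b, so 1/m_0 = b/(a+b); the two printed values should be close.
print(visit_fraction(a, b, 500_000), b / (a + b))
```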