Show $Q(\eta_n) := \mathbb P \{\exists m < \infty : \eta_m = 0 |\eta_0 = \eta_n\}$ is a martingale


I am trying to solve this problem

Let $\eta_n$ be a homogeneous Markov chain on the countable state space $S := \{0,1,2,\dots\}$ and

$\mathcal F_n := \sigma(\eta_j; 0 \le j \le n), n \ge 0$ its natural filtration. For $i \in S$ denote by $Q(i)$ the probability that the Markov chain starting from site $i$ ever reaches the point $0 \in S$:

$Q(i) := \mathbb P \{\exists m < \infty : \eta_m = 0 |\eta_0 = i\}$

Prove that $Z_n := Q(\eta_n)$ is an $(\mathcal F_n)_{n\ge 0}$-martingale

I read the definitions of Markov chain, transition probability, and homogeneous Markov chain; apart from that I don't know much about Markov chains. Still, I am not making much sense of this problem.

  1. To prove the martingale property, I don't know how to handle $Q$ in $E[Q(\eta_{n+1})|\mathcal F_n]=E[\mathbb P \{\exists m < \infty : \eta_m = 0 |\eta_0 = \eta_{n+1}\}|\mathcal F_n]$

How am I supposed to compute the probability of a probability (that is why I need to compute the expectation, I guess), or equivalently the expectation of a probability? $Q(\eta_n)$ does not seem like a random variable to me.

Moreover, I am not sure how to check the other conditions either. $Z_n$ does not seem adapted, as it appears to depend on future events, i.e. on some $\eta_m$ with $m>n$, which is not $\mathcal F_n$-measurable. I am not sure how to check $E[|Z_n|]< \infty$ either.

  2. I am not sure if this makes a lot of sense: if $Q(\eta_{n+1})= \mathbb P \{\exists m < \infty : \eta_m = 0 |\eta_0 = \eta_{n+1}\}$, does that mean $n+1=0$?

There are 2 best solutions below


Probabilities of the form $P(\text{we reach state } i \text{ starting from } j \text{ in finite time})$ are known as hitting probabilities.

To shorten the notation, let me write $H_{j}^{i}=P(\text{we reach state } i \text{ starting from } j \text{ in finite time})$.

First of all, adaptedness is not a problem: $H_{\eta_{n}}^{0}$ is a random variable depending only on $\eta_{n}$. That is, for any $\omega$, $\eta_{n}(\omega)$ is a fixed state and $H_{\eta_{n}(\omega)}^{0}$ is a fixed real number (a probability).

When in doubt, always try to relate it to a Simple Symmetric Random Walk scenario.

If you have trouble understanding the above point, consider two identical copies of the Markov chain $\{\eta_{n}\}$. When an $\omega$ realizes in the first/original Markov chain, you know what $\eta_{1}(\omega),\eta_{2}(\omega),\dots$ are in this chain. Suppose $\eta_{n}(\omega)=k$. Then you go to the other copy of the Markov chain and assign $Q(\eta_{n}(\omega))$ the value $H_{k}^{0}$, where $H_{k}^{0}$ is the probability that in this Markov chain you hit $0$ in finite time starting from state $k$. This is just a fixed real number, which does not depend on what $\eta_{j}$, $j\neq n$, was or is in the original Markov chain.
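The point that $Z_n$ is a deterministic function of the time-$n$ state can be seen in a two-line sketch; the $Q$ values and the path below are made up purely for illustration:

```python
import numpy as np

# Hypothetical hitting probabilities Q(i) for states i = 0, 1, 2, 3
# (made-up values, just to illustrate the composition).
Q = np.array([1.0, 0.8, 0.5, 0.0])

# One realization of the chain: eta_0(w), ..., eta_4(w).
path = np.array([2, 1, 2, 3, 3])

# Z_n(w) = Q(eta_n(w)): a fixed number once eta_n(w) is known,
# hence sigma(eta_n)-measurable and in particular F_n-measurable.
Z = Q[path]
print(Z)  # [0.5 0.8 0.5 0.  0. ]
```

Since $0 \le Q(i) \le 1$ for every $i$, this also settles integrability: $|Z_n| \le 1$.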

Suppose $(p_{ij})_{i,j\in S}$ is the transition matrix.

Now suppose $\eta_{1}=k_{1},...,\eta_{n}=k_{n}$.

Then $E(H_{\eta_{n+1}}^{0}|\sigma(\eta_{1},...,\eta_{n}))=E(H_{\eta_{n+1}}^{0}|\eta_{1}=k_{1},...,\eta_{n}=k_{n})$

But what is this?

See that, conditioned on $\eta_{n}=k_{n}$, $H_{\eta_{n+1}}^{0}$ takes the value $H_{j}^{0}$ if the Markov chain goes from $k_{n}$ at the $n$-th stage to $j$ at the $(n+1)$-th stage. But what is the probability of going to $j$ from $k_{n}$? It is precisely $p_{k_{n}j}$.

So what is the conditional expectation?

It is precisely (using the Markov property in the first equality) \begin{align}E(H_{\eta_{n+1}}^{0}|\eta_{1}=k_{1},...,\eta_{n}=k_{n})&=E(H_{\eta_{n+1}}^{0}|\eta_{n}=k_{n})\\ &=\sum_{j\in S}p_{k_{n}j}H_{j}^{0}\end{align}

Now it can be shown for a Markov chain that $\displaystyle H_{i}^{j}=\sum_{k\in S}p_{ik}H_{k}^{j}$.

See Norris' Markov chains Theorem 1.3.2 page 13 for a proof. (I'd recommend getting a djvu version if you can for best reading)
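As a numerical sanity check of this identity (not part of the original argument), one can solve for the hitting probabilities of a small chain and verify the harmonic relation directly. The chain below is an assumed toy example, a gambler's-ruin walk with absorbing barriers:

```python
import numpy as np

# Toy gambler's-ruin chain on S = {0, 1, 2, 3}: states 0 and 3 absorbing,
# interior states move up w.p. 0.4 and down w.p. 0.6.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.6, 0.0, 0.4, 0.0],
    [0.0, 0.6, 0.0, 0.4],
    [0.0, 0.0, 0.0, 1.0],
])

# H_i = H_i^0 solves H_0 = 1, H_3 = 0, and H_i = sum_k p_ik H_k otherwise.
A = np.eye(4) - P
b = np.zeros(4)
A[0], b[0] = [1.0, 0.0, 0.0, 0.0], 1.0   # boundary condition H_0 = 1
A[3], b[3] = [0.0, 0.0, 0.0, 1.0], 0.0   # boundary condition H_3 = 0
H = np.linalg.solve(A, b)
print(H)  # H_1 = 15/19, H_2 = 9/19

# The harmonic identity H_i = sum_k p_ik H_k holds at every state here
# (at the absorbing states it is trivial, since p_ii = 1).
assert np.allclose(H, P @ H)
```

In matrix form the identity says $H = PH$, i.e. $H$ is a fixed point of the transition matrix, which is exactly what the martingale computation uses.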

Thus $E(H_{\eta_{n+1}}^{0}|\eta_{1}=k_{1},...,\eta_{n}=k_{n})=H_{k_{n}}^{0}$, i.e. $E(Q(\eta_{n+1})|\mathcal F_{n})=Q(\eta_{n})$.

That's it.


Notice, by the Markov structure, $$ Q(i) = \Pi(i \to 0) + \sum_{j \neq 0} \Pi(i \to j) Q(j).$$ In words, the chance to get to $0$ from $i$ can be decomposed as the chance of getting there in one step, plus the chance of getting somewhere else in $1$ step, and then getting to zero from there.

But note that $Q(0) = 1,$ since if $\eta_0 = 0,$ then there does exist an $m < \infty$ such that $\eta_m = 0$ (namely $m = 0$). This means that we can write the expression as $$ Q(i) = \sum_{j} \Pi(i \to j) Q(j).$$

(This last observation is crucial. If you were studying $R(i) := P(\exists m: 0 < m < \infty, \eta_m = 0| \eta_0 = i),$ then the martingale result may fail for dynamics with $R(0) < 1.$ As an exercise, find a dynamics for which this $R(\eta_n)$ is not a martingale, and also show that $R(\eta_n)$ is always a supermartingale.)

Now, by definition, $$ \mathbb{E}[Q(\eta_{n+1})|\mathcal{F}_{n}] = \sum_j P(\eta_{n+1} = j|\mathcal{F}_n) Q(j) = \sum_j \Pi(\eta_n \to j) Q(j) = Q(\eta_n),$$ where I've used the Markovianity in the second equality.

(Of course, you'll also have to check adaptedness and integrability, but these are straightforward.)
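To complement the algebra, here is a small Monte Carlo sanity check on an assumed toy chain (my own example, not from the answer): a martingale has constant expectation, so the sample mean of $Z_n = Q(\eta_n)$ should stay flat in $n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain on {0, 1, 2, 3}: 0 and 3 absorbing, up w.p. 0.4, down w.p. 0.6.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.6, 0.0, 0.4, 0.0],
    [0.0, 0.6, 0.0, 0.4],
    [0.0, 0.0, 0.0, 1.0],
])
Q = np.array([1.0, 15/19, 9/19, 0.0])  # hitting probabilities of state 0
cum = P.cumsum(axis=1)                 # row-wise CDFs for sampling

# Simulate many independent paths, all started from state 2.
n_paths, n_steps = 100_000, 10
states = np.full(n_paths, 2)
means = [Q[states].mean()]             # E[Z_0] = Q(2) exactly
for _ in range(n_steps):
    u = rng.random(n_paths)
    states = (u[:, None] > cum[states]).sum(axis=1)  # inverse-CDF step
    means.append(Q[states].mean())

# Martingale => E[Z_n] = E[Z_0] = Q(2) = 9/19 ~ 0.474 for every n.
print(np.round(means, 3))
```

Up to Monte Carlo noise, every entry of `means` should sit near $9/19$, even though the individual paths get absorbed at $0$ (where $Z_n = 1$) or at $3$ (where $Z_n = 0$).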