Computing $\lim \limits_{n\to \infty}Pr(X_n=0)$ for a r.v. $X_n$ using matrices and Markov chains.


There are three coins on the table, all showing "Heads". Every round, Danny comes and turns one coin upside down: the left one with probability $1\over 2$, the middle one with probability $1\over 3$, and the right one with probability $1\over 6$. Let $X_n$ be the number of coins showing "Heads" after $n$ rounds. Find $\lim \limits_{n\to \infty}Pr(X_n=0)$.

What I did so far is draw a graph of the 8 states and arrive at a symmetric, stochastic matrix, namely: $A=\begin{pmatrix}0&{1\over 6}&{1\over 3}&{1\over 2}&0&0&0&0\\{1\over 6}&0&0&0&{1\over 3}&{1\over 2}&0&0\\{1\over 3}&0&0&0&{1\over 6}&0&{1\over 2}&0\\{1\over 2}&0&0&0&0&{1\over 6}&{1\over 3}&0\\0&{1\over 3}&{1\over 6}&0&0&0&0&{1\over 2}\\0&{1\over 2}&0&{1\over 6}&0&0&0&{1\over 3}\\0&0&{1\over 2}&{1\over 3}&0&0&0&{1\over 6}\\0&0&0&0&{1\over 2}&{1\over 3}&{1\over 6}&0 \end{pmatrix}$

I assume I should compute $\mu A^n$ where $\mu=(1,0,0,0,0,0,0,0)$, but I cannot possibly do this by hand with this matrix. I thought of diagonalizing it via its eigenvalues and eigenvectors, but that seems too complex as well. Maybe I am wrong and I do not need $\mu A^n$ at all. What can I do, or what should I do, to answer this question? I would really appreciate your help.
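(The quantity $\mu A^n$ can at least be checked numerically. A minimal numpy sketch, assuming state 1 corresponds to HHH and state 8 to TTT, consistent with $\mu$ above:)

```python
import numpy as np

# Transition matrix from the question; state 0 = HHH, ..., state 7 = TTT
# (the ordering is an assumption, but it matches mu below).
A = np.array([
    [0,   1/6, 1/3, 1/2, 0,   0,   0,   0  ],
    [1/6, 0,   0,   0,   1/3, 1/2, 0,   0  ],
    [1/3, 0,   0,   0,   1/6, 0,   1/2, 0  ],
    [1/2, 0,   0,   0,   0,   1/6, 1/3, 0  ],
    [0,   1/3, 1/6, 0,   0,   0,   0,   1/2],
    [0,   1/2, 0,   1/6, 0,   0,   0,   1/3],
    [0,   0,   1/2, 1/3, 0,   0,   0,   1/6],
    [0,   0,   0,   0,   1/2, 1/3, 1/6, 0  ],
])

mu = np.zeros(8)
mu[0] = 1.0  # all three coins start on Heads

# Pr(X_n = 0) is the TTT entry of mu A^n; sample a few values of n.
for n in (100, 101, 1000, 1001):
    p = mu @ np.linalg.matrix_power(A, n)
    print(n, round(p[7], 6))  # 0.0 for even n, 0.25 for odd n
```

The output already hints that the plain limit does not exist: the probability alternates between $0$ and (asymptotically) $0.25$ depending on the parity of $n$.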


Best answer:

I do not know which state each row and column of your matrix represents.

However, you already have the transition matrix $P$. It is irreducible, but note that it is not aperiodic: every round flips exactly one coin, so the parity of the number of heads alternates and the chain has period $2$ (this is also why the other answer finds an eigenvalue $-1$). You can still solve

$\pi P = \pi$

where $\pi$ is a row vector with $\sum_i \pi_i = 1$; since $P$ is symmetric it is doubly stochastic, so $\pi$ is uniform with $\pi_i = \frac{1}{8}$. Because of the period, however, $Pr(X_n=0)$ does not converge: $Pr(X_n=0)=0$ for every even $n$ (reaching TTT from HHH takes an odd number of flips), while along odd $n$

$\lim_{n\rightarrow\infty,\ n\text{ odd}}Pr(X_n=0)=2\pi_{TTT}=\frac{1}{4}$

and the time-average (Cesàro) limit is $\pi_{TTT}=\frac{1}{8}$.

You should take some time to read the Wikipedia page on Markov chains carefully.
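(A minimal numerical sketch of the stationary equation, not part of the original answer; it assumes the matrix from the question. Since $\pi P = \pi$ means $\pi^T$ is an eigenvector of $P^T$ for eigenvalue $1$, numpy can find it directly:)

```python
import numpy as np

# Transition matrix P from the question (state 0 = HHH, state 7 = TTT).
P = np.array([
    [0,   1/6, 1/3, 1/2, 0,   0,   0,   0  ],
    [1/6, 0,   0,   0,   1/3, 1/2, 0,   0  ],
    [1/3, 0,   0,   0,   1/6, 0,   1/2, 0  ],
    [1/2, 0,   0,   0,   0,   1/6, 1/3, 0  ],
    [0,   1/3, 1/6, 0,   0,   0,   0,   1/2],
    [0,   1/2, 0,   1/6, 0,   0,   0,   1/3],
    [0,   0,   1/2, 1/3, 0,   0,   0,   1/6],
    [0,   0,   0,   0,   1/2, 1/3, 1/6, 0  ],
])

# pi P = pi  <=>  P.T pi^T = pi^T: take the eigenvector of P.T for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))   # index of the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                  # normalise to a probability vector

print(pi)  # every entry is 0.125: P is doubly stochastic, so pi is uniform
```

Because $P$ is symmetric (hence doubly stochastic), the uniform vector $\pi_i = \frac18$ comes out, as expected.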

Another answer:

EDIT: found an error, fixed.

Hm, it seems I misled you earlier. We have two dominant eigenvalues here, $1$ and $-1$.

Here's the output of Python (check that I haven't made a mistake entering the matrix):

>>> import numpy
>>> numpy.linalg.eig(numpy.array([[0, 1.0/6, 1.0/3, 1.0/2, 0, 0, 0, 0], [1.0/6, 0, 0, 0, 1.0/3, 1.0/2, 0, 0], [1.0/3, 0, 0, 0, 1.0/6, 0, 1.0/2, 0], [1.0/2, 0, 0, 0, 0, 1.0/6, 1.0/3, 0], [0, 1.0/3, 1.0/6, 0, 0, 0, 0, 1.0/2], [0, 1.0/2, 0, 1.0/6, 0, 0, 0, 1.0/3], [0, 0, 1.0/2, 1.0/3, 0, 0, 0, 1.0/6], [0, 0, 0, 0, 1.0/2, 1.0/3, 1.0/6, 0]]))

(array([ -1.00000000e+00 +0.00000000e+00j,
          1.00000000e+00 +0.00000000e+00j,
         -6.66666667e-01 +0.00000000e+00j,
          6.66666667e-01 +0.00000000e+00j,
         -3.33333333e-01 +0.00000000e+00j,
          3.33333333e-01 +0.00000000e+00j,
          7.36608398e-18 +1.52386248e-17j,
          7.36608398e-18 -1.52386248e-17j]),
array([[ 0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.46605803 -2.20397973e-18j,  0.46605803 +2.20397973e-18j],
   [-0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.05343861 -1.73015110e-01j,  0.05343861 +1.73015110e-01j],
   [-0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
     0.05343861 -1.73015110e-01j,  0.05343861 +1.73015110e-01j],
   [-0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.05343861 +1.73015110e-01j, -0.05343861 -1.73015110e-01j],
   [ 0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
     0.46605803 +0.00000000e+00j,  0.46605803 +0.00000000e+00j],
   [ 0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
    -0.46605803 -8.19250267e-18j, -0.46605803 +8.19250267e-18j],
   [ 0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
    -0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
    -0.46605803 -1.24344437e-17j, -0.46605803 +1.24344437e-17j],
   [-0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j, -0.35355339 +0.00000000e+00j,
     0.35355339 +0.00000000e+00j,  0.35355339 +0.00000000e+00j,
    -0.05343861 +1.73015110e-01j, -0.05343861 -1.73015110e-01j]]))

The first array in the output holds the eigenvalues; the columns of the second array are the corresponding eigenvectors, in the same order.

So, $A^n\cdot w=c_1 \lambda_1^n v_1+c_2 \lambda_2^n v_2+\dots+c_8 \lambda_8^n v_8$, where $A$ is your matrix, $w$ is your initial state vector, $v_1,\dots,v_8$ are the eigenvectors, $\lambda_1,\dots,\lambda_8$ are the eigenvalues, and $c_1,\dots,c_8$ are the coefficients of the decomposition of $w$ into eigenvectors. When you raise the eigenvalues to a large power, e.g. $1000$, only the eigenvalues of largest absolute value remain significant; all the others vanish (because $0.99^{1000}\ll 1^{1000}$).
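(A quick sanity check of this identity, my own sketch using the same matrix as above: compare $A^n w$ computed directly against the eigendecomposition sum.)

```python
import numpy as np

# Same transition matrix as in the question (state 0 = HHH, state 7 = TTT).
A = np.array([
    [0,   1/6, 1/3, 1/2, 0,   0,   0,   0  ],
    [1/6, 0,   0,   0,   1/3, 1/2, 0,   0  ],
    [1/3, 0,   0,   0,   1/6, 0,   1/2, 0  ],
    [1/2, 0,   0,   0,   0,   1/6, 1/3, 0  ],
    [0,   1/3, 1/6, 0,   0,   0,   0,   1/2],
    [0,   1/2, 0,   1/6, 0,   0,   0,   1/3],
    [0,   0,   1/2, 1/3, 0,   0,   0,   1/6],
    [0,   0,   0,   0,   1/2, 1/3, 1/6, 0  ],
])

w = np.zeros(8)
w[0] = 1.0                          # initial state HHH

vals, vecs = np.linalg.eig(A)       # eigenvalues and eigenvector columns
c = np.linalg.solve(vecs, w)        # coefficients: w = sum_i c_i v_i

n = 9
direct = np.linalg.matrix_power(A, n) @ w    # A^n w computed directly
via_eig = np.real(vecs @ (c * vals ** n))    # sum_i c_i lambda_i^n v_i
print(np.allclose(direct, via_eig))          # True
```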

In our case only the eigenvalues $1$ and $-1$ survive. This means that, after a large number of steps, any initial vector is projected onto the corresponding dominant eigenvectors.

Note that numpy returns the eigenvectors as the columns of the second array, in the same order as the eigenvalues. So the dominant eigenvectors are [0.35, -0.35, -0.35, -0.35, 0.35, 0.35, 0.35, -0.35] for eigenvalue -1 (the parity-alternating vector) and [-0.35, -0.35, -0.35, -0.35, -0.35, -0.35, -0.35, -0.35] (the uniform vector, up to sign) for eigenvalue 1.

We also need the decomposition coefficients $c_1$ and $c_2$ corresponding to eigenvalues $-1$ and $1$; I expect the two dominant terms to cancel in the last (TTT) coordinate for even $n$.

Ok, I found the coefficients $c_1,\dots,c_8$ of the decomposition. They are

>>> numpy.linalg.solve(eigenvectors, [1,0,0,0,0,0,0,0])
array([ 0.35355339 +3.11217606e-17j, -0.35355339 -6.72744505e-17j,
   -0.35355339 -1.64734712e-17j, -0.35355339 +9.12252483e-17j,
    0.35355339 +7.19520689e-18j,  0.35355339 -6.92044883e-18j,
    0.35355339 -2.28058319e-17j, -0.35355339 -1.17581670e-17j])

So the eigenvalues $-1$ and $1$ have coefficients $0.35355$ and $-0.35355$ respectively (the first two entries, matching the eigenvalue order in the output above).

Thus the probability of all tails oscillates: it is exactly $0$ for an even number of flips and tends to $2\cdot 0.35355^2 = 0.25$ for an odd number of flips. In particular, $\lim_{n\to\infty}Pr(X_n=0)$ does not exist; only the limits along even and odd $n$ (namely $0$ and $\frac{1}{4}$) do.
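(A check of this final claim, my own sketch: keep only the two eigenvalues of modulus $1$ and look at their contribution to the TTT coordinate of $A^n w$.)

```python
import numpy as np

# Same transition matrix as in the question (state 0 = HHH, state 7 = TTT).
A = np.array([
    [0,   1/6, 1/3, 1/2, 0,   0,   0,   0  ],
    [1/6, 0,   0,   0,   1/3, 1/2, 0,   0  ],
    [1/3, 0,   0,   0,   1/6, 0,   1/2, 0  ],
    [1/2, 0,   0,   0,   0,   1/6, 1/3, 0  ],
    [0,   1/3, 1/6, 0,   0,   0,   0,   1/2],
    [0,   1/2, 0,   1/6, 0,   0,   0,   1/3],
    [0,   0,   1/2, 1/3, 0,   0,   0,   1/6],
    [0,   0,   0,   0,   1/2, 1/3, 1/6, 0  ],
])

w = np.zeros(8)
w[0] = 1.0

vals, vecs = np.linalg.eig(A)
c = np.linalg.solve(vecs, w)

dom = np.abs(np.abs(vals) - 1.0) < 1e-9   # mask for eigenvalues 1 and -1
for n in (1000, 1001):
    # dominant-term contribution to the TTT coordinate of A^n w
    tail = np.real(np.sum(c[dom] * vals[dom] ** n * vecs[7, dom]))
    print(n, round(tail, 6))  # 0.0 for even n, 0.25 for odd n
```

The two dominant terms cancel in the TTT coordinate for even $n$ and add up to $0.25$ for odd $n$, confirming the oscillation.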