Markov chain converges to the same steady state for different initial probability vectors.


I was asked to write code to simulate the following Markov chain (a three-state stock-exchange model) and find the PMF of the random variable $X$.

The code I've written for simulating the given Markov chain:

import numpy as np

POWER = 20

# Transition matrix: row i holds the probabilities of moving from state i
P = np.array([[0.3, 0.2, 0.5],
              [0.4, 0.3, 0.3],
              [0.3, 0.4, 0.3]])

# Three different initial distributions, each starting in a single state
p1 = np.array([1, 0, 0])
p2 = np.array([0, 1, 0])
p3 = np.array([0, 0, 1])

# P^20: the 20-step transition probabilities
P_power = np.linalg.matrix_power(P, POWER)

print(p1 @ P_power)
print(p2 @ P_power)
print(p3 @ P_power)

The result of the code:

[0.33035714 0.30357143 0.36607143]
[0.33035714 0.30357143 0.36607143]
[0.33035714 0.30357143 0.36607143]

I cannot understand why this is the case. My best guess is that it has something to do with the graph of my Markov chain being complete, but I cannot prove it. I don't find the result intuitive, and I would appreciate any help on the matter.
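For what it's worth, the common limiting vector can also be computed directly, without raising $P$ to a large power: for an irreducible, aperiodic chain there is a unique stationary distribution $\pi$ satisfying $\pi P = \pi$, i.e. $\pi$ is a left eigenvector of $P$ for eigenvalue $1$, normalized to sum to $1$. A minimal sketch checking that this eigenvector matches the rows of $P^{20}$ (variable names here are my own):

```python
import numpy as np

# Transition matrix from the question
P = np.array([[0.3, 0.2, 0.5],
              [0.4, 0.3, 0.3],
              [0.3, 0.4, 0.3]])

# Left eigenvector of P for eigenvalue 1: solve pi P = pi,
# which is the same as P^T pi^T = pi^T.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1))      # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                        # normalize so probabilities sum to 1

print(pi)                                 # stationary distribution
print(np.linalg.matrix_power(P, 20)[0])   # first row of P^20, ~ same vector
```

Both prints give (approximately) `[0.33035714 0.30357143 0.36607143]`, i.e. $\pi = (37, 34, 41)/112$, so every initial vector is driven to the same $\pi$.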