For a Markov transition matrix and an initial state, calculate the probability of reaching a certain other state in k or fewer steps


So there is a vector n giving the initial state distribution and a Markov transition probability matrix M. I know I can calculate the probability of being in state s after exactly k steps as the s-th entry of n * M**k.
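As a minimal sketch of that computation, assuming a hypothetical 2-state chain and a start in state 0:

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1)
M = np.array([[0.25, 0.75],
              [0.45, 0.55]])
n = np.array([1.0, 0.0])  # start in state 0
k = 3

# Distribution over states after exactly k steps: n * M**k
dist_k = n @ np.linalg.matrix_power(M, k)
print(dist_k)  # entry s is the probability of being in state s at step k
```

Note this is the probability of *being* in s at step k, which is not the same as having *reached* s at some point during the first k steps.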

But what I want is to calculate the probability that I reach the state s in k steps or less. Is there any simpler method (e.g. using some tweaked matrix multiplication) than to:

  1. calculate the probability p that we reach state s in one step,
  2. assume we did not reach it, and therefore multiply 1-p by the probability that we reach state s in two steps to get probability p2, and so on up to k steps.

Is this approach actually valid?
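For comparison, one standard construction for "reached s within k steps" (an assumption on my part, not what the step-by-step accounting above does) is to make s absorbing: replace row s of M with the unit vector, so the chain stays in s once it arrives, and then take the k-th power.

```python
import numpy as np

M = np.array([[0.25, 0.75],
              [0.45, 0.55]])
s, k = 1, 2

A = M.copy()
A[s] = 0.0
A[s, s] = 1.0  # state s is now absorbing: once entered, never left

n = np.array([1.0, 0.0])  # start in state 0
# Entry s of n * A**k is the probability of having reached s
# at least once within the first k steps.
p_within_k = (n @ np.linalg.matrix_power(A, k))[s]
print(p_within_k)
```

For this chain the result is 0.75 + 0.25 * 0.75 = 0.9375, matching a direct enumeration of the two ways to hit state 1 within two steps.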

Now the main question: suppose I need to calculate this probability for every combination of starting state and final state (i.e. <number of states> ** 2 pairs). Is there then something simpler, an operation done purely on the matrix M, so that in the end I can multiply the resulting matrix (built for "k or fewer steps") with the input vector n and get the result for one starting state against all final states in a single multiplication. Is something like that possible?
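A sketch of the all-pairs case, under the absorbing-state assumption above: the modified matrix depends on which state is the target, so (as far as I can tell) each target state needs its own modified power, one loop pass per column of the result.

```python
import numpy as np

def reach_within_k(M, k):
    """R[i, j] = probability of reaching state j within k steps
    when starting from state i (with R[j, j] = 1 trivially)."""
    N = M.shape[0]
    R = np.empty((N, N))
    for s in range(N):
        A = M.copy()
        A[s] = np.eye(N)[s]  # make target state s absorbing
        R[:, s] = np.linalg.matrix_power(A, k)[:, s]
    return R

M = np.array([[0.25, 0.75],
              [0.45, 0.55]])
R = reach_within_k(M, 2)
print(R)
# Any row of R for a given start, or n @ R for a start distribution,
# then gives all target probabilities at once.
```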


I tried the sum approach suggested by Gregory for k=2 with NumPy and Python; it doesn't seem to work, as the results are not valid probabilities:

>>> import numpy as np
>>> m = np.array([[0.25, 0.75],
...               [0.45, 0.55]])
>>> m
array([[0.25, 0.75],
       [0.45, 0.55]])
>>> r = np.linalg.matrix_power(m, 0) + np.linalg.matrix_power(m, 1) + np.linalg.matrix_power(m, 2)
>>> r
array([[1.65, 1.35],
       [0.81, 2.19]])
>>> np.array([1,0]) @ r
array([1.65, 1.35])
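One observation about why these values exceed 1 (my reading, not from the source): each power M**t is row-stochastic, so summing the powers for t = 0..k gives rows summing to k+1; entry (i, j) of the sum is the expected number of visits to j during steps 0..k, not a probability.

```python
import numpy as np

M = np.array([[0.25, 0.75],
              [0.45, 0.55]])
k = 2

# Sum of M**0 + M**1 + ... + M**k: expected visit counts, not probabilities
r = sum(np.linalg.matrix_power(M, t) for t in range(k + 1))
print(r)
print(r.sum(axis=1))  # each row sums to k + 1 = 3
```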