Power series convergence of random walk transition matrix


I would like to find out if

$$ \sum_{t=0}^\infty P^t = \left( I- P \right)^{-1} ~,$$

where $P = D^{-1}W ~ $ is a random walk transition matrix.

$W$ is a matrix describing weights in a graph and $D$ is the degree matrix with $ d_{ii} = \sum_{j \sim i} w_{ij}$. If I'm not mistaken, for the convergence of a power series as above, all eigenvalues of $P$ must be smaller than $1$:

$$ | \lambda_i| \lt 1 $$

However, I also read in the literature that the largest eigenvalue of $P$ is $1$.

Can anyone help me figure out if the first equation is true, and prove it? If the information given is too general, let's assume the graph is connected.
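As a quick numerical check of the tension in the question, here is a sketch with NumPy on a hypothetical 4-vertex connected weighted graph (the graph and all variable names are made up for illustration): the largest eigenvalue of $P = D^{-1}W$ is $1$, so $I - P$ is singular and cannot be inverted.

```python
import numpy as np

# Hypothetical example: a small connected weighted graph on 4 vertices.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

D = np.diag(W.sum(axis=1))            # degree matrix, d_ii = sum_j w_ij
P = np.linalg.inv(D) @ W              # random walk transition matrix

rho = max(abs(np.linalg.eigvals(P)))  # spectral radius

print(rho)                                    # 1 (up to floating point)
print(np.linalg.matrix_rank(np.eye(4) - P))   # rank 3: I - P is singular
```

Since the graph is connected (and not bipartite here), $1$ is a simple eigenvalue, so $I - P$ has rank $n - 1$.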

Thank you!


Best answer:

I would like to offer an alternative interpretation of why the power series above does not converge, via its relation to transience/recurrence of the random walk. After which I will give you some suggestions of how one can make sense of the equation.

Recall that the Green's function of a random walk is the matrix $G(x,y)$ defined as the expected number of visits the random walk $(X_t)_{t \geq 0}$, makes to $y$, started from $x$. This can be written as

\begin{align*} G(x,y) &= \mathbf E_x \left[ \sum_{t \geq 0} \mathbf 1\{X_t = y\} \right] \\ & = \sum_{t \geq 0}P_x[X_t = y], \end{align*} where $x,y$ are vertices, and $\mathbf P_x, \mathbf E_x$ denote the law and expectation of the random walk started from $x$. The Chapman-Kolmogorov equation ensures that

\begin{align*} \mathbf P_x[X_t = y] = P^t_{x,y}. \end{align*}

Then it follows that the Green's function is given by the power series in the left hand side of your equation

\begin{align} G(x,y) & = \sum_{t \geq 0} P^t_{x,y}. \end{align}

We can now proceed with a few observations:

  • A priori we cannot say whether $G(x,y)$ is finite or not; however, assuming it is, then by the above $G(x,y) = \left[ (I - P)^{-1} \right]_{x,y}$.

  • Whether $G(x,y)$ is finite or not is exactly the question of transience/recurrence of the random walk. A walk is recurrent at $x$ if $G(x,x) = \infty$. One can show that, for a finite connected graph without absorbing states, $G(x,x) = \infty$ if and only if $G(y,z) = \infty$ for all vertices $y,z$.

  • We can now argue why $G(x,y) = \infty$: suppose not; then $G(x,z) < \infty$ for all $z$ (as a consequence of the previous remark). In particular this implies $P^t_{x,z} \rightarrow 0$, and so, fixing $\epsilon < |G|^{-1}$ where $|G|$ denotes the number of vertices, there exists $T \geq 0$ such that for all $z$ \begin{align*} P^t_{x,z} < \epsilon, \qquad t >T. \end{align*} But then \begin{align*} \sum_{z}P^t_{x,z} < \epsilon |G| < 1, \end{align*} which contradicts $\sum_{z}P^t_{x,z} = 1$. And so $G(x,y) = \infty$.
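The divergence of the Green's function on a finite graph can be observed numerically. A minimal sketch (using a hypothetical walk on the complete graph $K_3$; all names here are illustrative): the partial sums $\sum_{t=0}^T P^t$ grow roughly linearly in $T$ instead of converging.

```python
import numpy as np

# Random walk on K_3: P = D^{-1} W with all weights equal to 1.
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
P = W / W.sum(axis=1, keepdims=True)

def partial_green(P, T):
    """Truncated Green's function: sum_{t=0}^{T} P^t."""
    S = np.zeros_like(P)
    Pt = np.eye(len(P))
    for _ in range(T + 1):
        S += Pt
        Pt = Pt @ P
    return S

# The (x, x) entry grows without bound: roughly (T + 1) / 3 here,
# since the stationary distribution is uniform on 3 vertices.
for T in (10, 100, 1000):
    print(T, partial_green(P, T)[0, 0])
```

The entries never settle down; they track the expected number of visits up to time $T$, which for a recurrent walk tends to infinity.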

There are then two ways to make sense of the first equation.

First of all, you could work on an infinite graph: you then still need to ascertain whether or not the walk is transient, and this question is not so easy to answer in the setting of infinite graphs. You also need to make sense of the notion of an infinite matrix, $P$.

Alternatively, you can introduce a killing rate $\kappa_i \geq 0$ at each site, and redefine the diagonal degree-matrix $D$ by $D_{i,i} = \kappa_i + \sum_{j}W_{i,j}$. So long as there is one site $i$ for which $\kappa_i > 0$, the matrix $P = D^{-1}W$ is now sub-stochastic, and corresponds to a walk which at each step might get killed (i.e. the process stops). Up until the time at which the process stops, the dynamics are exactly the same as the original process without killing. Since the lifetime of such a process is almost surely finite, the Green's function $G(x,y) < \infty$.
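The killing construction can be verified directly. A sketch (again on a hypothetical small graph, with a killing rate chosen arbitrarily at one vertex): once $P$ is sub-stochastic, the power series converges and agrees with $(I - P)^{-1}$.

```python
import numpy as np

W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
kappa = np.array([1.0, 0.0, 0.0])    # killing rate at vertex 0 only

# Redefined degree matrix: D_ii = kappa_i + sum_j W_ij.
D = np.diag(kappa + W.sum(axis=1))
P = np.linalg.inv(D) @ W             # sub-stochastic: row 0 sums to 2/3

G = np.linalg.inv(np.eye(3) - P)     # closed form for the Green's function

# Compare against the truncated power series sum_{t=0}^{499} P^t.
S, Pt = np.zeros_like(P), np.eye(3)
for _ in range(500):
    S += Pt
    Pt = Pt @ P

print(np.allclose(S, G))             # True: the series converges to (I - P)^{-1}
```

Because the walk is eventually killed almost surely, the spectral radius of $P$ drops strictly below $1$ and the geometric series converges.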

One final remark is that this construction via killing is the same as if you have a random walk with an absorbing vertex (i.e. one you never leave), and then look only at the transition matrix $P$ restricted to the non-absorbing states.

Another answer:

$1$ is an eigenvalue of multiplicity $1$ of a stochastic matrix as long as your graph edges give a path between every pair of vertices $(v,w)$, and the matrix summation will not converge if $1$ is an eigenvalue. Actually, the matrix summation wouldn't converge for any stochastic matrix. You can see this by noting that every power of a stochastic matrix is again a stochastic matrix, so each row of $P^t$ sums to $1$ and must therefore contain at least one element greater than or equal to $1/n$ if your matrix is $n \times n$. Thus $P^t \not\to 0$, and the series doesn't converge element-wise.
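This row-sum argument is easy to check numerically. A minimal sketch (the particular $3 \times 3$ stochastic matrix below is just an example): every power still has row sums $1$, so some entry of $P^t$ is always at least $1/n$.

```python
import numpy as np

# An example stochastic matrix (rows sum to 1).
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

Pt = np.linalg.matrix_power(P, 50)

print(Pt.sum(axis=1))        # each row of P^50 still sums to 1
print(Pt.max() >= 1 / len(P))  # some entry is at least 1/n, so P^t does not -> 0
```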