Convergence of a particular fixed point iteration scheme


Setup

I have the following non-linear system of equations: $$ \mathbf{x} P(\mathbf{x}) = 0 $$ where

  • $\mathbf{x} \in \mathbb{R}_{>0}^n$ is a probability distribution, i.e., $\sum_i x_i = 1$, and
  • $P(\mathbf{x})$ is an $n \times n$ matrix that depends on $\mathbf{x}$ (hence the non-linearity). $P$ is the transition rate (infinitesimal generator) matrix of a Markov chain, though this is probably not important.
  • Not sure if important: for $i \ne j$, $P_{ij}$ is a strictly convex function of $\mathbf{x}$, and $P_{ii} = - \sum_{j \ne i} P_{ij}$ is concave.

Furthermore, I know the following:

  • This system has a unique fixed / stationary point, and
  • The dynamical system $\mathbf{\dot{x}} = \mathbf{x} P(\mathbf{x})$ converges to this fixed point from any starting point.
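As a sanity check, the continuous-time dynamics can be simulated directly. Below is a minimal sketch in Python; the two-state generator and its rates `q01`, `q10` are made up purely for illustration (the question does not specify $P$). Forward Euler keeps $\sum_i x_i = 1$ exactly, because the rows of $P(\mathbf{x})$ sum to zero.

```python
import numpy as np

def P(x):
    """Hypothetical 2-state generator whose rates depend on x.
    Each row sums to zero, as for any infinitesimal generator."""
    q01 = 1.0 + x[1] ** 2          # made-up strictly convex rate
    q10 = 2.0 + x[0] ** 2
    return np.array([[-q01,  q01],
                     [ q10, -q10]])

def integrate(x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of x' = x P(x).  The iterate stays a
    probability vector because the rows of P(x) sum to zero."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (x @ P(x))
    return x

x_star = integrate([0.5, 0.5])     # approaches the unique fixed point
```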

Problem

I want to show that the following iterative scheme converges to the fixed point.

  1. pick any initial $\mathbf{x}_1$, e.g., $\mathbf{x}_1 = [1/n, \ldots, 1/n]$
  2. given $\mathbf{x}_k$, find $\mathbf{x}_{k+1}$ by setting $P_k = P(\mathbf{x}_k)$ and solving the linear system $\mathbf{x} P_k = 0$ (together with $\sum_i x_i = 1$)
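For concreteness, the scheme can be sketched as follows. The two-state generator is hypothetical (invented for illustration); the singular system $\mathbf{x} P_k = 0$ is made solvable by replacing one of its equations with the normalization $\sum_i x_i = 1$.

```python
import numpy as np

def P(x):
    """Hypothetical 2-state generator depending on x (rows sum to zero)."""
    q01 = 1.0 + x[1] ** 2
    q10 = 2.0 + x[0] ** 2
    return np.array([[-q01,  q01],
                     [ q10, -q10]])

def stationary(Q):
    """Solve x Q = 0 with sum(x) = 1 for a fixed generator Q by
    replacing the last (redundant) equation with the normalization."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0                 # overwrite one equation with sum_i x_i = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def iterate(x0, tol=1e-12, max_iter=100):
    """The scheme above: freeze P at x_k, solve the linear stationarity
    problem, and repeat until the iterates stop moving."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_next = stationary(P(x))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```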

Comments

I have done a lot of numerical simulations to convince myself that this iterative scheme converges, but I would like to prove it formally.

Any help is appreciated!

Answers


I have to admit that I don't have a full answer; this is just an educated guess. I recommend looking into the proofs for uniformization, because there you also solve for the fixed point of a dynamical system (a CTMC) via linearization (a DTMC). It might be possible to use uniformization for non-homogeneous Markov chains as well, as in your case, but you would have to check the proofs.

By non-homogeneity I mean what you call non-linearity: let $\tilde{P}$ be a function of $\mathbf{x}_{k}$, i.e., include the non-linearity in the uniformization. You might need to choose the uniformization constant carefully, i.e., according to the extreme cases of $\mathbf{x}_{k}$. If you can then show that none of your changes (relative to standard uniformization) violates the conditions assumed in the proofs, then you know that your iterative scheme converges to the fixed point you are interested in.
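To make the suggestion concrete, here is a sketch of standard (fixed-generator) uniformization: for any rate bound $\lambda \ge \max_i |Q_{ii}|$, the matrix $I + Q/\lambda$ is stochastic and has the same stationary vector as the generator $Q$, since $\mathbf{x} Q = 0 \iff \mathbf{x}(I + Q/\lambda) = \mathbf{x}$. The generator below is a made-up example.

```python
import numpy as np

def uniformize(Q, lam=None):
    """Uniformization: for a generator Q and a rate bound
    lam >= max_i |Q_ii|, the matrix I + Q/lam is stochastic and has
    the same stationary distribution as Q."""
    if lam is None:
        lam = np.max(np.abs(np.diag(Q)))
    return np.eye(Q.shape[0]) + Q / lam

# Toy fixed generator: x Q = 0 and x T = x have the same solution.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
T = uniformize(Q)
x = np.array([0.5, 0.5])
for _ in range(200):               # power iteration on the uniformized chain
    x = x @ T
```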

On top of that, it might be worth looking into discretization of dynamical systems...


I tried to follow @Bernhard's recommendations; this is the progress I have made so far. Uniformization by itself didn't help much: the CTMC converges, so clearly the corresponding uniformized (discrete-time) chain converges as well.

Consider the uniformized (linearized) chain. Instead of making a single "small" change at each discrete step (a direct consequence of the small uniformization constant), I let this linearized chain run until convergence. Then I uniformize again and repeat the process.
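That reinterpretation can be sketched as follows. The two-state generator is hypothetical (invented for illustration): each outer step freezes the generator at the current iterate, uniformizes it, and runs the resulting discrete chain for many steps before re-linearizing.

```python
import numpy as np

def P_gen(x):
    """Hypothetical 2-state generator depending on x (rows sum to zero)."""
    q01 = 1.0 + x[1] ** 2
    q10 = 2.0 + x[0] ** 2
    return np.array([[-q01,  q01],
                     [ q10, -q10]])

def outer_iteration(x0, outer=50, inner=500):
    """Freeze the generator at the current iterate, uniformize it, run the
    resulting discrete chain until (near) convergence, then re-linearize."""
    x = np.array(x0, dtype=float)
    for _ in range(outer):
        Q = P_gen(x)
        lam = np.max(np.abs(np.diag(Q)))
        T = np.eye(len(x)) + Q / lam   # uniformized transition matrix
        for _ in range(inner):
            x = x @ T                  # inner chain, run to convergence
    return x
```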

However, @Bernhard's answer prompted me to look into results on nonhomogeneous / nonlinear Markov chains.

  1. Nonhomogeneous Markov chains. There is a flurry of results, but unfortunately none of them seems to apply to my case. In particular, most results require asymptotic homogeneity of the transition matrix, which doesn't advance me much (it is clear that asymptotic homogeneity is equivalent to the convergence of my method!). See, e.g., Seneta's book, Non-negative Matrices and Markov Chains.

  2. Believe it or not, there is actually some recent theory on nonlinear Markov chains. In a few words, it is a stochastic "way of thinking" about any mapping on the probability simplex. Unfortunately, the conditions for convergence are difficult to check; see On Ergodic Properties of Nonlinear Markov Chains.

In summary: nothing conclusive so far!