If $\kappa$ is a Markov kernel symmetric wrt $\mu$ and $f\in L^2(\mu)$ with $\int f\:{\rm d}\mu=0$ and $\kappa f=f$, then $f=0$


Let $(E,\mathcal E,\mu)$ be a probability space and $\kappa$ be a Markov kernel on $(E,\mathcal E)$ such that $\mu$ is reversible with respect to $\kappa$. Regard $\kappa$ as a self-adjoint contraction on $L^2(\mu)$, $$\kappa f:=\int\kappa(\;\cdot\;,{\rm d}y)f(y)\;\;\;\text{for }f\in L^2(\mu).$$
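As a quick numerical illustration of this self-adjointness (a minimal sketch with made-up data: $E=\{0,1,2\}$, $\mu$ uniform, and a hypothetical symmetric transition matrix `k`), reversibility gives $\langle\kappa f,g\rangle_\mu=\langle f,\kappa g\rangle_\mu$:

```python
import numpy as np

# Hypothetical finite example: E = {0, 1, 2}, mu uniform.
# k is symmetric with rows summing to 1, so mu is reversible for k.
k = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

mu = np.full(3, 1 / 3)
f = np.array([1.0, 0.0, -1.0])
g = np.array([2.0, -1.0, 0.5])

inner = lambda u, v: np.sum(u * v * mu)  # the inner product <u, v> in L^2(mu)
lhs = inner(k @ f, g)   # <kappa f, g>
rhs = inner(f, k @ g)   # <f, kappa g>
print(np.isclose(lhs, rhs))  # True: kappa is self-adjoint on L^2(mu)
```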

I've read that we eliminate the eigenvalue $1$ from the spectrum $\sigma(\kappa)$ of $\kappa$ by forming the restriction $\kappa_0$ of $\kappa$ to $$L^2_0(\mu):=\left\{f\in L^2(\mu):\int f\:{\rm d}\mu=0\right\}.$$ How can we verify this?

Clearly, $$\mathbb R^\perp=L^2_0(\mu)\tag1$$ (i.e. the orthogonal of the constant functions in $L^2(\mu)$ is $L^2_0(\mu)$).

Now, $1$ is an eigenvalue if and only if there is an $f\in L^2(\mu)\setminus\{0\}$ with $(1-\kappa)f=0$, i.e. $\kappa f=f$. On the other hand, we easily see that $\kappa c=c$ for each constant $c\in\mathbb R$ ... So, $$\mathbb R\subseteq\mathcal N(1-\kappa),$$ but it seems like we need to show the other inclusion as well. How can we do that?

BEST ANSWER

The formal question that you asked seems to be: "why is the kernel of the operator $\textrm{id}-\kappa$ equal to the set of constant functions"? And my answer to this question is that it is not, for general reversible Markov kernels - just take $\kappa=\textrm{id}$, which satisfies all your conditions, and then $L^2(\mu)$ is equal to the kernel!

To better illustrate what is going on, consider the special case when $E$ is a finite set, $\mathcal E=2^E$, and $\mu$ is the uniform distribution on $E$. Consider the $E\times E$ matrix $k_{e,e'}=(\kappa 1_e)(e')$, which uniquely determines $\kappa$ since $$ (\kappa f)(x)=\sum_{e\in E}f(e)(\kappa 1_e)(x)=\sum_{e\in E}k_{e,x}f(e). $$
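This matrix representation can be checked numerically (a minimal sketch; the state set, the kernel matrix `k`, and the function `f` below are made-up examples). Since $k_{e,x}=(\kappa 1_e)(x)$, applying $\kappa$ amounts to multiplying by the transpose of `k`; for a kernel reversible with respect to the uniform measure, `k` is symmetric, so the transpose does not matter:

```python
import numpy as np

# Hypothetical 3-state example: E = {0, 1, 2}, mu uniform.
# k is symmetric and doubly stochastic, so mu (uniform) is reversible for k.
k = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

f = np.array([1.0, -2.0, 3.0])  # an arbitrary function E -> R

# (kappa f)(x) = sum_e k[e, x] * f(e), i.e. kappa f = k^T f (= k f here, by symmetry)
kf = k.T @ f
print(kf)  # [0.75, 0.0, 1.25]

# kappa fixes constants, and preserves the mean since mu is stationary:
print(k.T @ np.ones(3))       # [1.0, 1.0, 1.0]
print(np.mean(kf), np.mean(f))  # both 2/3
```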

Observe that in this case $L^2(\mu)=\mathbb R^E$, and $L_0^2(\mu)$ is the hyperplane orthogonal to the all-ones vector. Moreover, the spectrum of $\kappa$ coincides with the set of eigenvalues of the matrix $k$. The restriction of $\kappa$ to $L^2_0(\mu)$ corresponds to compressing $k$ to that hyperplane (i.e. conjugating by the orthogonal projection onto it). This compression can still have $1$ in its spectrum, as the example I gave above, $\kappa=\textrm{id}$ (which corresponds to $k$ being the identity matrix), shows. You would need some further assumption (for instance, irreducibility of the corresponding Markov chain) in order to ensure that the eigenvalue $1$ is simple (i.e. its eigenspace has dimension $1$), and therefore that you can "remove" the eigenvalue $1$ from the spectrum in this manner.
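This contrast can be made concrete (a sketch with hypothetical matrices: a lazy walk on the 3-cycle as the irreducible case, and the identity as the reducible extreme). Compressing to the mean-zero hyperplane removes the eigenvalue $1$ in the first case but not in the second:

```python
import numpy as np

n = 3
u = np.ones(n) / np.sqrt(n)          # unit vector spanning the constants
P = np.eye(n) - np.outer(u, u)       # orthogonal projector onto L^2_0(mu)

def compressed_spectrum(k):
    """Eigenvalues of the compression of a symmetric kernel k to the hyperplane."""
    return np.linalg.eigvalsh(P @ k @ P)

# Irreducible example: lazy random walk on the 3-cycle (symmetric, doubly stochastic).
k_irred = np.array([[0.5, 0.25, 0.25],
                    [0.25, 0.5, 0.25],
                    [0.25, 0.25, 0.5]])
print(np.max(compressed_spectrum(k_irred)))    # 0.25: strictly below 1

# Reducible extreme: kappa = id keeps the eigenvalue 1 on L^2_0(mu).
print(np.max(compressed_spectrum(np.eye(n))))  # 1.0
```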

It may also be enlightening to consider the simple random walk on a finite graph with $C$ connected components. The corresponding Markov transition kernel is reducible when $C>1$, and there are $C$ distinct stationary measures, obtained by picking one component and taking the stationary measure of the walk restricted to it (i.e., the measure of a vertex is proportional to its degree within the selected component, and is $0$ on the other components). These give $C$ linearly independent "left eigenvectors" for the eigenvalue $1$, and illustrate that the dimension of the $1$-eigenspace can be arbitrarily large without the assumption of irreducibility.
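The multiplicity claim can be checked on the smallest disconnected example (a made-up sketch: the graph with two components, each a single edge, so $C=2$); the eigenvalue $1$ of the walk's transition matrix then has multiplicity $2$:

```python
import numpy as np

# Simple random walk on a graph with two components: edges {0,1} and {2,3}.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
k = A / deg[:, None]  # transition matrix of the simple random walk

# All degrees are equal here, so k happens to be symmetric and eigvalsh applies.
eigvals = np.linalg.eigvalsh(k)
multiplicity = int(np.sum(np.isclose(eigvals, 1.0)))
print(multiplicity)  # 2: one indicator eigenvector per component
```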