$P$ is the transition matrix of a finite, irreducible and reversible Markov chain (i.e. a random walk on a finite network $G$), and $\pi$ is its stationary distribution. For $y \in G$, $\tau_y = \min\{n \geq 0 : X_n = y\}$ is the hitting time of $y$, and $\tau_y^+ = \min\{n \geq 1 : X_n = y\}$ is the first hitting time from time $1$ onward.
Is it true that $\pi(x) P_x ( \tau_y < \tau_x^+) = \pi(y) P_y( \tau_x < \tau_y^+)$?
This seems to be a necessary fact in an exercise I'm working on. (But perhaps it is false?)
I tried writing out:
$\pi(x) P_x( \tau_y < \tau_x^+) = \sum_w \pi(x) P(x,w) P_w( \tau_y < \tau_x) = \sum_w \pi(w) P(w,x) P_w( \tau_y < \tau_x) $, since I thought the fact that $P_w(\tau_y < \tau_x)$ is harmonic in $w$ away from $\{x,y\}$ might be useful, but this doesn't seem to lead anywhere.
A hint would be really appreciated!
Note first that since the measures on the left- and right-hand sides start the chain at $x$ and $y$, respectively, and $x \neq y$, we have $\tau_y = \tau_y^+$ under $P_x$ and $\tau_x = \tau_x^+$ under $P_y$, so we may replace $\tau$ by $\tau^+$ throughout.
Now observe that since the Markov chain is reversible, there exists a positive weight function $w: \mathcal{X} \to \mathbb{R}^+$ satisfying the detailed balance conditions, i.e. \begin{align*} w(x) p(x,y) = w(y) p(y,x). \end{align*} Since $\mathcal{X}$ is finite, $\sum_{x \in \mathcal{X}} w(x) \doteq A < \infty$, and since any scalar multiple of a weight function is again a weight function, $\pi = w/A$ is a stationary distribution (sum both sides of the detailed balance equation over $x$). Since the chain is irreducible and a stationary distribution exists, we must have $\pi(x) > 0$ for all $x \in \mathcal{X}$ (exercise!). Hence the chain is recurrent, which implies that stationary measures are unique up to constant multiples.
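As a quick numerical sanity check (a sketch only; the edge weights below are made-up example values), one can verify that normalizing a weight function satisfying detailed balance does give a stationary distribution:

```python
import numpy as np

# Hypothetical weighted triangle graph on vertices {0, 1, 2}:
# W[i, j] is the edge weight (conductance) between i and j.
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])

w = W.sum(axis=1)        # vertex weight w(x) = sum of incident edge weights
P = W / w[:, None]       # transition matrix P(x, y) = W(x, y) / w(x)

# Detailed balance: w(x) P(x, y) = W(x, y), a symmetric matrix.
assert np.allclose(w[:, None] * P, (w[:, None] * P).T)

# Normalizing w yields a stationary distribution: pi P = pi.
pi = w / w.sum()
assert np.allclose(pi @ P, pi)
```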
One way to get the result is to view the Markov chain as an electrical network with conductance function $C(x) = w(x)$ and invoke the escape-probability formula \begin{align*} P_x(\tau^+_y < \tau^+_x) = \frac{1}{C(x)R_{\text{eff}}}, \end{align*} where $R_{\text{eff}}$ is the effective resistance between $x$ and $y$. Dividing this by the analogous equation for $y$ immediately gives \begin{align*} \frac{P_x(\tau^+_y < \tau_x^+)}{P_y(\tau^+_x < \tau_y^+)} = \frac{C(y)}{C(x)} = \frac{\pi(y)}{\pi(x)}, \end{align*} as desired.
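The escape-probability formula can be checked numerically on a small network (the weighted triangle below is a hypothetical example; $R_{\text{eff}}$ is computed via the Laplacian pseudoinverse):

```python
import numpy as np

# Hypothetical weighted triangle on vertices {0, 1, 2} (example conductances).
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
C = W.sum(axis=1)            # C(x): total conductance at vertex x
P = W / C[:, None]           # transition matrix of the associated walk

x, y = 0, 1

# Effective resistance between x and y via the Laplacian pseudoinverse.
L = np.diag(C) - W
Lplus = np.linalg.pinv(L)
R_eff = Lplus[x, x] + Lplus[y, y] - 2 * Lplus[x, y]

# Escape probability P_x(tau_y^+ < tau_x^+) computed directly:
# h(w) = P_w(tau_y < tau_x) with h(x) = 0, h(y) = 1; the only interior
# vertex is 2, whose neighbours are exactly x and y, so h(2) = P(2, y).
h = np.array([0.0, 1.0, P[2, y]])
escape = P[x] @ h            # first-step decomposition from x

print(escape, 1.0 / (C[x] * R_eff))   # both equal 11/12
```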
To obtain the result without using electrical network theory, let $x$ be a recurrent state (all states are recurrent by the above discussion) and define the measure \begin{align*} \mu_x(y) \doteq E_{x}\sum_{n=0}^{\tau_x^+ - 1} \textbf{1}_{\{X_n = y\}}, \end{align*} i.e. the expected number of visits to $y$, starting at $x$, before returning to $x$. This defines a stationary measure, and is thus a scalar multiple of $\pi$ (stationary measures are unique up to constant multiples for irreducible, recurrent chains). Note that by definition $\mu_x(x) = 1$. Hence, \begin{align*} \frac{\pi(y)}{\pi(x)} = \frac{\mu_x(y)}{\mu_x(x)} = \mu_x(y). \end{align*}

To compute this expectation, write $N_x(y) \doteq \sum_{n=0}^{\tau_x^+ -1}\textbf{1}_{\{X_n = y\}}$; it then suffices to show that \begin{align*} E_x(N_x(y)) = \frac{P_x(\tau_y^+ < \tau_x^+)}{P_y(\tau_x^+ < \tau_y^+)}. \end{align*} Indeed, \begin{align*} E_x(N_x(y)) &= E_x(N_x(y) \textbf{1}_{\{\tau_y^+ < \tau_x^+\}} + N_x(y) \textbf{1}_{\{\tau_y^+ > \tau_x^+\}}) \\ &= E_x(N_x(y) \textbf{1}_{\{\tau_y^+ < \tau_x^+\}}) \\ &= E_x(N_x(y) \mid \tau_y^+ < \tau_x^+) P_x(\tau_y^+ < \tau_x^+), \end{align*} where the second equality holds because the number of visits to $y$ before returning to $x$ is $0$ on the event that we hit $x$ before $y$.

Now, conditioned on $\tau_y^+ < \tau_x^+$, the strong Markov property shows that $N_x(y) \in \{1,2,3,\dots\}$ is a geometric random variable with success parameter $P_y(\tau_x^+ < \tau_y^+)$: break time into excursions separated by the successive visits to $\{x,y\}$; each excursion from $y$ independently ends the count by hitting $x$, like flipping a coin with bias $P_y(\tau_x^+ < \tau_y^+)$. Hence the conditional expectation is $\frac{1}{P_y(\tau_x^+ < \tau_y^+)}$, which yields \begin{align*} \frac{\pi(y)}{\pi(x)} = E_x(N_x(y)) = \frac{P_x(\tau_y^+ < \tau_x^+)}{P_y(\tau_x^+ < \tau_y^+)}, \end{align*} as desired.
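The identity itself is easy to confirm numerically. The sketch below (with a made-up transition matrix) solves the harmonic equations for the escape probabilities on a small chain that is irreducible but not reversible:

```python
import numpy as np

# A small irreducible chain that is NOT reversible (hypothetical example).
P = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

# This chain is not reversible: pi(i) P(i,j) != pi(j) P(j,i) for some i, j.
flows = pi[:, None] * P
assert not np.allclose(flows, flows.T)

def escape(P, x, y):
    """P_x(tau_y^+ < tau_x^+), via h(w) = P_w(tau_y < tau_x):
    h(x) = 0, h(y) = 1, and h is harmonic at the interior vertices."""
    n = len(P)
    h = np.zeros(n)
    h[y] = 1.0
    interior = [w for w in range(n) if w not in (x, y)]
    A = np.eye(len(interior)) - P[np.ix_(interior, interior)]
    b = P[np.ix_(interior, [y])].ravel()
    h[interior] = np.linalg.solve(A, b)
    return P[x] @ h          # first-step decomposition from x

x, y = 0, 1
assert np.isclose(pi[x] * escape(P, x, y), pi[y] * escape(P, y, x))
```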
Notice that we didn't use the hypothesis of reversibility in the second argument, in contrast to the first, where it was used to define the conductance function for the electrical network (which made the proof much quicker).