I want to know if my reasoning here is correct; it seems simple enough, but I just want clarification. (I am considering the proof that if a Markov process satisfies the detailed balance condition, then it is reversible.)
If $X_{t}$ is a discrete state space Markov process, let $\pi_{x}^{(n)} = P(X_{n} = x)$, $\pi$ be the stationary distribution and $P(X_{n+1}=y|X_{n}=x) = P_{xy}$.
Then,
$P(X_{n} = x |X_{n+1}=y) = \large\frac{P(X_{n+1}=y|X_{n}=x)P(X_{n}=x)}{P(X_{n+1}=y)} = \frac{P_{xy}P(X_{n}=x)}{P(X_{n+1}=y)} = \large\frac{P_{xy}\pi_{x}}{\pi_{y}}$.
It is the last equality that isn't quite so obvious to me.
Can I justify it by saying that in fact:
$\large\frac{P_{xy}P(X_{n}=x)}{P(X_{n+1}=y)} = \large\frac{P_{xy}\pi_{x}^{(n)}}{\pi_{y}^{(n+1)}}$,
but since $\pi^{(n)} \rightarrow \pi$ as $n\rightarrow\infty$ and since $P(X_{n} = x |X_{n+1}=y)$ is constant, I can let $n \rightarrow \infty$ in the above, so that
$\large\frac{P_{xy}\pi_{x}^{(n)}}{\pi_{y}^{(n+1)}} =\large\frac{P_{xy}\pi_{x}}{\pi_{y}}$?
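To see this limiting argument numerically, here is a minimal Python sketch; the two-state transition matrix is an arbitrary example I made up, not anything from a specific problem. Starting from a non-stationary $\pi^{(0)}$, the iterates $\pi^{(n)} = \pi^{(0)}P^{n}$ converge to the stationary distribution, so the ratio $P_{xy}\pi_{x}^{(n)}/\pi_{y}^{(n+1)}$ tends to $P_{xy}\pi_{x}/\pi_{y}$.

```python
import numpy as np

# Arbitrary 2-state transition matrix (rows sum to 1) -- a made-up example.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Start far from stationarity: deterministically in state 0.
dist = np.array([1.0, 0.0])

# Iterate pi^{(n+1)} = pi^{(n)} P; for this chain the second eigenvalue
# is 0.4, so convergence to the stationary distribution is fast.
for _ in range(200):
    dist = dist @ P

# The exact stationary distribution of this particular P is (5/6, 1/6).
assert np.allclose(dist, [5 / 6, 1 / 6])
# And it is indeed a fixed point: pi P = pi.
assert np.allclose(dist @ P, dist)
```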
Thanks for your insight and thoughts.
I think you are conflating two different things here: the stationary distribution, and the probability of a transition (and, for that matter, that of a path).
The stationary distribution gives you the probability that your process is in a specific state at any given time, provided that the process has reached its "stationary evolution". I intentionally put this in quotes because we say "steady state", yet we are not talking about the same "states". Consequently, the stationary distribution does not depend on $n$ at all; it is constant. You can typically find this distribution by analyzing the spectrum of the transition matrix associated with your process: it is the normalized left eigenvector for eigenvalue $1$.
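As a sketch of that spectral approach (the 3-state transition matrix below is an arbitrary example, not taken from the question):

```python
import numpy as np

# Arbitrary 3-state transition matrix; each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# The stationary distribution is the left eigenvector of P with
# eigenvalue 1, i.e. a right eigenvector of P transposed.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalize so the entries sum to 1

# pi is a fixed point of the evolution: pi P = pi, for every n.
assert np.allclose(pi @ P, pi)
```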
Speaking of which: the probability of a specific transition in your chain is given by the transition matrix, which defines your process altogether. Note that I am talking about first-order Markov processes here; a higher-order process could not be defined by such a matrix. The first-order hypothesis, along with the steady state and a few other assumptions (ergodicity, conservation of probability, ...), are the premises of the Chapman-Kolmogorov equations, which I believe are the equations you are trying to understand.
As a result of the first-order hypothesis, the probability of the next state given any path depends only on the last state of that path, not on how the process got there. Now if you can measure your state at a given time $t$, then you can compute the probability of being in each single state at the past time $t-1$ or the future time $t+1$, knowing that you are in a given state at time $t$. This is a sort of "conditional distribution", and you should proceed as you did, using Bayes' rule, or more simply with the conservation rules dictated by the Chapman-Kolmogorov equations.
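Here is a sketch of that Bayes-rule computation of the backward conditional; the two-state matrix is a hypothetical example, assumed to be started from its stationary distribution (so $P(X_{n}=x)=\pi_{x}$ for every $n$):

```python
import numpy as np

# Hypothetical 2-state transition matrix (any 2-state chain satisfies
# detailed balance with respect to its stationary distribution).
P = np.array([[0.7, 0.3],
              [0.6, 0.4]])
pi = np.array([2 / 3, 1 / 3])   # stationary distribution of this P
assert np.allclose(pi @ P, pi)  # sanity check: pi P = pi

def backward(x, y):
    """P(X_n = x | X_{n+1} = y) via Bayes' rule, at stationarity."""
    return P[x, y] * pi[x] / pi[y]

# Detailed balance pi_x P_xy = pi_y P_yx makes the backward kernel
# coincide with the forward one -- that is reversibility.
for x in range(2):
    for y in range(2):
        assert np.isclose(backward(y, x), P[x, y])
```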
Either way, my main point is that the stationary distribution is, by definition, stationary, and should therefore not depend on time. I hope this clarifies your thoughts :)