I have been reading a paper since this afternoon, and despite searching extensively I am still not clear how the authors derive one of its equations.
The paper is [1], 'Learning Random Walk Models for Inducing Word Dependency Distributions' by Toutanova, Manning, and Ng.
It discusses Markov chains and derives the stationary distribution of the random surfer model (i.e. PageRank). On page 2 the authors give the following equations.
The stationary distribution of a Markov chain is given by $\pi(s) = \lim_{t \to \infty} P(S_t=s)$, where $s$ ranges over the set of states $S$, $S_t$ is the state at time $t$, and $P(S_t=s)$ is the probability of being in state $s$ at time $t$.
Now, the Markov chains used in Brin and Page's PageRank paper have the following transition probabilities:
$$p(S_t \mid S_{t-1}) = \gamma\, p_0(S_t) + (1-\gamma)\, p'(S_t \mid S_{t-1})$$
where $p_0(s)$ is the initial distribution over states and $\gamma$ is the probability of resetting to the initial distribution at each step.
The authors then say it is very easy to show that the stationary distribution of this Markov chain is
$$\pi(s) = \gamma \sum_{t=0}^{\infty} (1-\gamma)^t\, P(S_t=s)$$
where, as I read it, $P(S_t=s)$ here is the probability of being in state $s$ after $t$ steps of the walk $p'$ alone (no resets), started from $p_0$.
How did they derive this?
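To make sure I have copied the equations correctly, I verified the claim numerically. The sketch below (my own NumPy check, not from the paper) builds a random walk matrix $P'$ and reset distribution $p_0$, forms the combined chain $M$ with rows $\gamma p_0 + (1-\gamma)P'_i$, computes $\pi$ by truncating the series $\gamma \sum_t (1-\gamma)^t p_0 (P')^t$, and confirms $\pi M = \pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5          # number of states (arbitrary small example)
gamma = 0.15   # reset probability

# Random reset distribution p0 and row-stochastic walk matrix P'
p0 = rng.random(n)
p0 /= p0.sum()
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)

# Combined chain: each row mixes a reset to p0 with a step of P'
M = gamma * np.tile(p0, (n, 1)) + (1 - gamma) * P

# Candidate stationary distribution:
#   pi = gamma * sum_{t>=0} (1-gamma)^t * (p0 after t steps of P')
pi = np.zeros(n)
pt = p0.copy()
for t in range(500):          # (1-gamma)^500 is negligible, so truncation is safe
    pi += gamma * (1 - gamma) ** t * pt
    pt = pt @ P               # advance the reset-free walk one step

print(np.allclose(pi @ M, pi))     # pi is stationary for the combined chain
print(np.isclose(pi.sum(), 1.0))   # and it is a proper distribution
```

Both checks pass, so the formula does hold; I just don't see the algebraic derivation behind it.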
[1] ai.stanford.edu/~ang/papers/icml04-learnrandomwalk.pdf