I run a random walk on a complete graph. The L2 norm between the old and new probability vectors, ||p(t+1) - p(t)||, decreases until some iteration x, after which it increases, then decreases again, and so on.
Does this mean something? How can I interpret it? When can I stop the random walk in this case?
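For reference, here is a minimal sketch of what evolving the exact distribution on a complete graph looks like (this assumes you are iterating p(t+1) = p(t) P with the walk's transition matrix, rather than estimating an empirical histogram from a single walker, which is one possible source of the oscillation you describe):

```python
import numpy as np

def walk_distribution_diffs(n=10, steps=12):
    """Evolve the exact distribution p(t+1) = p(t) @ P of the simple
    random walk on the complete graph K_n, and record the L2 norm
    ||p(t+1) - p(t)|| at every step."""
    # On K_n, from any vertex the walker jumps to each of the other
    # n-1 vertices with probability 1/(n-1); the diagonal is 0.
    P = (np.ones((n, n)) - np.eye(n)) / (n - 1)
    p = np.zeros(n)
    p[0] = 1.0                      # start concentrated at vertex 0
    diffs = []
    for _ in range(steps):
        p_next = p @ P
        diffs.append(np.linalg.norm(p_next - p))
        p = p_next
    return diffs

diffs = walk_distribution_diffs()
```

In this exact-distribution setting the successive differences shrink geometrically and monotonically (on K_n the non-Perron eigenvalue is -1/(n-1)), so if your measured norm goes up and down, it is worth checking whether you are tracking sampled frequencies rather than the distribution itself.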
I think I see your point. The theorem about random walks converging requires one additional condition (see also Markov chains): for large enough t' and all t > t', there is a non-zero probability that starting at i we end up at j, for all i, j; see en.wikipedia.org/wiki/… Basically, do a Jordan decomposition / eigenvalue decomposition of your Markov matrix A (or of a power A^l of it such that every entry is strictly positive), then try to solve (A^l)x = px where p has absolute value 1. A^l has all positive entries and size-1 Jordan blocks (i.e. it admits an eigenvalue decomposition); moreover it also has row sums equal to 1. Looking at the largest entry of x, this implies x = 1, so A^l has only one eigenvalue of magnitude 1. Hence A has only one eigenvalue of magnitude 1, with multiplicity 1 (all other eigenvalues are strictly smaller than 1 in magnitude) and they therefore disappear in the limit.
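The eigenvalue argument above can be illustrated numerically. A small sketch, using an arbitrary 3-state stochastic matrix with all-positive entries (so l = 1 already suffices): exactly one eigenvalue has magnitude 1, the rest are strictly smaller, and powers of the matrix converge to a rank-one limit whose rows are the stationary distribution.

```python
import numpy as np

# Arbitrary example of a row-stochastic matrix with strictly
# positive entries (an aperiodic, irreducible chain).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Eigenvalue magnitudes, sorted in decreasing order: the largest is
# 1 (the Perron eigenvalue), all others are strictly below 1.
mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]

# Because the subdominant eigenvalues have magnitude < 1, their
# contribution decays and P^t converges; each row of the limit is
# the stationary distribution pi, which satisfies pi @ P = pi.
limit = np.linalg.matrix_power(P, 200)
pi = limit[0]
```

This is only a demonstration of the mechanism, not of the original walk: for a specific graph you would use its own transition matrix, and for a periodic chain (where no power of A is entirely positive) other eigenvalues of magnitude 1 appear and the limit fails to exist.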