A Markov chain consists of states and transition probabilities, where the probability of reaching the next state depends only on the current state of the chain. The initial steps of the chain, called the burn-in period, are discarded because they are not representative of the converged (stationary) phase. Why is that? How can the burn-in steps influence the result or the process when the next state depends only on the previous one?
MCMC method - burn-in chains
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
Imagine this in the context of a weighted random walk on the plane, where the probability that we move closer to the origin depends on how far we are from it; for instance, from $\langle x,y\rangle$ with $x, y\geq 0$ we might move to $\langle x+1, y\rangle$ with probability $\frac12\cdot\frac1{x^2+2}$, to $\langle x-1, y\rangle$ with probability $\frac12\cdot\left(1-\frac1{x^2+2}\right)$, and similarly for the two moves to $\langle x, y\pm 1\rangle$ (and then extended symmetrically to negative $x$ and $y$).

Here the probability that we move from $\langle x,y\rangle$ to any of the four neighbors depends only on the point where we are, and not on how we got there; but the probability that we are at that point is a sum over all the possible paths we might have taken to reach it. The purpose of ignoring the burn-in is to move past the 'transient' portion that corresponds to, e.g., the fact that it's impossible to ever be at $\langle 5,3\rangle$ within the first six steps (starting from the origin), and to get to the steady-state distribution (or a good approximation to it).
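The walk described above can be simulated directly. The sketch below (my own illustration, not part of the answer) picks an axis uniformly, then moves away from the origin along that axis with probability $\frac1{x^2+2}$ and toward it otherwise, matching the transition probabilities given; it also demonstrates the transient fact mentioned: since each step changes $|x|+|y|$ by exactly 1, the point $\langle 5,3\rangle$ (Manhattan distance 8) is unreachable within the first six steps.

```python
import random

def step(x, y, rng):
    """One step of the weighted random walk from the answer.

    With probability 1/2 we move along the x-axis, else the y-axis.
    Along the chosen axis, we move *away* from the origin with
    probability 1/(coord^2 + 2), and *toward* it otherwise.
    These probabilities depend only on the current point (x, y),
    not on the path taken to reach it — the Markov property.
    """
    if rng.random() < 0.5:
        away = 1 if x >= 0 else -1          # direction away from origin
        if rng.random() < 1.0 / (x * x + 2):
            x += away
        else:
            x -= away
    else:
        away = 1 if y >= 0 else -1
        if rng.random() < 1.0 / (y * y + 2):
            y += away
        else:
            y -= away
    return x, y

if __name__ == "__main__":
    rng = random.Random(0)
    # Run many short walks from the origin: in 6 steps the walk can
    # never reach (5, 3), because |x| + |y| changes by 1 per step.
    for _ in range(1000):
        x, y = 0, 0
        for t in range(6):
            x, y = step(x, y, rng)
        assert (x, y) != (5, 3)
    print("(5, 3) never reached in 6 steps")
```

Only after enough steps does the distribution of the walker's position stop depending on the deterministic start at the origin; discarding the burn-in drops exactly this start-dependent transient.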
The next state depends only on the previous step, but that step, in turn, depends on the step before it, and so on. So although the Markov chain does not explicitly use the whole history of previous steps when computing the next step, the effect of all previous steps is implicit in the current state, which is what the next step is built from.
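To make the practical role of burn-in concrete, here is a minimal Metropolis sampler (my own sketch, not taken from either answer) targeting a standard normal, deliberately started far from the mode at $x_0 = 10$. The early samples reflect the arbitrary starting point rather than the target distribution, which is exactly why they are discarded:

```python
import math
import random

def metropolis_normal(n_samples, burn_in, seed=0, x0=10.0):
    """Metropolis sampler for a standard normal target density
    exp(-x^2 / 2), with a symmetric uniform(-1, 1) proposal.

    The chain starts at x0, far from the mode, so the first
    iterations drift toward the high-density region; those
    `burn_in` iterations are discarded as the transient.
    """
    rng = random.Random(seed)
    x = x0
    kept = []
    for i in range(burn_in + n_samples):
        proposal = x + rng.uniform(-1.0, 1.0)
        # log of the acceptance ratio for the normal target
        log_ratio = (x * x - proposal * proposal) / 2.0
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x = proposal
        if i >= burn_in:            # keep only post-burn-in samples
            kept.append(x)
    return kept

if __name__ == "__main__":
    samples = metropolis_normal(n_samples=20000, burn_in=2000, seed=42)
    mean = sum(samples) / len(samples)
    print(f"post-burn-in sample mean: {mean:.3f}")  # should be near 0
```

If `burn_in` were 0 here, the retained samples would include the stretch where the chain is still walking down from $x_0 = 10$, biasing any estimate computed from them; discarding the transient removes that dependence on the starting state.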