I am trying to show that if $P(X_n =0\ i.o.) = 1$ and $P(X_{n+1} = 1 | X_n=0) = c > 0$ for all $n$, then $P(X_n =1\ i.o.) = 1$.
Under what conditions is this statement true?
This seems intuitively true to me since $c$ is a fixed positive constant: if $X_n = 0$ happens infinitely many times, then surely $X_{n+1} =1$ must also happen infinitely many times.
Is there an easy way to make this rigorous? Normally I would want to use Borel-Cantelli for a question like this, but it seems a bit unclear here.
Here is a counter-example: Define: $$ \{X_n\}_{n=1}^{\infty} = \left\{ \begin{array}{ll} \{0 ,0, 0, 0, 0, 0, ...\} &\mbox{ with prob $1/3$} \\ \{0, 1, 0, 1, 0, 1, ...\} & \mbox{ with prob $1/3$}\\ \{1, 0, 1, 0, 1, 0, ...\} & \mbox{ with prob $1/3$} \end{array} \right.$$ Regardless of the choice, there are (surely) an infinite number of zeros. Also, for every positive integer $n$ we have $$ P[X_{n+1}=1|X_n=0] = 1/2$$ But, with probability $1/3$, the sequence is the all-zero sequence.
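As a sanity check on this counter-example, the conditional probability $P[X_{n+1}=1|X_n=0]$ can be computed exactly by enumerating the three equally likely sequences (a quick sketch; the lambda encodings of the sequences are my own):

```python
from fractions import Fraction

# The three equally likely sequences from the counter-example,
# each written as a function of the (1-indexed) time n.
seqs = [
    lambda n: 0,              # 0, 0, 0, 0, ...
    lambda n: (n + 1) % 2,    # 0, 1, 0, 1, ...
    lambda n: n % 2,          # 1, 0, 1, 0, ...
]
w = Fraction(1, 3)  # each sequence has probability 1/3

def cond_prob_next_is_one(n):
    """Exact P[X_{n+1}=1 | X_n=0], by enumeration over the three sequences."""
    num = sum(w for s in seqs if s(n) == 0 and s(n + 1) == 1)
    den = sum(w for s in seqs if s(n) == 0)
    return num / den

for n in range(1, 5):
    print(n, cond_prob_next_is_one(n))  # 1/2 for every n
```

Yet the all-zero sequence (probability $1/3$) never produces a $1$, so $P[X_n=1\ i.o.] \le 2/3 < 1$.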
It would be true if we have $P[X_n=0 \: i.o.]=1$ and if there is a $c>0$ such that \begin{align} &P[X_{2}=1|X_1=0]=c\\ &P[X_{n+1}=1|X_n=0, X_1, ..., X_{n-1}]=c \quad \forall n \in \{2, 3, 4, ...\} \end{align}
This system is kind of like a 2-state Markov chain with states $0$ and $1$ and: $$ P_{01}=c, P_{00}=1-c$$ But if we are in state $1$ we do not have well-defined transition probabilities; all we know is that, with probability 1, we will eventually get back to state $0$. So we can take an "embedded chain" by skipping the steps in state $1$ until we get back to $0$, and the embedded chain is a true 2-state discrete-time homogeneous Markov chain with: $$ P_{00}=1-c, P_{01}=c, P_{10}=1, P_{11}=0$$
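A minimal simulation of this embedded chain (with an illustrative choice $c=0.3$; the function name and step count are mine) shows that it keeps returning to state $1$, with long-run occupation fraction near the stationary value $\pi_1 = c/(1+c)$ that follows from the transition matrix above:

```python
import random

def simulate_embedded_chain(c, steps, seed=0):
    """Run the embedded 2-state chain with P00 = 1-c, P01 = c,
    P10 = 1, P11 = 0, and count visits to state 1."""
    rng = random.Random(seed)
    state, ones = 0, 0
    for _ in range(steps):
        if state == 0:
            state = 1 if rng.random() < c else 0
        else:
            state = 0  # from state 1, return to 0 with probability 1
        ones += state
    return ones

c, steps = 0.3, 100_000
ones = simulate_embedded_chain(c, steps)
print(ones / steps, c / (1 + c))  # empirical vs stationary fraction of 1s
```

Since the embedded chain is irreducible on $\{0,1\}$ when $c>0$, it visits state $1$ infinitely often with probability 1, which is exactly the desired conclusion.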