Consider a countably infinite 1-D Markov chain with states numbered $0,1,2,\ldots$ . Given $0 \leq p \leq 1$, the transition probability from state $k$ to state $l$ is denoted $p_{k,l}$ and given by $$p_{0,1} = 1$$ $$p_{k,k+1} = p\;\;\forall\; k \geq 1$$ $$p_{k,k-1} = 1 - p = q\;\;\forall\; k \geq 1$$ with all other probabilities zero (i.e. a run-of-the-mill random walk with reflection).
I'm considering the question: When starting in state $0$, what is the probability (denoted $z_0$) that the random walk will visit state $0$ at any point strictly in the future?
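Before any theory, $z_0$ can be estimated by direct simulation under a finite step budget (a sketch; the helper names are my own, and the truncation means the estimate can only lower-bound the true return probability):

```python
import random

def returns_to_zero(p, max_steps, rng):
    """One run of the reflected walk from state 0: does it revisit 0
    within max_steps? (Walks that would return later are missed.)"""
    state = 0
    for _ in range(max_steps):
        if state == 0:
            state = 1              # p_{0,1} = 1: forced step right
        elif rng.random() < p:
            state += 1             # step right with probability p
        else:
            state -= 1             # step left with probability q = 1 - p
        if state == 0:
            return True
    return False

def estimate_z0(p, trials=3000, max_steps=1000, seed=0):
    """Fraction of simulated walks that revisit 0 within the budget."""
    rng = random.Random(seed)
    return sum(returns_to_zero(p, max_steps, rng) for _ in range(trials)) / trials
```

For $p = 0.3$ the estimate comes out essentially $1$; for $p = 0.8$ it settles near $0.25$.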
The literature tackles the non-reflected version of the problem in the following way: It considers the quantity $p_{0,0}^{2n}$, which is the probability that the state, starting at $0$, will be $0$ again after $2n$ steps, and proves that $$z_0 = 1 \;\;\text{if}\;\;\sum_{n \geq 1} p_{0,0}^{2n} = \infty$$ $$z_0 < 1\;\;\text{otherwise}$$ and that $$p_{0,0}^{2n} \simeq \frac{(4p(1-p))^n}{\sqrt{\pi n}}$$
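For the non-reflected walk, the $2n$-step return probability is exactly $\binom{2n}{n}(pq)^n$ (choose which $n$ of the $2n$ steps go right), so the asymptotic $\frac{(4pq)^n}{\sqrt{\pi n}}$ can be sanity-checked numerically (a sketch):

```python
from math import comb, pi, sqrt

def p00_exact(n, p):
    """Exact probability of being back at 0 after 2n steps of the
    unreflected walk: choose which n of the 2n steps go right."""
    return comb(2 * n, n) * (p * (1 - p)) ** n

def p00_asymptotic(n, p):
    """Stirling approximation (4 p q)^n / sqrt(pi * n)."""
    return (4 * p * (1 - p)) ** n / sqrt(pi * n)
```

The ratio of exact to asymptotic tends to $1$; note that at $p = \frac12$ the terms decay like $1/\sqrt{\pi n}$ (divergent sum), while for $p \neq \frac12$ the geometric factor $(4pq)^n < 1$ makes the sum converge.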
The conclusion is that $z_0$ is $1$ iff $p = \frac{1}{2}$ and $z_0 < 1$ otherwise. The proof is the same for the reflected version, with the exception that $z_0 = 1$ iff $p \leq \frac{1}{2}$. Everywhere I've looked, this is the canonically accepted answer, and I understand it.
However, when I first examined the problem, I took a different tack. I considered the set of variables $\{z_k\}$ where $z_k$ is the probability that the random walk will visit state $0$ at any point strictly in the future starting in state $k$. I then reasoned that for all $k \geq 2$ $$z_k = qz_{k-1} + pz_{k+1}$$ and $$z_0 = z_1 = q + pz_{2}$$
That is, intuitively, the odds of getting back to zero from state $k$ are equal to the odds of going back one step and getting back to zero from there, plus the odds of going forward one step and getting back to zero from there.
This infinite set of equations, combined with the restriction that $0 \leq z_k \leq 1\;\forall\;k$, can be solved using conventional matrix power series. Every approach I know of leaves at least one free parameter, which I chose to be $z_0$; for $p \neq \frac{1}{2}$ I found the family of solutions $$z_k = \frac{p - p^{1-k}q^{k}}{2p - 1}z_0 + \frac{p + p^{1-k}q^{k} - 1}{2p - 1}\;\;\forall\;k \geq 1$$
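The closed form can be checked directly against the system (a quick numerical verification for $k \geq 1$; note $p^{1-k}q^k = p(q/p)^k$):

```python
def z(k, p, z0):
    """Candidate closed form for k >= 1 and p != 1/2:
    z_k = ((p - p^(1-k) q^k) z0 + p + p^(1-k) q^k - 1) / (2p - 1)."""
    q = 1 - p
    t = p ** (1 - k) * q ** k          # equals p * (q/p)^k
    return ((p - t) * z0 + (p + t - 1)) / (2 * p - 1)
```

For any test values of $p$ and $z_0$, the formula satisfies $z_1 = z_0$, $z_0 = q + pz_2$, and the recurrence $z_k = qz_{k-1} + pz_{k+1}$ for $k \geq 2$.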
For $p < \frac{1}{2}$, the condition $0 \leq z_k \leq 1$ requires $z_0 = 1$ (otherwise the $p^{1-k}q^k$ term grows without bound), and plugging this into the equation yields $z_k = 1\;\forall\;k$ as expected. The case $p = \frac{1}{2}$ behaves the same way: the general solution is then $z_k = A + Bk$, boundedness forces $B = 0$, and $z_1 = q + pz_2$ then forces $A = 1$.
However, for $p > \frac{1}{2}$, the story is different. Any value of $z_0$ in the interval $[\frac{q}{p}, 1]$ satisfies all equations and all constraints. This includes $z_0 = 1$, which again yields $z_k = 1\;\forall\;k$. Indeed we can easily see why this is the case by substituting $z_k = 1$ into the equations and noting that they all reduce to $1 = q + p$.
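The admissible range of $z_0$ can be probed numerically by checking the closed form against $0 \leq z_k \leq 1$ over many $k$ (a sketch; `feasible` is my own helper, and the cutoff `kmax` is a finite proxy for the full constraint):

```python
def feasible(p, z0, kmax=500):
    """True if the closed-form z_k stays in [0, 1] for k = 1..kmax."""
    q = 1 - p
    for k in range(1, kmax + 1):
        t = p ** (1 - k) * q ** k      # p * (q/p)^k, -> 0 when p > 1/2
        zk = ((p - t) * z0 + (p + t - 1)) / (2 * p - 1)
        if not (-1e-9 <= zk <= 1 + 1e-9):
            return False
    return True
```

For $p = 0.8$ this reports the admissible values as exactly $[0.25, 1] = [\frac{q}{p}, 1]$. In the standard theory the true hitting probabilities are the *minimal* nonnegative solution of such a system, which picks out the lower endpoint $z_0 = \frac{q}{p}$.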
This is problematic both because i) it means we can draw no conclusions about whether states in the chain are transient (versus recurrent) when $p > \frac{1}{2}$, and ii) it contradicts the accepted proof that all states are necessarily transient.
I believe the difference in the two approaches is that the former assumes the number of steps between leaving and returning must be finite (albeit unbounded), whereas the latter has no such requirement. This confounds me for two reasons:
I don't understand how the notion of the step count being infinite or finite can reliably be factored into an analysis of limiting behaviour here. To me, it's like supposing, "You have an infinite amount of time to visit states, and a state takes an infinite amount of time (on average) to revisit once you leave it. If you leave a state, will you always come back to it?" And the answer is: How on Earth should I know? It's infinity divided by infinity, which has no defined value in and of itself. It requires some additional constraints or logic to yield a definite solution, such as an added axiom positing "The answer is just 'no'. Live with it."
The second formulation is (as far as I know) a complete representation of everything that one needs to know about an infinite Markov chain--initial state, transition probabilities, and bounds on probabilities--but is apparently incapable of answering the simple question "Will this state ever be revisited?" without the aforementioned axiom.
I don't know where this axiom comes from, and I haven't been able to find any resource that explains its origin.
Formally, I'll state the axiom this way: "In an infinite Markov chain where all states communicate, starting in state $0$ and given an infinite amount of time, it's possible (i.e. there exists a nonzero probability) that state $0$ will never, at any future step, be visited again when the expected number of steps between visits can be shown to be infinite."
My three questions are these:
Is my analysis thus far faulty in some way? Am I missing or mistaking something critical?
In the given Markov chain, can it be proved that $z_0 < 1$ if $p > \frac{1}{2}$ without assuming the axiom is true?
Is this axiom provable in terms of more fundamental axioms?
It's not an axiom, and it's not true. Simple one-dimensional random walk is recurrent (i.e. the state $0$ will almost certainly be visited again), but the expected return time is infinite; this combination is known as null recurrence.
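This can be seen in simulation (a sketch, with my own helper names): at $p = \frac12$ virtually every walk returns, yet the sample average of truncated return times keeps growing as the step budget grows, consistent with a return that is almost sure but has infinite expected waiting time:

```python
import random

def return_time(p, max_steps, rng):
    """Steps until the reflected walk first revisits 0, truncated at
    max_steps if it has not returned by then."""
    state, steps = 0, 0
    while steps < max_steps:
        if state == 0:
            state = 1              # p_{0,1} = 1: forced step right
        elif rng.random() < p:
            state += 1
        else:
            state -= 1
        steps += 1
        if state == 0:
            return steps
    return max_steps

def truncated_mean_return(p, max_steps, trials=3000, seed=1):
    """Average of return times, each capped at max_steps."""
    rng = random.Random(seed)
    return sum(return_time(p, max_steps, rng) for _ in range(trials)) / trials
```

At $p = \frac12$ the truncated mean scales roughly like $\sqrt{\text{max\_steps}}$, so it never stabilizes: the true mean is infinite even though the return probability is $1$.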