Exercise: Let $\xi_1, \xi_2, \ldots$ be i.i.d. random variables with values in $\{-1,1\}$, taking each value with probability $1/2$. Let $S_0=0$, $S_n=\xi_1+\cdots+\xi_n$, and $X_n= \max \left\{ S_m: 0 \leq m \leq n \right\}$. Show that $X_n$ is not a Markov chain.
Durrett's Solution: $X_n$ is not a Markov chain, since $X_{n+1}=X_n+1$ with probability $1/2$ when $X_n=S_n$ and with probability $0$ when $X_n>S_n$.
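Durrett's observation can at least be checked pointwise by brute force. Here is a minimal sketch (my own, not from Durrett, with Python as an assumed language): it enumerates every sign sequence of length $6$ and confirms that the running maximum increases exactly when the walk currently sits at its maximum and the next step is $+1$, which is where the $1/2$ versus $0$ dichotomy comes from.

```python
from itertools import product

# Sketch (not from Durrett): verify the dichotomy on all 2^6 sign sequences.
# The running max X_n can increase only when S_n = X_n, and then it increases
# exactly when the next step xi_{n+1} = +1 (which has probability 1/2).
all_ok = True
for xs in product((-1, 1), repeat=6):
    S = X = 0  # S_0 = X_0 = 0
    for x in xs:
        S_next = S + x
        X_next = max(X, S_next)
        # X_{n+1} = X_n + 1 holds iff S_n = X_n and the step is +1
        all_ok &= (X_next == X + 1) == (S == X and x == 1)
        S, X = S_next, X_next
print(all_ok)
```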
I am a bit distressed about these problems in Durrett. As someone coming from a couple of courses in measure theory, the solutions do not feel rigorous to me, and I am not convinced that this is a valid proof.
For $X_n$ to be a Markov chain, the law conditioned on the past must equal the law conditioned on the present. Therefore, to show that $X_n$ is not a Markov chain, we must exhibit some dependence on the past.
Let $\mathcal{F}_n$ be the natural filtration of $\{\xi_n: n>0\}$, which represents the information we know at each time step. Then I somehow need to show that we cannot write $P(X_{n+1}\in B \mid \mathcal{F}_n) = p(X_n, B)$ for any transition kernel $p$. How does Durrett's solution show this?
If it were a Markov chain, then $P(X_{n+1}=i_{n+1}|X_1=i_1,X_2=i_2,\cdots, X_n=i_n)$ would depend only on $i_n$ and $i_{n+1}$ (and not on $i_1,i_2,\cdots,i_{n-1}$). Since a new maximum can be set only when the walk sits at its current maximum, for $i_{n+1}=i_n+1$ this conditional probability equals $\frac12\, P(S_n=i_n \mid X_1=i_1,\cdots,X_n=i_n)$, and that factor depends on the whole history. For instance, the history $(X_1,X_2,X_3,X_4)=(1,1,1,2)$ forces the path $(\xi_1,\xi_2,\xi_3,\xi_4)=(1,-1,1,1)$, so $S_4=2=X_4$ and $P(X_5=3\mid\cdot)=\frac12$; whereas $(X_1,X_2,X_3,X_4)=(1,2,2,2)$ forces $(\xi_1,\xi_2,\xi_3)=(1,1,-1)$ but leaves $\xi_4$ free, so $S_4\in\{0,2\}$ with equal probability and $P(X_5=3\mid\cdot)=\frac14$. Two histories with the same present value $i_4=2$ give different transition probabilities, leading to a contradiction.
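The point that two histories with the same present value can yield different transition probabilities can be confirmed by enumeration. A minimal sketch in Python (the function names and the choice of histories are mine, not from the answer): condition on two length-4 histories of the running maximum that share the present value $X_4=2$ and compare the chance of a new maximum at time 5.

```python
from itertools import product

def max_path(xs):
    """Running maximum X_1..X_n of the walk S_m = x_1+...+x_m (with S_0 = 0)."""
    s, m, out = 0, 0, []
    for x in xs:
        s += x
        m = max(m, s)
        out.append(m)
    return out

def cond_prob(history, target):
    """P(X_{n+1} = target | (X_1,...,X_n) = history), by enumerating all
    equally likely sign sequences of length n+1."""
    n = len(history)
    hits = total = 0
    for xs in product((-1, 1), repeat=n + 1):
        X = max_path(xs)
        if X[:n] == list(history):
            total += 1
            hits += (X[n] == target)
    return hits / total

# Two histories with the same present value X_4 = 2 but different pasts
# (my own illustrative choice):
p_a = cond_prob((1, 1, 1, 2), 3)  # history forces S_4 = 2 = X_4
p_b = cond_prob((1, 2, 2, 2), 3)  # S_4 is 0 or 2 with equal probability
print(p_a, p_b)
```

If the probabilities printed agree, they would equal a common $p(2,3)$; since they differ, no transition kernel $p(X_n, B)$ can exist.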