Let $(X_n)_n$ be a sequence of independent Bernoulli$(p)$ random variables, i.e., $P(X_n = 1) = p$ and $P(X_n = 0) = 1 - p$. Does $$\lim_{n \to \infty}\left|\frac{1}{n}\sum_{k = 1}^n(-1)^{\sum_{j = 1}^{k - 1}X_j}(1 - X_k)\right|$$ exist almost surely? If so, what is it?
Writing $Y_k = (-1)^{\sum_{j = 1}^{k - 1} X_j}(1 - X_k)$ for the $k$-th summand, I can see that $Y_k$ equals $1$, $0$, or $-1$ with probability $(1 - p) \cdot \frac{1 + (1 - 2p)^{k - 1}}{2}$, $p$, or $(1 - p) \cdot \frac{1 - (1 - 2p)^{k - 1}}{2}$, respectively. (Whether $Y_k$ is $1$ or $-1$ depends on whether the Binomial$(k - 1, p)$ random variable $\sum_{j = 1}^{k - 1} X_j$ is even or odd, the probability of which is derived here.) However, the summands are not independent: for example, $Y_k = -1$ forces $\sum_{j = 1}^{k} X_j$ to be odd, so $$P(Y_{k + 1} = 1 \mid Y_k = -1) = 0 \neq P(Y_{k + 1} = 1).$$
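As a sanity check on that marginal distribution, here is a quick Monte Carlo sketch (the function name `y_dist` and the parameter choices are mine, purely for illustration) that compares the empirical frequencies of $Y_k \in \{1, 0, -1\}$ against the formulas above:

```python
import random

def y_dist(p, k, trials=200_000, seed=0):
    """Empirical distribution of Y_k = (-1)^{S_{k-1}} * (1 - X_k)."""
    rng = random.Random(seed)
    counts = {1: 0, 0: 0, -1: 0}
    for _ in range(trials):
        # S_{k-1} ~ Binomial(k-1, p), built from k-1 Bernoulli(p) draws
        s = sum(rng.random() < p for _ in range(k - 1))
        x_k = rng.random() < p          # X_k ~ Bernoulli(p)
        counts[(-1) ** s * (1 - x_k)] += 1
    return {v: c / trials for v, c in counts.items()}

p, k = 0.3, 5
emp = y_dist(p, k)
even = (1 + (1 - 2 * p) ** (k - 1)) / 2  # P(S_{k-1} is even)
print(emp[1], (1 - p) * even)            # P(Y_k = 1)
print(emp[0], p)                         # P(Y_k = 0)
print(emp[-1], (1 - p) * (1 - even))     # P(Y_k = -1)
```

Each printed pair (empirical vs. theoretical) should agree to a couple of decimal places.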
I have no idea how to approach something like this, so any guidance would be appreciated.
As in my comments, the easiest way is to model this as a two-state discrete-time Markov chain (DTMC) whose states $\{0,1\}$ record the parity (even/odd) of $\sum_{j=1}^{k-1}X_j$, with a reward attached to each transition.
The chain flips state with probability $p$ (whenever $X_k = 1$) and holds with probability $1 - p$, so it is irreducible and aperiodic whenever $p\in (0,1)$. By the ergodic theorem for Markov chains, the time-average reward is $$ \lim_{n \to \infty}\frac{1}{n}\sum_{k = 1}^n(-1)^{\sum_{j = 1}^{k - 1}X_j}(1 - X_k) =\pi_0E[R_0] + \pi_1E[R_1] \quad \mbox{almost surely,} $$ where $(\pi_0, \pi_1)$ is the stationary distribution, $E[R_0]$ is the expected one-step reward given we start in state 0, and $E[R_1]$ is the expected one-step reward given we start in state 1. Since this limit exists almost surely, the limit of its absolute value exists as well. The time-average reward works out to a surprisingly simple function of $p \in (0,1)$.
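If you want to see where the time average settles before working out $\pi_0, \pi_1, E[R_0], E[R_1]$ by hand, here is a one-path simulation sketch (the function name `time_average` is mine). It exploits the observation above: the sign $(-1)^{\sum_{j=1}^{k-1} X_j}$ only flips when $X_k = 1$, so we can track it with a single variable instead of recomputing the sum:

```python
import random

def time_average(p, n, seed=0):
    """Simulate (1/n) * sum_{k=1}^n (-1)^{S_{k-1}} (1 - X_k) along one sample path."""
    rng = random.Random(seed)
    sign, total = 1, 0              # sign = (-1)^{S_{k-1}}, starting from S_0 = 0
    for _ in range(n):
        x = rng.random() < p        # X_k ~ Bernoulli(p)
        total += sign * (1 - x)     # reward earned in this slot
        if x:
            sign = -sign            # X_k = 1 flips the parity of the running sum
    return total / n

for p in (0.1, 0.5, 0.9):
    print(p, time_average(p, n=1_000_000))
```

Running it for a few values of $p$ should make the closed-form answer easy to guess.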