Question: What is the stationary probability distribution $\pi(x)$ for the following Markov Chain?
Definition: An irreducible Markov chain with countably infinite state space is positive recurrent iff we can find a function $\pi: S \to [0,1]$ that satisfies
i.) $0 \leq \pi(x) \leq 1$ for all $x\in S$
ii.) \begin{equation*} \sum_{x \in S}\pi(x) = 1 \end{equation*}
iii.) \begin{equation*} \pi(x) = \sum_{y\in S}\pi(y)p(y,x) \end{equation*} for all $x\in S$, where $p: S\times S \to [0,1]$ gives the transition probabilities for the Markov chain.
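As a quick sanity check of conditions (i)–(iii) on a small example (a sketch I am adding, not from the source), consider a hypothetical two-state chain with $p(0,1)=a$ and $p(1,0)=b$; its stationary distribution is $\pi = \left(\frac{b}{a+b}, \frac{a}{a+b}\right)$, and all three conditions can be verified directly:

```python
# Hypothetical two-state example (not the chain from the question):
# p(0,1) = a, p(0,0) = 1-a, p(1,0) = b, p(1,1) = 1-b.
a, b = 0.3, 0.2
P = [[1 - a, a],
     [b, 1 - b]]
pi = [b / (a + b), a / (a + b)]  # claimed stationary distribution

# Condition (i): each value lies in [0, 1].
assert all(0 <= x <= 1 for x in pi)
# Condition (ii): the values sum to 1.
assert abs(sum(pi) - 1) < 1e-12
# Condition (iii): pi(x) = sum_y pi(y) p(y, x) for every state x.
for x in range(2):
    assert abs(pi[x] - sum(pi[y] * P[y][x] for y in range(2))) < 1e-12
```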
Markov Chain: Let $\{X_{n}\}_{n=0}^{\infty}$ be the Markov chain on state space $S=\mathbb{Z}^{+}\cup\{0\}=\{0,1,2,\ldots\}$ that is defined by transition probabilities \begin{equation*} \begin{cases} p(m,m+2) = p, & \text{for } m\in S \text{ with } m>0\\ p(m,m-1) = 1-p, & \text{for } m\in S \text{ with } m>0\\ p(0,2) = p, & \\ p(0,0) = 1-p, & \\ p(m_{1},m_{2}) = 0, & \text{for } m_{1},m_{2}\in S \text{ in all other cases}\\ \end{cases} \end{equation*} where $p\in(0,1)$ is a fixed parameter of the process.
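For concreteness, the transition rule above can be transcribed case by case (a sketch of my own; the function name `p_trans` is mine):

```python
def p_trans(m1: int, m2: int, p: float = 1 / 7) -> float:
    """Transition probability p(m1, m2), transcribed from the case definition."""
    if m1 > 0 and m2 == m1 + 2:
        return p
    if m1 > 0 and m2 == m1 - 1:
        return 1 - p
    if m1 == 0 and m2 == 2:
        return p
    if m1 == 0 and m2 == 0:
        return 1 - p
    return 0.0  # all other transitions are impossible

# Each row of the transition matrix must sum to 1; checking targets up to
# m + 2 suffices, since p(m, .) is supported on {m-1, m+2} or {0, 2}.
for m in range(10):
    assert abs(sum(p_trans(m, m2) for m2 in range(m + 3)) - 1) < 1e-12
```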
We require $p\neq 0$ and $p\neq 1$ because in either of these extreme cases we would no longer be working with an irreducible Markov chain.
Source: Problem 2.4 from Introduction to Stochastic Processes, Second Edition by Gregory F. Lawler asks when this Markov chain is transient, which is determined by finding $\alpha(x)$; that part I can do. The chain is positive recurrent for $p<1/3$, transient for $p>1/3$, and null recurrent for $p=1/3$.
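The threshold $p=1/3$ matches the usual drift heuristic (a remark of my own, not from the source): away from the boundary, a step moves up by $2$ with probability $p$ and down by $1$ with probability $1-p$, so
\begin{equation*}
\mathbb{E}\left[X_{n+1}-X_{n}\mid X_{n}=m\right] = 2p - (1-p) = 3p - 1, \qquad m > 0,
\end{equation*}
which is negative, zero, or positive according as $p<1/3$, $p=1/3$, or $p>1/3$.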
My problem arises when calculating $\pi(x)$: I am convinced that I can find several functions $\pi(x)$ that satisfy the definition. If someone can show me why there is a unique solution for $\pi(x)$ in the specific case of $p=1/7$, which makes things nice, I would consider that explanation a satisfactory solution for my purposes.
The other answer gives an explicit way to find $\pi$, but I just want to reiterate the argument that specifically shows why such a $\pi$ is unique in this special case (since this was asked).

Note that any stationary distribution satisfies $\pi(k) = (1-p)\pi(k+1) + p\pi(k-2)$ for $k \geq 2$, simply by applying condition (iii) at $x=k$: the only states that transition into $k$ are $k+1$ (with probability $1-p$) and $k-2$ (with probability $p$). In other words, $\pi$ satisfies the recursion
\begin{equation*} \pi(k+1) = \frac{1}{1-p}\pi(k) - \frac{p}{1-p}\pi(k-2), \qquad k \geq 2. \end{equation*}
This implies that if $\pi$ exists, then $\pi$ is necessarily determined by $\pi(0)$, $\pi(1)$, and $\pi(2)$, so it suffices to show that those three values are uniquely determined.

Applying (iii) at $x=0$ gives $\pi(0) = (1-p)\pi(1)+(1-p)\pi(0)$, which shows that $\pi(1)$ is determined by $\pi(0)$ (explicitly, $\pi(1) = \frac{p}{1-p}\pi(0)$). Applying (iii) at $x=1$ gives $\pi(1) = (1-p)\pi(2)$, so $\pi(2)$ is determined by $\pi(1)$, and hence by $\pi(0)$.

We have shown that $\pi(0)$ is the only free parameter. But $\pi$ depends linearly on $\pi(0)$, so rescaling $\pi(0)$ rescales the entire sum in (ii); since that sum must equal $1$, the value of $\pi(0)$ is uniquely determined as well.
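To see this concretely for $p=1/7$, here is a numerical sketch (my own verification, not part of the original answer). Trying $\pi(k)=\lambda^k$ in the recursion gives the characteristic equation $(1-p)\lambda^3 - \lambda^2 + p = 0$, whose roots at $p=1/7$ work out to $1$, $1/2$, and $-1/3$; the root $\lambda=1$ cannot appear in a summable solution, and matching the boundary values and normalizing leads (by my own computation) to $\pi(k) = \frac{2}{5}\left(\frac{1}{2}\right)^k + \frac{4}{15}\left(-\frac{1}{3}\right)^k$. The code builds $\pi$ from $\pi(0)=1$ exactly as in the argument above and checks it against that closed form:

```python
# Sketch for p = 1/7 (my own check, not from the source answer).
p = 1 / 7
N = 120  # truncation level; pi(k) decays like (1/2)^k, so this is ample

pi = [0.0] * N
pi[0] = 1.0                  # the single free parameter, before normalization
pi[1] = pi[0] * p / (1 - p)  # from condition (iii) at x = 0
pi[2] = pi[1] / (1 - p)      # from condition (iii) at x = 1
for k in range(2, N - 1):    # the recursion from condition (iii) at x = k
    pi[k + 1] = pi[k] / (1 - p) - p / (1 - p) * pi[k - 2]

total = sum(pi)
pi = [x / total for x in pi]  # impose condition (ii)

# Compare with the closed form obtained from the characteristic roots
# 1/2 and -1/3 of (1-p)L^3 - L^2 + p = 0 at p = 1/7.
for k in range(20):
    closed = (2 / 5) * (1 / 2) ** k + (4 / 15) * (-1 / 3) ** k
    assert abs(pi[k] - closed) < 1e-9

print(pi[0])  # about 2/3
```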
Using more advanced methods, you can prove the following
Theorem: If $X$ is an irreducible Markov Chain on a countable state space $S$, then stationary distributions exist iff the return time of some (and hence all) points has finite expectation. In this case the stationary distribution is unique.
Irreducibility is needed here because for a reducible chain we can "split up" the chain into its recurrence classes, consider invariant measures for the "restricted" chains associated to each of those classes, and even take convex combinations of those. So there can be uncountably many invariant measures for reducible chains in general. An extreme example of this is the constant chain $p(x,y)=1_{[x=y]}$, for which every probability distribution on $S$ is stationary.
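For instance (a quick sketch of my own), on a finite state space the constant chain makes every probability vector stationary, so uniqueness fails as badly as possible:

```python
# The constant chain p(x, y) = 1 if x == y else 0 on a 4-state space:
# every probability vector is stationary, so there are uncountably many.
n = 4
P = [[1.0 if x == y else 0.0 for y in range(n)] for x in range(n)]

for pi in ([1.0, 0.0, 0.0, 0.0],       # point mass at state 0
           [0.25, 0.25, 0.25, 0.25],   # uniform distribution
           [0.5, 0.2, 0.2, 0.1]):      # an arbitrary convex combination
    # Condition (iii) holds for each candidate distribution.
    for x in range(n):
        assert abs(pi[x] - sum(pi[y] * P[y][x] for y in range(n))) < 1e-12
```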