Let $(X_n)_{n\geq0}$ be a Markov chain on $\mathbb{N}$ with the following transition probabilities: $$P(k, k+1) = p_k = 1-P(k, k-1), \quad k \geq 1, \quad p_k \in (0,1),$$ $$P(0,1) = p_0 := 1.$$ Let $q_k = 1-p_k$. For which functions $f : \mathbb{N} \rightarrow \mathbb{R}$ is the process $(f(X_n))_{n\geq0}$ a martingale (with respect to the natural filtration)?
My reasoning: knowing $X_{n-1}$, we can move either to the right or to the left: $$\mathbb{E}[f(X_n)|\mathcal{F}_{n-1}]=f(X_{n-1}+1)p_{n-1}+f(X_{n-1}-1)(1-p_{n-1})$$ At the same time we want $(f(X_n))_{n\geq0}$ to be a martingale: $$\mathbb{E}[f(X_n)|\mathcal{F}_{n-1}] = f(X_{n-1})$$ Thus we want to solve the functional equation: $\forall k \in \mathbb{N} , \forall p \in (0,1)$ $$f(k+1)p+f(k-1)(1-p)=f(k)$$ Am I right? If so, it can be solved as $$f(k)=C_1+C_2\left(\frac{1-p}{p}\right)^k, \quad p\neq\frac{1}{2}$$ $$f(k) = C_1+C_2k, \quad p=\frac{1}{2}$$ And we don't have any conditions to determine $C_1, C_2$. So what is the real solution?
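As a numerical sanity check of the constant-$p$ candidate solution (the values of $p$, $C_1$, $C_2$ below are arbitrary choices, not given by the problem):

```python
# Check that f(k) = C1 + C2*((1-p)/p)^k satisfies the martingale identity
# f(k+1)*p + f(k-1)*(1-p) = f(k) for a constant p != 1/2.
p = 0.3                  # arbitrary constant value p_k = p
C1, C2 = 2.0, -1.5       # arbitrary constants

def f(k):
    return C1 + C2 * ((1 - p) / p) ** k

for k in range(1, 15):
    lhs = f(k + 1) * p + f(k - 1) * (1 - p)
    assert abs(lhs - f(k)) < 1e-9 * max(1.0, abs(f(k)))

print("identity holds for k = 1..14")
```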
As far as the solution goes, your work is correct up to the functional equation (except for the expression after "we can move either to the right or to the left", where each $p_{n-1}$ should be $p_{X_{n-1}}$ : you corrected this in a comment).
Then your solution seems to take $p_k = p$ for all $k$. This is perhaps the most natural special case.
Otherwise, rearranging the functional equation gives $f(i+1) =\frac{f(i) - f(i-1)q_i}{p_i}$, and from this we can naturally build towards a solution in the following way (which also brings in some linear algebra). Let $v_n = [f(n),f(n+1)]$ (a row vector) and observe that $$ v_{n+1}^T = \begin{pmatrix}0 & 1 \\ -\frac {q_{n+1}}{p_{n+1}} &\frac 1{p_{n+1}}\end{pmatrix}v_{n}^T $$
which gives, by induction, the following: $$ v_{n+1}^T = \left[\prod_{i=0}^n \begin{pmatrix}0 & 1 \\ -\frac {q_{i+1}}{p_{i+1}} &\frac 1{p_{i+1}}\end{pmatrix} \right] v_0^T, $$ where the (non-commuting) matrix product is ordered with larger indices on the left,
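A quick sketch (the $p_i$ values and $f(0),f(1)$ below are arbitrary choices) confirming that iterating the $2\times2$ matrices reproduces the scalar recursion $f(i+1) = \frac{f(i) - q_i f(i-1)}{p_i}$:

```python
import random

random.seed(0)
n = 24
p = [None] + [random.uniform(0.2, 0.8) for _ in range(n + 1)]  # arbitrary p_1..p_{n+1}
q = [None] + [1 - x for x in p[1:]]

# direct scalar recursion: f(i+1) = (f(i) - q_i f(i-1)) / p_i
f = [1.0, 2.0]                       # f(0), f(1), chosen arbitrarily
for i in range(1, n + 1):
    f.append((f[i] - q[i] * f[i - 1]) / p[i])

# matrix iteration: v_{i}^T = M_i v_{i-1}^T, starting from v_0 = (f(0), f(1))
v = (f[0], f[1])
for i in range(1, n + 1):
    v = (v[1], (-q[i] / p[i]) * v[0] + (1.0 / p[i]) * v[1])

# after n steps, v should equal (f(n), f(n+1))
assert abs(v[0] - f[n]) < 1e-6 * (abs(f[n]) + 1)
assert abs(v[1] - f[n + 1]) < 1e-6 * (abs(f[n + 1]) + 1)
print("matrix product matches the recursion")
```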
for all $n \geq 0$. An expression may be written for $n \leq 0$ in a very similar fashion.
As each of the matrices in the expansion is invertible, the map from $v_n^T$ to $v_{n+1}^T$ is a bijection. Therefore, every $v_{n+1}$ is determined by $v_0$; and because the maps are bijections, the reverse is also true: any single $v_i$ is enough to determine $f$. Finally, even if two prescribed values of $f$ are far apart, say $f(1)$ and $f(-4)$ are fixed, one can use the elongated vector $v'_n = [f(n-4),f(n-3),f(n-2),f(n-1),f(n),f(n+1)]$, for example, to see what happens. The one-step matrices relating consecutive $v'_n$ are no longer invertible (each one forgets the oldest entry, so it has rank $5$), but the vectors $v'_n$ that actually arise from solutions form a two-dimensional space on which the dynamics remains bijective, and one can check that prescribing $f$ at any two distinct points, in particular fixing $f(1)$ and $f(-4)$, ends up fixing $f$ everywhere.
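The claim can be illustrated concretely (all numeric values below are arbitrary choices): build two basis solutions on a window of indices around $0$, then solve a $2\times2$ linear system so that their combination matches prescribed values at $-4$ and $1$.

```python
import random

random.seed(1)
N = 10
idx = list(range(-N, N + 1))
p = {k: random.uniform(0.2, 0.8) for k in idx}   # hypothetical p_k in (0,1)
q = {k: 1 - p[k] for k in idx}

def extend(f0, f1):
    """Solution with f(0)=f0, f(1)=f1, via forward and backward recursion."""
    f = {0: f0, 1: f1}
    for i in range(1, N):                 # forward: f(i+1) = (f(i) - q_i f(i-1)) / p_i
        f[i + 1] = (f[i] - q[i] * f[i - 1]) / p[i]
    for i in range(0, -N, -1):            # backward: f(i-1) = (f(i) - p_i f(i+1)) / q_i
        f[i - 1] = (f[i] - p[i] * f[i + 1]) / q[i]
    return f

u = extend(1.0, 0.0)                      # basis solution with (f(0), f(1)) = (1, 0)
w = extend(0.0, 1.0)                      # basis solution with (f(0), f(1)) = (0, 1)

target = {-4: 3.0, 1: -2.0}               # prescribed values, chosen arbitrarily
# solve a*u + b*w = target at the two points, by Cramer's rule
det = u[-4] * w[1] - u[1] * w[-4]
a = (target[-4] * w[1] - target[1] * w[-4]) / det
b = (u[-4] * target[1] - u[1] * target[-4]) / det
f = {k: a * u[k] + b * w[k] for k in idx}

assert abs(f[-4] - 3.0) < 1e-9 and abs(f[1] - (-2.0)) < 1e-9
# the combination is still a solution of the functional equation everywhere
for i in range(-N + 1, N):
    scale = abs(a * u[i]) + abs(b * w[i]) + 1.0
    assert abs(p[i] * f[i + 1] + q[i] * f[i - 1] - f[i]) < 1e-9 * scale
```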
That still doesn't quite explain why exactly two constants are involved, but a slightly more involved notion can: that of a vector space.
Indeed, let $\mathcal F$ be the set of all (real valued) $f$ such that $f(X_n)$ is a martingale (for the natural filtration). Then $\mathcal F$ is closed under addition and scalar multiplication, because conditional expectation respects these operations. Therefore $\mathcal F$ is a vector space over $\mathbb R$.
Define the maps $e_k : \mathcal F \to \mathbb R$ given by $e_k(f)= f(k)$. Then the $e_k$ are linear transformations. Now $\ker e_0 \cap \ker e_1 = \{0\}$, because an easy inductive argument will tell you that if $f \in \mathcal F$ and $f(0)=f(1)=0$, then $f=0$ everywhere. So $\dim \mathcal F$ is at most $2$, because $(e_0,e_1) : \mathcal F \to \mathbb R^2$ is injective. However, it is equal to $2$: for any $(a,b) \in \mathbb R^2$, the recursion $f(i+1) = \frac{f(i) - q_i f(i-1)}{p_i}$ builds an element of $\mathcal F$ with $f(0)=a$ and $f(1)=b$, so $(e_0,e_1)$ is also surjective, hence an isomorphism.
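Both halves of this argument can be sketched numerically (the $p_i$ values and the target pair below are arbitrary): the recursion attains any pair $(f(0),f(1))$, showing surjectivity, and the pair $(0,0)$ forces the zero function, showing injectivity.

```python
import random

random.seed(2)
n = 15
p = [None] + [random.uniform(0.3, 0.7) for _ in range(n)]  # arbitrary p_1..p_{n-1}

def build(a, b):
    """Run f(i+1) = (f(i) - q_i f(i-1)) / p_i from (f(0), f(1)) = (a, b)."""
    f = [a, b]
    for i in range(1, n):
        f.append((f[i] - (1 - p[i]) * f[i - 1]) / p[i])
    return f

f = build(-1.0, 4.0)
assert (f[0], f[1]) == (-1.0, 4.0)             # surjectivity: any pair is attained
assert all(x == 0.0 for x in build(0.0, 0.0))  # injectivity: (0,0) forces f = 0
```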
It follows that if $f_0,f_1$ is a basis of $\mathcal F$, then every function in $\mathcal F$ will be given by $C_0f_0+C_1f_1$ for constants $C_0,C_1$.
This is indeed the case: in the special case $p_k=\frac 12$ for all $k$, the functions $f_0(k) = 1$ and $f_1(k) = k$ form a basis of $\mathcal F$, and the way you've written your solution reflects this.
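A one-line check of the symmetric case (the combination coefficients below are arbitrary):

```python
# For p_k = 1/2 the functional equation reads f(k+1)/2 + f(k-1)/2 = f(k);
# constants, the identity map, and any combination C0 + C1*k satisfy it.
def harmonic_ok(f, lo=-20, hi=20):
    return all(abs(0.5 * f(k + 1) + 0.5 * f(k - 1) - f(k)) < 1e-12
               for k in range(lo, hi))

assert harmonic_ok(lambda k: 1.0)            # f0(k) = 1
assert harmonic_ok(lambda k: float(k))       # f1(k) = k
assert harmonic_ok(lambda k: 3.0 - 2.0 * k)  # arbitrary C0*f0 + C1*f1
```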
A function $f$ such that $f(X_n)$ is a martingale with respect to some specified filtration (usually natural) is called harmonic with respect to $X_n$. In this particular case, $X_n$ is a "random walk" on a weighted graph : the graph is $\mathbb Z$ with consecutive integers connected, and the weights are $p_i,q_i$ in the specified directions.
One can talk about recurrence or transience of a weighted (connected) graph: if $X_n$ is the random walk on this graph beginning at a point $x_0$, then the graph is said to be recurrent if $X_n$ returns to $x_0$ in finite time with probability $1$. Otherwise (when that probability is smaller than $1$), it is said to be transient.
A nice theorem is the following: if the random walk on a graph admits a non-constant, bounded harmonic function, then the graph is transient (this is a nice result, though it doesn't go the other way). This is why calculating harmonic functions on graphs is a useful task. The theorem is often used in conjunction with combinatorial tools to show the famous result that the simple random walk (with all transition probabilities equal to $\frac 1{2d}$) on $\mathbb Z^d$ for $d \geq 3$ is transient. It is recurrent for $d=1,2$, although that is shown by more standard methods.