As part of my learning of Markov processes, I am trying to understand a few examples. I would like to compute the transition probability (kernel) of the following process:
$$x_{n+1} = \alpha x_n + \beta \xi_{n+1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$$ where the $\xi_n $ are iid normal $(0,1)$.
The answer shall be :
$$ P(x,dy) = \frac{1}{\sqrt{2 \pi \beta^2}} e^{ -(y-\alpha x)^2 / (2 \beta^2) } dy \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$$
My problem is that I am used to computing $P_{ij} = P(x_{n+1} = j \mid x_n = i)$ in the discrete case. However, I am not used to the continuous case yet.
What I know is that $$P(x_n,A) = P(x_{n+1} | x_n \in A ) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$$ for Markov chains. My first step was to prove that $(1)$ defines a Markov chain (successfully).
But then I am stuck.
So my question would be: how can one use $(2)$ to prove the answer $(*)$? And would it be possible to start from the definition $P(x_n, A)$ of transition probabilities to compute $(*)$?
Any help is welcome. Thank you for reading !
$(2)$ is incorrect. In your notation, it should be $$P(x_n, A) = P(x_{n+1} \in A | x_n).$$ This notation can be confusing because it does not distinguish between the transition kernel $P$ (on the left-hand side) and the probability $P$ (on the right-hand side). Based on what you say,
you are comfortable thinking of $P_{ij}$ as a matrix. That is fine. With such a matrix, check that defining $$\mathbb{P}(X_{n+1} = j | X_n =i)=P_{ij}\tag{*}$$ implies $\mathbb{P}(X_{n+1} \in A | X_n = i) = \sum_{j\in A} P_{ij}$. Here I use $\mathbb{P}$ as a probability, to distinguish it from $P$ the matrix.
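As a quick numerical sketch of the row-sum identity above (the $3\times 3$ matrix here is made up for illustration, not taken from the question):

```python
import numpy as np

# A hypothetical 3-state transition matrix: rows sum to 1,
# and P[i, j] = P(X_{n+1} = j | X_n = i).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

def prob_next_in(P, i, A):
    """P(X_{n+1} in A | X_n = i) = sum of P[i, j] over j in A."""
    return sum(P[i, j] for j in A)

# Starting from state 0, the chance of landing in A = {1, 2}
# is the partial row sum 0.3 + 0.2 = 0.5.
prob_next_in(P, 0, {1, 2})
```

Taking $A$ to be the whole state space recovers the row-stochastic property: the sum is $1$.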
The same idea holds for continuous-space Markov chains. Once you have shown that $(1)$ defines a Markov chain, you know it suffices to determine its transition kernel. Given $x_n \in \mathbb{R}$, can you see that, conditionally on $x_n$, the variable $\alpha x_n + \beta \xi_{n+1}$ is normal with mean $\alpha x_n$ and variance $\beta^2$? If so, then for any measurable set $A$, \begin{align*} \mathbb{P}(x_{n+1} \in A | x_n) &= \mathbb{P}(\alpha x_n + \beta \xi_{n+1} \in A | x_n) \\ &= \int_A \frac{1}{\sqrt{2\pi\beta^2}}e^{-(y-\alpha x_n)^2 / 2\beta^2} dy. \end{align*} A shorthand notation for this is $$\mathbb{P}(x_{n+1} \in dy | x_n) = \frac{1}{\sqrt{2\pi\beta^2}}e^{-(y-\alpha x_n)^2 / 2\beta^2} dy.$$
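You can sanity-check this kernel by simulation: sample many draws of $\alpha x_n + \beta \xi_{n+1}$ for a fixed $x_n$ and compare the empirical probability of an interval with the $N(\alpha x_n, \beta^2)$ probability of that interval. The parameter values below ($\alpha = 0.8$, $\beta = 0.5$, $x_n = 1$) are arbitrary choices for illustration:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Illustrative parameters (not from the question itself).
alpha, beta, x_n = 0.8, 0.5, 1.0

# Draw x_{n+1} = alpha*x_n + beta*xi many times, xi ~ N(0, 1).
samples = alpha * x_n + beta * rng.standard_normal(200_000)

def normal_cdf(y, mean, sd):
    """CDF of N(mean, sd^2), via the error function."""
    return 0.5 * (1 + erf((y - mean) / (sd * sqrt(2))))

# Kernel prediction: P(x_{n+1} in (a, b) | x_n) is the
# N(alpha*x_n, beta^2) measure of the interval (a, b).
a, b = 0.5, 1.5
empirical = np.mean((samples > a) & (samples < b))
predicted = normal_cdf(b, alpha * x_n, beta) - normal_cdf(a, alpha * x_n, beta)
# empirical and predicted agree to Monte Carlo accuracy
```

The two numbers agree up to sampling noise, which is exactly the statement that the conditional law of $x_{n+1}$ given $x_n$ is $N(\alpha x_n, \beta^2)$.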
To understand this answer, it may first be a good idea to focus on why $(2)$ is wrong.
EDIT: I will use standard notation with random variables being denoted by capital letters.
To be clear, a discrete-space Markov chain is defined by the property $$\mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n, \dots, X_1 = x_1) = \mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n).\tag{**}$$ This is sometimes read as "the future (here, $X_{n+1}$), given the present ($X_n$), is independent of the past ($X_{n-1}, \dots, X_1$)". If you define $X_{n+1}$ through a transition kernel or matrix as I did in (*), then (**) will follow. However, you can compute $$\mathbb{P}(X_{n+1} = x_{n+1} | X_n = x_n)$$ for any stochastic process, not just a Markov chain.