I don't quite understand the definition of a probability kernel (or Markov kernel).
Is this correct: the reason to introduce a transition kernel is that, given a source $(X,\mathcal{A})$ and a target $(Y,\mathcal{B})$, both measurable spaces, we want to obtain a new measurable space $(X,\mathcal{B})$? There is this example on Wikipedia for a random walk:
Take $X=Y=\mathbb{Z}$ and $\mathcal A = \mathcal B = \mathcal P(\mathbb{Z})$; then the Markov kernel $\kappa$ with $$\kappa(x,B)=\frac{1}{2}\mathbf{1}_{B}(x-1)+\frac{1}{2}\mathbf{1}_{B}(x+1), \quad \forall x \in \mathbb{Z}, \quad \forall B \in \mathcal P(\mathbb{Z})$$ describes the transition rule of the random walk.
But I don't understand it. Why are $Y$ and $\mathcal{A}$ being used here? Is the measurable space $(Y,\mathcal{B})$ the next position in the random walk, with a new event from $\mathcal{B}$?
For two measurable spaces $(E,\mathcal{E})$ and $(F,\mathcal{F})$, we call a mapping $\kappa:E\times\mathcal{F}\rightarrow \mathbb{R}$ a kernel if $\kappa(x,\cdot):\mathcal{F}\rightarrow\mathbb{R}$ is a measure for every $x\in E$ and $\kappa(\cdot,B):E\rightarrow\mathbb{R}$ is $\mathcal{E}$-$\mathcal{B}(\mathbb{R})$-measurable for every $B\in\mathcal{F}$. It is called a Markov kernel if, in addition, $\kappa(x,\cdot):\mathcal{F}\rightarrow[0,1]$ is a probability measure for every $x\in E$.
Your example just says that if you are in state $x\in\mathbb{Z}$, then you jump to state $x+1$ or $x-1$, each with probability $1/2$ (since $\kappa(x,\{x+i\})=0$ for $i\in\mathbb{Z}\setminus\{-1,1\}$).
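To make this concrete, here is a small sketch (not from the original post; the function name `kappa` is my own) of the random-walk kernel, where events $B$ are represented as Python sets of integers:

```python
def kappa(x, B):
    """Probability kappa(x, B) = 1/2 * 1_B(x-1) + 1/2 * 1_B(x+1):
    the chance of landing in the event B after one step from state x."""
    # (x - 1 in B) is True/False, which Python treats as 1/0 in arithmetic.
    return 0.5 * (x - 1 in B) + 0.5 * (x + 1 in B)

# From state 0, the walk lands in {-1} or {1} with probability 1/2 each:
print(kappa(0, {-1}))     # 0.5
print(kappa(0, {1}))      # 0.5
print(kappa(0, {-1, 1}))  # 1.0 -- kappa(0, .) assigns total mass 1
print(kappa(0, {5}))      # 0.0 -- unreachable in a single step
```

Note that for each fixed $x$, the values over disjoint events add up (measure property) and the total mass is $1$ (probability measure), which is exactly what makes $\kappa$ a Markov kernel.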