There seem to be two different definitions of a regular conditional distribution, but the kernels they define are slightly different (they even have different domains). How are Definition 1 and Definition 2 below connected?
Set Up
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, $(\mathsf{E}, \mathcal{E})$ a measurable space, $X:\Omega\to\mathsf{E}$ a random variable with distribution $P_X = \mathbb{P}\circ X^{-1}$, and $\mathcal{G}\subseteq \mathcal{F}$ a sub-$\sigma$-algebra.
Definitions
Definition 1: A kernel $k:\Omega\times \mathcal{E}\to [0, 1]$ such that $k(\omega, \cdot)$ is a probability measure for every $\omega\in\Omega$ and, for every $A\in\mathcal{E}$, $k(\cdot, A)$ is a version of the conditional probability $\mathbb{E}[\mathbb{1}_{X\in A} \mid \mathcal{G}]$, i.e. $\mathbb{E}[\mathbb{1}_{X\in A} \mid \mathcal{G}](\omega) = k(\omega, A)$ almost surely.
Definition 2: A kernel $k:\mathsf{E}\times\mathcal{F}\to[0, 1]$ such that $k(x, \cdot)$ is a probability measure for every $x\in\mathsf{E}$ and $\mathbb{P}(A\cap X^{-1}(B)) = \int_B k(x, A) \, d P_X(x)$ for all $A\in\mathcal{F}$ and $B\in\mathcal{E}$.
It almost seems that the second definition conditions on the inverse of $X$, but this inverse doesn't necessarily exist.
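To see Definition 1 concretely, here is a minimal numerical sketch on a toy finite space (the setup is my own, not from the question): with $\mathcal{G}$ generated by a finite partition, $k(\omega, A)$ is just the conditional probability of $\{X \in A\}$ given the partition atom containing $\omega$, and the defining property of conditional expectation can be checked by brute force.

```python
from itertools import chain, combinations
from fractions import Fraction

# Hypothetical finite example: Omega = {0,...,5} with uniform P,
# X(w) = w % 3, and G the sigma-algebra generated by the partition
# {0,1,2} | {3,4,5}.
Omega = list(range(6))
P = {w: Fraction(1, 6) for w in Omega}
X = lambda w: w % 3
E_space = {X(w) for w in Omega}
blocks = [{0, 1, 2}, {3, 4, 5}]            # atoms generating G

def atom(w):                               # the G-atom containing w
    return next(b for b in blocks if w in b)

# Definition 1 kernel: k(w, A) = P(X in A | G)(w), computed atom by atom.
def k(w, A):
    G = atom(w)
    return sum(P[v] for v in G if X(v) in A) / sum(P[v] for v in G)

def powerset(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

# Verify the defining property of conditional expectation:
# E[1_{X in A} 1_G] = E[k(., A) 1_G] for every A in E and every G in G.
# G is generated by the two atoms, so unions of atoms exhaust G.
for A in map(set, powerset(E_space)):
    for t in powerset(blocks):
        Gset = set().union(*t) if t else set()
        lhs = sum(P[w] for w in Gset if X(w) in A)
        rhs = sum(k(w, A) * P[w] for w in Gset)
        assert lhs == rhs
print("k(., A) is a version of E[1_{X in A} | G] for every A")
```

Note the domain of `k` here: its first argument is a point of $\Omega$, matching Definition 1, whereas a Definition 2 kernel would take a point of $\mathsf{E}$.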
MAJOR EDIT
There is something really odd at play. I think there are actually several different definitions, but people seem not to distinguish between them.
- Definition 1 is definitely a definition of a regular conditional distribution of a random variable given a sub-$\sigma$-algebra, that is, $\mathbb{P}(X\in A \mid \mathcal{G})$.
- Definition 2 seems to be something like a regular conditional probability of a set given a random variable, that is, $\mathbb{P}(A \mid X = x)$.
In Definition 1, the kernel (call it $\kappa$) is a regular conditional distribution of $X$ given $\mathcal{G}$.
In Definition 2, $X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\} = \{X \in B\}$ is the inverse image of $B$ under $X$; no inverse function of $X$ is needed. Definition 2 says that $P(A \mid X) = \kappa(X, A)$, so we might call $\kappa$ a regular conditional probability given $X$. If you define $W : \Omega \to \Omega$ by $W(\omega) = \omega$ (the identity map on $(\Omega, \mathcal{F})$), then $\tilde{\kappa}(\omega, A) := \kappa(X(\omega), A)$ is a regular conditional distribution of $W$ given $\sigma(X)$: each $\tilde{\kappa}(\omega, \cdot)$ is a probability measure, and $\tilde{\kappa}(\cdot, A)$ is $\sigma(X)$-measurable because it is a measurable function of $X$. Hence Definition 2 can be seen as a special case of Definition 1.
Edit: I'll elaborate on Definition 2. Assume Definition 2 holds. Then for every $B \in \mathcal{E}$ and $A \in \mathcal{F}$, \begin{align} E(1_{A}1_{B}(X)) &= P(A \cap \{X \in B\}) \\ &= \int_{B}\kappa(x, A)P(X \in dx) \\ &= \int_{\mathsf{E}}\kappa(x, A)1_{B}(x)P(X \in dx) \\ &= \int_{\Omega}\kappa(X(\omega), A)1_B(X(\omega))P(d\omega) \\ &= E(\kappa(X, A)1_{B}(X)). \end{align} Since this holds for every $B \in \mathcal{E}$ and $\kappa(X, A)$ is $\sigma(X)$-measurable, it follows that $E(1_{A} \mid X) = \kappa(X, A)$, that is, $$P(W \in A \mid X) = \kappa(X, A).$$ So $\kappa(X(\cdot), \cdot)$ is a regular conditional distribution of $W$ given $\sigma(X)$.
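As a sanity check, the key identity in this chain, $E(1_{A}1_{B}(X)) = E(\kappa(X, A)1_{B}(X))$, can be verified by brute force on a hypothetical finite space (uniform measure on six points with $X(\omega) = \lfloor \omega/2 \rfloor$; the setup is mine, not from the question), where the Definition 2 kernel is simply $\kappa(x, A) = P(A \cap \{X = x\})/P(X = x)$:

```python
from itertools import chain, combinations
from fractions import Fraction

# Hypothetical finite setup: Omega = {0,...,5}, uniform P, X(w) = w // 2,
# so E = {0, 1, 2} and every fiber {X = x} has positive probability.
Omega = list(range(6))
P = {w: Fraction(1, 6) for w in Omega}
X = lambda w: w // 2
E_space = {X(w) for w in Omega}

def powerset(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

# Definition 2 kernel on a finite space: kappa(x, A) = P(A ∩ {X = x}) / P(X = x).
def kappa(x, A):
    num = sum(P[w] for w in A if X(w) == x)
    den = sum(P[w] for w in Omega if X(w) == x)
    return num / den

# Check E(1_A 1_B(X)) = E(kappa(X, A) 1_B(X)) for all A ⊆ Omega, B ⊆ E;
# both expectations are finite sums over omega here.
for A in map(set, powerset(Omega)):
    for B in map(set, powerset(E_space)):
        lhs = sum(P[w] for w in A if X(w) in B)
        rhs = sum(kappa(X(w), A) * P[w] for w in Omega if X(w) in B)
        assert lhs == rhs
print("E(1_A 1_B(X)) == E(kappa(X, A) 1_B(X)) for all A, B")
```

Since `kappa(X(w), A)` depends on $\omega$ only through $X(\omega)$, it is $\sigma(X)$-measurable, which is the remaining ingredient in the conclusion $E(1_A \mid X) = \kappa(X, A)$.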