I am interested in understanding conditional laws and probability kernels. In the book Foundations of Modern Probability, we have Theorem $8.5$.
Theorem $8.5$ (conditional distributions, disintegration) Let $\xi, \eta$ be random elements in $S, T$, where $T$ is Borel. Then $\mathcal{L}(\xi, \eta)=\mathcal{L}(\xi) \otimes \mu$ for a probability kernel $\mu: S \rightarrow T$, where $\mu$ is unique a.e. $\mathcal{L}(\xi)$ and satisfies
(i) $\mathcal{L}(\eta \mid \xi)=\mu(\xi, \cdot)$ a.s.,
(ii) $E\{f(\xi, \eta) \mid \xi\}=\int \mu(\xi, d t) f(\xi, t)$ a.s., $\quad f \geq 0$.
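To make the disintegration $\mathcal{L}(\xi, \eta)=\mathcal{L}(\xi) \otimes \mu$ concrete, here is a small sketch with a discrete joint law of my own choosing (not from the book): the kernel is obtained by conditioning on each value of $\xi$, and the product of the marginal with the kernel recovers the joint law.

```python
from fractions import Fraction

# Hypothetical discrete example (my own, not from the book):
# xi takes values in S = {0, 1}, eta in T = {0, 1}.
joint = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4),
}

# Marginal law of xi: L(xi){s} = sum_t joint{(s, t)}
law_xi = {s: sum(p for (s2, _), p in joint.items() if s2 == s) for s in (0, 1)}

# Disintegration kernel: mu(s, {t}) = P{eta = t, xi = s} / P{xi = s},
# defined wherever L(xi){s} > 0 (mu is only unique L(xi)-a.e.).
mu = {(s, t): joint[(s, t)] / law_xi[s] for (s, t) in joint}

# mu(s, .) is a probability measure for each fixed s
assert all(sum(mu[(s, t)] for t in (0, 1)) == 1 for s in (0, 1))

# The disintegration identity L(xi, eta) = L(xi) (x) mu, checked pointwise
assert all(joint[(s, t)] == law_xi[s] * mu[(s, t)] for (s, t) in joint)
```

In this discrete setting, (ii) reduces to $E\{f(\xi,\eta)\mid \xi=s\}=\sum_t \mu(s,\{t\})\,f(s,t)$.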
As a context, we also have for any measurable spaces $(T, \mathcal{T})$ and $(S, \mathcal{S})$, a kernel $\mu: T \rightarrow S$ is defined as a function $\mu: T \times \mathcal{S} \rightarrow \overline{\mathbf{R}}_{+}$, such that $\mu(t, B)$ is $\mathcal{T}$-measurable in $t \in T$ for fixed $B$ and a measure in $B \in \mathcal{S}$ for fixed $t$. In particular, $\mu$ is a probability kernel if $\mu(t, S)=1$ for all $t$. Random measures are simply kernels on the basic probability space $\Omega$.
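As a sanity check on the kernel definition, here is a sketch of a concrete probability kernel of my own choosing (not from the book): $\mu(t,\cdot)=N(t,1)$, a probability kernel from $\mathbf{R}$ to $\mathbf{R}$. For fixed $t$ it is a probability measure in $B$, and for fixed $B$ it is a (continuous, hence measurable) function of $t$.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical kernel (my example): mu(t, .) = N(t, 1).
# For an interval B = (a, b], mu(t, B) = Phi(b - t) - Phi(a - t).
def mu(t: float, a: float, b: float) -> float:
    return normal_cdf(b - t) - normal_cdf(a - t)

# For fixed t, mu(t, .) is a probability measure: mu(t, R) = 1
assert abs(mu(0.0, -1e9, 1e9) - 1.0) < 1e-12

# For fixed B, t -> mu(t, B) is continuous in t, hence measurable
assert abs(mu(0.0, -1.0, 1.0) - (normal_cdf(1.0) - normal_cdf(-1.0))) < 1e-12
```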
The book also gives the following definition. Fix a $\sigma$-field $\mathcal{F} \subset \mathcal{A}$ and a random element $\eta$ in a measurable space $(T, \mathcal{T})$. By a (regular) conditional distribution of $\eta$, given $\mathcal{F}$, we mean an $\mathcal{F}$-measurable probability kernel $\mu=\mathcal{L}(\eta \mid \mathcal{F}): \Omega \rightarrow T$, such that
$$ \mu(\omega, B)=P\{\eta \in B \mid \mathcal{F}\}_\omega \text { a.s., } \quad \omega \in \Omega, \quad B \in \mathcal{T} \text {. } $$
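To check my understanding of this definition, here is a Monte Carlo sketch with a hypothetical setup of my own (not from the book): $\xi$ uniform on $\{0,1\}$, and given $\xi=s$, $\eta$ is Bernoulli$(p_s)$. With $\mathcal{F}=\sigma(\xi)$, the regular conditional distribution is $\mu(\omega,\{1\})=p_{\xi(\omega)}$, and the empirical conditional frequency on $\{\xi=s\}$ should agree with it.

```python
import random

random.seed(0)

# Hypothetical model (my own): xi uniform on {0, 1};
# given xi = s, eta is Bernoulli(p[s]).
p = {0: 0.2, 1: 0.7}

n = 200_000
samples = []
for _ in range(n):
    xi = random.randrange(2)
    eta = 1 if random.random() < p[xi] else 0
    samples.append((xi, eta))

# Empirical version of P{eta = 1 | F} on the event {xi = s}:
# it is constant there and should match mu(omega, {1}) = p[s].
for s in (0, 1):
    hits = [eta for (xi, eta) in samples if xi == s]
    est = sum(hits) / len(hits)
    assert abs(est - p[s]) < 0.01  # within Monte Carlo error
```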
Still, I am puzzled by (i) of Theorem $8.5$, namely $\mathcal{L}(\eta \mid \xi)=\mu(\xi, \cdot)$, where $\mathcal{L}(\eta \mid \xi)$ denotes the conditional law. How does one use this identity to do calculations? For example, suppose we are given a new random variable $X$ and want to calculate the probability $P\{X \leq x\}$ under the conditional law $\mathcal{L}(\eta \mid \xi)$; how would this be expressed in terms of $\mu(\xi, \cdot)$?
Thank you very much.