How to properly define conditional probabilities on metric spaces?


I am an economist struggling with the following problem. Let $\Theta$ be a subset of $\mathbb{R}$ endowed with the Borel $\sigma$-algebra $\mathcal{B}$ and let $\mu$ be a probability measure on $(\Theta,\mathcal{B})$. Moreover, let $\mathcal{Z}$ be some metric space endowed with its Borel $\sigma$-algebra (it would also be fine to assume that $\mathcal{Z}$ is a subset of $\mathbb{R}$). Let $s$ be a uniformly distributed random variable on $[0,1]$ and $f:[0,1]\times\Theta\rightarrow \mathcal{Z}$ a measurable function. Now let $Z$ be a random variable on $\mathcal{Z}$ generated by drawing a $\theta$ from $\mu$, drawing an $s$ from the uniform distribution, and substituting them into $f$. How do I properly define the distribution of $Z$? How do I properly define the conditional probability $\mu(A|Z=z)$ for $A\in\mathcal{B}$?
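The generative description of $Z$ can be made concrete in a short simulation. In the sketch below, the particular choices of $\mu$ (a standard normal on $\Theta=\mathbb{R}$) and of $f$ (here $f(s,\theta)=\theta+s$) are hypothetical stand-ins; any measurable $f$ and Borel probability measure $\mu$ would do.

```python
import random

# Hypothetical instances of the objects in the question: mu is a
# standard normal on Theta = R, and f(s, theta) = theta + s is an
# arbitrary measurable map [0,1] x Theta -> Z = R.

def draw_theta():
    # draw theta ~ mu
    return random.gauss(0.0, 1.0)

def f(s, theta):
    # a measurable function f: [0,1] x Theta -> Z
    return theta + s

def draw_Z():
    theta = draw_theta()      # theta ~ mu
    s = random.random()       # s ~ Uniform[0,1], independent of theta
    return f(s, theta)        # Z = f(s, theta)

sample = [draw_Z() for _ in range(10_000)]
```

Repeating `draw_Z()` samples from exactly the distribution asked about; the question of how to *define* that distribution without simulation is what the answer below addresses.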

Best answer:

To make everything work out nicely, I will assume that your metric spaces are separable and complete. Presumably, you want $\theta$ and $s$ to be drawn independently. Then their joint distribution is given by the product measure $\mu\otimes\lambda$, where $\lambda$ is the uniform distribution on $[0,1]$. The product measure $\mu\otimes\lambda$ is the unique Borel probability measure such that for all Borel sets $E\subseteq\Theta$ and $F\subseteq [0,1]$, one has $(\mu\otimes\lambda)(E\times F)=\mu(E)\cdot\lambda(F)$. For a Borel set $B\subseteq\mathcal{Z}$, the probability that the value $f(\theta,s)$ lies in $B$ is $$(\mu\otimes\lambda)\Big(\big\{ (\theta,s)\mid f(\theta,s)\in B\big\}\Big)=(\mu\otimes\lambda)\circ f^{-1}(B).$$
So the distribution of $f(\theta,s)$ is the pushforward measure $(\mu\otimes\lambda)\circ f^{-1}$, and this is then also the distribution of $Z$.
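The pushforward $(\mu\otimes\lambda)\circ f^{-1}$ can be computed directly in simple cases. The sketch below takes a hypothetical *discrete* $\mu$ (two atoms), approximates $\lambda$ by a fine midpoint grid on $[0,1]$, and evaluates $(\mu\otimes\lambda)\big(\{(\theta,s):f(\theta,s)\in B\}\big)$ for a test set $B$; all concrete choices of $\mu$, $f$, and $B$ are illustrative only.

```python
# Approximate the pushforward (mu ⊗ λ) ∘ f^{-1} for a discrete mu and a
# gridded uniform λ. Atoms of mu, the map f, and the set B are hypothetical.

mu = {0.0: 0.3, 1.0: 0.7}                        # atoms of mu with masses
grid = [(k + 0.5) / 1000 for k in range(1000)]   # midpoint grid for λ on [0,1]

def f(s, theta):
    return theta + s

def pushforward_prob(B):
    # (mu ⊗ λ)({(theta, s) : f(s, theta) in B}), with λ discretized
    total = 0.0
    for theta, w in mu.items():
        for s in grid:
            if B(f(s, theta)):
                total += w / len(grid)
    return total

# Example: B = [0.5, 1.5). Exactly: 0.3 * λ([0.5,1)) + 0.7 * λ([0,0.5)) = 0.5
p = pushforward_prob(lambda z: 0.5 <= z < 1.5)
```

For a general (non-discrete) $\mu$ one would replace the outer sum by Monte Carlo draws from $\mu$, but the logic of "push the product measure through $f$" is the same.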

You also get a joint distribution on $\Theta\times [0,1]\times\mathcal{Z}$, given as the distribution of the function $g:(\theta,s)\mapsto\big(\theta,s,f(\theta,s)\big)$ under $\mu\otimes\lambda$; it is simply $(\mu\otimes\lambda)\circ g^{-1}$. Let $\rho$ be the marginal of $(\mu\otimes\lambda)\circ g^{-1}$ on $\Theta\times\mathcal{Z}$; it gives you the joint distribution of $\theta$ and $Z$. The $\mathcal{Z}$-marginal is $(\mu\otimes\lambda)\circ f^{-1}$, which we write as $\zeta$.

Now, the conditional probability of $A\subseteq\Theta$ given $Z=z$ is not really well defined. The classic approach to defining conditional probabilities is via the Kolmogorov definition of conditional expectation. The conditional probability of a set is then the conditional expectation of its indicator function, here $$\mathbb E\big[1_A\mid Z\big],$$ a random variable that is only defined up to $\zeta$-null sets.

However, given the assumption that $\Theta$ is separable and complete, we are also guaranteed the existence of a regular conditional probability: a function $\tau:\mathcal{Z}\to\Delta(\Theta)$, with $\Delta(\Theta)$ the space of probability measures on $\Theta$, such that $z\mapsto \tau_z(A)$ is a version of the conditional probability defined above. Even so, we cannot ignore $\zeta$-null sets: conditional probabilities can be defined, but are only unique up to $\zeta$-null sets. In game theory, this often makes life hard.
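When $\mu$ is discrete and $f(s,\theta)$ has a known density in $z$ for each fixed $\theta$, a regular conditional probability $\tau_z$ can be written down by Bayes' rule: $\tau_z(\{\theta\})\propto\mu(\{\theta\})\,p(z\mid\theta)$. The sketch below uses hypothetical choices, a three-atom $\mu$ and $f(s,\theta)=\theta+s$ with $s\sim\mathrm{Uniform}[0,1]$, so that $p(z\mid\theta)$ is the indicator of $[\theta,\theta+1)$.

```python
# A regular conditional probability tau_z for a discrete mu and
# f(s, theta) = theta + s, s ~ Uniform[0,1]. All concrete choices
# (atoms of mu, the map f) are hypothetical illustrations.

mu = {0.0: 0.3, 0.5: 0.2, 1.0: 0.5}   # atoms of mu with their masses

def density_given_theta(z, theta):
    # density of Z = theta + s at z, for s ~ Uniform[0,1]:
    # the indicator of [theta, theta + 1)
    return 1.0 if theta <= z < theta + 1.0 else 0.0

def tau(z):
    # Bayes' rule: tau_z({theta}) ∝ mu({theta}) * p(z | theta)
    weights = {t: w * density_given_theta(z, t) for t, w in mu.items()}
    total = sum(weights.values())
    if total == 0.0:
        # z lies in a zeta-null set: tau_z is not pinned down here,
        # and any choice of probability measure would be a valid version
        return None
    return {t: w / total for t, w in weights.items()}

cond = tau(0.75)   # atoms 0.0 and 0.5 are compatible with z = 0.75; 1.0 is not
```

The `None` branch is exactly the non-uniqueness discussed above: off the support of $\zeta$, every choice of $\tau_z$ gives an equally valid regular conditional probability.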