Let $X, Y$ be continuous random variables with densities $f_X, f_Y$.
Let $Z = \begin{pmatrix} X \\ Y \end{pmatrix}$. When is $Z$ continuous? And in that case, how can its density be expressed in terms of $f_X$ and $f_Y$?
- $X$ and $Y$ are independent: then $Z$ is continuous and $f_Z(x,y) = f_X(x)f_Y(y)$
- $Y = g(X)$, with $g$ such that $g^{-1}$ exists: then $Z$ is not continuous, as has been shown here
Up until now it's clear. Now I have two questions:
- what happens if we relax some assumptions on $g$? What if $g$ is only Borel measurable (i.e. $Y$ is measurable with respect to $\sigma(X)$)?
- I suppose that if $Y$ is not $\sigma(X)$-measurable then all bets are off, but I would like to know for sure :)
Thanks in advance! :)
Let $X$ and $Y$ be random variables such that $P\big(\left(X, Y\right)\in B\big) \ne 0$ for some set $B$ of Lebesgue measure zero. Any restriction on $X$ and $Y$ that implies such a condition also implies that $(X,Y)$ cannot be a continuous random vector; the proof, by contradiction, rests on an essential observation from the answer you reference. Assuming the joint density $f_{XY}$ exists, $$ 0\ne P\big((X,Y)\in B\big) = \int\limits_{B}f_{XY}\ \mathrm dA = 0, $$ a contradiction.
In particular, if $Y$ is $\sigma(X)$-measurable, then, by the Doob-Dynkin lemma, $Y = g(X)$ for some Borel-measurable function $g$. The graph of $g$ is a set of planar Lebesgue measure zero (each vertical section is a single point, so Fubini's theorem gives measure zero), yet it carries all the mass of $(X,Y)$, so the condition above holds. Therefore, $(X,Y)$ is not a continuous random vector.
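A quick simulation makes this concrete (a hedged sketch; the choice $g(x) = x^2$, the sample size, and the tube width are illustrative assumptions, not part of the argument): when $Y = g(X)$, every sample of $(X,Y)$ lands exactly on the graph of $g$, a Lebesgue-null set, whereas an independent pair puts only a tiny amount of mass near that graph.

```python
import random

random.seed(0)
N = 100_000
EPS = 1e-3  # half-width of a thin tube around the graph of g

def g(x):
    return x * x  # an illustrative Borel-measurable g

# Dependent case: Y = g(X), so (X, Y) always lies on the graph of g.
dep_hits = 0
for _ in range(N):
    x = random.random()
    y = g(x)
    dep_hits += abs(y - g(x)) <= EPS  # True by construction

# Independent case: Y is drawn independently of X.
ind_hits = 0
for _ in range(N):
    x = random.random()
    y = random.random()
    ind_hits += abs(y - g(x)) <= EPS

print(dep_hits / N)  # 1.0: all mass sits on a Lebesgue-null set
print(ind_hits / N)  # roughly 2 * EPS: the tube probability shrinks with EPS
```

Shrinking `EPS` drives the independent-case frequency to zero (as absolute continuity demands) while the dependent case stays at 1, which is exactly the failure of absolute continuity described above.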
On the other hand, if the push-forward probability measure defined by $(X,Y)$ is absolutely continuous with respect to the Lebesgue measure, then the joint density exists and $(X,Y)$ follows a continuous distribution, by definition.
When a probabilistic model/experiment is defined, it is defined by an underlying probability space, $(\Omega, \Sigma, P)$. Intuitively, $\Omega$ is the set of all possible outcomes of the experiment, $\Sigma$ is the set of all events that can be observed in the experiment (an event is a collection of outcomes), and $P:\Sigma\to[0,1]$ is a function that assigns probability values to events.
If, in addition, a random vector $X:\Omega\to\mathbb{R}^2$ is defined (that is, for any Borel set $B\in\mathcal{B}(\mathbb{R}^2)$, $X^{-1}(B)\in\Sigma$), then we may define the push-forward measure $P_{X}: \mathcal{B}(\mathbb{R}^2)\to[0,1]$ by $$ P_{X}(B):=P\big(X^{-1}(B)\big),\ \forall\ B\in\mathcal{B}(\mathbb{R}^2). $$ That is, from the aforementioned (possibly abstract) probability space, the random vector induces a new probability space $(\mathbb{R}^2, \mathcal{B}(\mathbb{R}^2), P_{X})$. And, like the Lebesgue measure, $P_{X}$ assigns real numbers to Borel sets.
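As a hedged sketch of this definition (the standard-normal components and the particular rectangle $B$ are assumptions chosen for illustration), the push-forward measure of a Borel set can be estimated by Monte Carlo: $P_X(B)$ is just the probability that a sample of the random vector lands in $B$.

```python
import random

random.seed(1)
N = 200_000

# Illustrative random vector X = (X1, X2) with independent standard
# normal components; B is the Borel set (0, inf) x (0, inf).
def in_B(x1, x2):
    return x1 > 0 and x2 > 0

hits = sum(in_B(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N))
est = hits / N  # Monte Carlo estimate of P_X(B)
print(est)      # close to 1/4, since P(X1 > 0) = P(X2 > 0) = 1/2
```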
$P_{X}$ is absolutely continuous with respect to the Lebesgue measure if $P_{X}(B)=0$ whenever $B$ is a Borel set of Lebesgue measure zero. This is the key observation for deciding whether a random vector is continuous: if this absolute continuity property fails, then the existence of a density (required by the definition of a continuous random vector) leads to a contradiction, as shown above. Conversely, if the property holds, then the joint density is guaranteed to exist by the Radon-Nikodym Theorem.
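A minimal concrete example of the failure of absolute continuity (the specific choices here are for illustration only): let $X \sim \mathrm{Unif}[0,1]$ and $Y = X$, and take $B$ to be the diagonal. Then
$$ B = \{(x,x) : x \in [0,1]\}, \qquad \lambda(B) = 0, \qquad P_{(X,Y)}(B) = P(X = Y) = 1 \ne 0, $$
so $P_{(X,Y)}$ is not absolutely continuous with respect to the Lebesgue measure $\lambda$, and no joint density can exist.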
As a reference, any good book that presents a measure-theoretic approach to probability theory should have these details; for instance, Rosenthal or Ash.