Different definitions of independent random variables


Let $X$ and $Y$ be random variables with a joint density function. In some books, the independence of $X$ and $Y$ is defined as \begin{equation} P(X\in A,\ Y\in B)=P(X\in A)P(Y\in B) \tag{1} \end{equation} for all Borel subsets $A,B\subseteq\mathbb{R}$. In other books (and on Wikipedia), independence is defined as $$F_{X,Y}(x,y)=F_X(x)F_Y(y)\tag{2}$$ for all $x,y\in\mathbb{R}$, where the $F$'s are the cumulative distribution functions. Presumably these are equivalent. (1)$\Rightarrow$(2) is clear from the definition of $F$. For (2)$\Rightarrow$(1), it is easy to show that $$P(X\in I,\ Y\in J)=P(X\in I)P(Y\in J)$$ for all intervals $I,J\subseteq\mathbb{R}$. I'm guessing that this implies (1). Am I right? If so, I would appreciate an outline of the proof, thank you.
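For reference, the interval step the question calls "easy to show" can be sketched by inclusion–exclusion, assuming half-open intervals $I=(a,b]$ and $J=(c,d]$ (other interval types follow by taking limits):

```latex
\begin{align*}
P(X\in(a,b],\ Y\in(c,d])
  &= F_{X,Y}(b,d)-F_{X,Y}(a,d)-F_{X,Y}(b,c)+F_{X,Y}(a,c) \\
  &= F_X(b)F_Y(d)-F_X(a)F_Y(d)-F_X(b)F_Y(c)+F_X(a)F_Y(c) \\
  &= \bigl(F_X(b)-F_X(a)\bigr)\bigl(F_Y(d)-F_Y(c)\bigr) \\
  &= P(X\in(a,b])\,P(Y\in(c,d]).
\end{align*}
```

The second line substitutes definition (2) into each of the four joint CDF terms.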

Accepted answer:

First fix an interval $I$. Let $\mathcal C$ be the collection of Borel sets $C$ such that $$ P(X \in I,\ Y \in C) = P(X \in I) P(Y \in C) .$$ Then $\mathcal C$ is a $\lambda$-system containing the intervals, which form a $\pi$-system generating the Borel $\sigma$-algebra; by Dynkin's $\pi$-$\lambda$ theorem, $\mathcal C$ contains all Borel sets.
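The $\lambda$-system checks above can be spelled out as follows (a sketch; with $I$ fixed, write $\mu(C) = P(X \in I,\ Y \in C)$ and $\nu(C) = P(X \in I)\,P(Y \in C)$, two finite measures with the same total mass $P(X \in I)$):

```latex
% \mathcal C = \{ C \in \mathcal B(\mathbb R) : \mu(C) = \nu(C) \}
% is a \lambda-system:
\begin{itemize}
  \item $\mathbb R \in \mathcal C$, since $\mu(\mathbb R) = P(X \in I) = \nu(\mathbb R)$.
  \item If $C \in \mathcal C$, then
        $\mu(C^c) = \mu(\mathbb R) - \mu(C) = \nu(\mathbb R) - \nu(C) = \nu(C^c)$,
        so $C^c \in \mathcal C$.
  \item If $C_1, C_2, \ldots \in \mathcal C$ are pairwise disjoint, countable
        additivity gives
        $\mu\bigl(\textstyle\bigcup_n C_n\bigr) = \sum_n \mu(C_n)
        = \sum_n \nu(C_n) = \nu\bigl(\textstyle\bigcup_n C_n\bigr)$.
\end{itemize}
```

Note that closure under finite intersections is not obvious for $\mathcal C$, which is exactly why the $\pi$-$\lambda$ theorem, rather than a direct $\sigma$-algebra argument, is used.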

Then fix a Borel set $B$, and let $\mathcal D$ be the collection of Borel sets $D$ such that $$ P(X \in D,\ Y \in B) = P(X \in D) P(Y \in B) .$$ By the first step, $\mathcal D$ contains all intervals; it is again a $\lambda$-system, so Dynkin's $\pi$-$\lambda$ theorem shows that $\mathcal D$ contains all Borel sets. Since $B$ was arbitrary, (1) holds for all Borel sets $A$ and $B$.