Confusion on why CDF suffices to show "$X$ is so and so distributed"


This question is not about any one specific exercise. In our class, we are often asked to show that a random variable, say $X$, has a particular distribution. For example:

$X \sim \mathcal{N}(0,1)$; $X \sim \mathrm{Uniform}[a,b]$; $X \sim \mathrm{Exp}(\lambda)$.

In most cases, we go about this using the cumulative distribution function: if $F_{X}(c)$ has some particular recognizable form (think of $F_{X}(c)=1-e^{-\lambda c}$, the CDF of $\mathrm{Exp}(\lambda)$), then we conclude $X \sim G$, where $G$ is the distribution with that CDF.
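As an illustration of this method (not part of the original question), here is a quick numerical sanity check: if $U \sim \mathrm{Uniform}(0,1)$, then $X = -\ln(1-U)/\lambda$ has CDF $1-e^{-\lambda c}$, so $X \sim \mathrm{Exp}(\lambda)$. The empirical CDF of simulated samples should match that formula. (The variable names and the choice $\lambda = 2$ are of course just for the example.)

```python
import math
import random

random.seed(0)
lam = 2.0
n = 200_000

# Inverse-transform sampling: U ~ Uniform(0,1)  =>  X = -ln(1-U)/lam
# has CDF P(X <= c) = 1 - exp(-lam*c), i.e. X ~ Exp(lam).
samples = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

def empirical_cdf(xs, c):
    """Fraction of samples <= c."""
    return sum(x <= c for x in xs) / len(xs)

# Compare the empirical CDF with 1 - exp(-lam*c) at a few points.
for c in [0.1, 0.5, 1.0, 2.0]:
    emp = empirical_cdf(samples, c)
    exact = 1.0 - math.exp(-lam * c)
    assert abs(emp - exact) < 0.01
```

The assertions pass because the empirical CDF converges to the true CDF (Glivenko–Cantelli), which is exactly why matching CDFs is taken as evidence of matching distributions.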

But what is the reasoning as to why I can make judgements about the distribution of a random variable by looking only at its cumulative distribution function? Is it simply an application of the uniqueness theorem for measures from measure theory, since $\{]-\infty,c]:c\in \mathbb R\}$ is a $\cap$-stable generator of $\mathcal{B}(\mathbb R)$?

And would this not imply that, for $F$ the space of CDFs and $R$ the space of distributions, the map $\varphi: F \to R$ is bijective? That is, each distribution has a unique CDF.

1 Answer

The CDF does uniquely determine a real-valued random variable's distribution. Let $F$ denote the CDF of such a variable $X$, so $$a<b\implies P(X\in (a,\,b])=P(X\le b\land X\not\le a)=F(b)-F(a).$$(Note that $a$ can be $-\infty$, and $b$ can be $\infty$.) We can also address the other kinds of intervals (a term taken herein to include half-lines and $\Bbb R$) using left limits of $F$, viz.$$P(X\in [a,\,b])=P(X\le b\land X\not\lt a)=F(b)-\lim_{x\to a^-}F(x),$$and similarly$$P(X\in (a,\,b))=\lim_{x\to b^-}F(x)-F(a),\qquad P(X\in [a,\,b))=\lim_{x\to b^-}F(x)-\lim_{x\to a^-}F(x).$$(Since $F$ is right-continuous, right limits just recover $F$; the left limits differ from $F$ exactly at the atoms of the distribution.)

By countable additivity, this determines $P(X\in S)$ for any set $S$ expressible as a countable disjoint union of intervals. Such sets do not by themselves exhaust $\mathcal B(\Bbb R)$, but they do not need to: as you suspected, the half-lines $\{]-\infty,c]:c\in\Bbb R\}$ form a $\cap$-stable generator of $\mathcal B(\Bbb R)$, so by the uniqueness theorem for measures, two distributions that agree on all of them, i.e. that have the same CDF, agree on every Borel set. So yes, each distribution corresponds to exactly one CDF, and your map $\varphi$ is a bijection between distributions and the nondecreasing, right-continuous functions tending to $0$ at $-\infty$ and $1$ at $+\infty$.
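The interval formulas above can be verified exactly on a toy discrete distribution, where the left limits of $F$ genuinely differ from $F$ at the atoms. (This example distribution on $\{0,1,2\}$ is hypothetical, chosen only to exercise all four formulas.)

```python
from fractions import Fraction as Fr

# A toy discrete distribution with atoms at 0, 1, 2.
pmf = {0: Fr(1, 4), 1: Fr(1, 2), 2: Fr(1, 4)}

def F(x):
    """CDF: P(X <= x)."""
    return sum(p for k, p in pmf.items() if k <= x)

def F_left(x):
    """Left limit lim_{t -> x^-} F(t), i.e. P(X < x)."""
    return sum(p for k, p in pmf.items() if k < x)

def P(event):
    """Direct probability of {k : event(k)}, for comparison."""
    return sum(p for k, p in pmf.items() if event(k))

a, b = 1, 2
assert P(lambda k: a <  k <= b) == F(b)      - F(a)        # (a, b]
assert P(lambda k: a <= k <= b) == F(b)      - F_left(a)   # [a, b]
assert P(lambda k: a <  k <  b) == F_left(b) - F(a)        # (a, b)
assert P(lambda k: a <= k <  b) == F_left(b) - F_left(a)   # [a, b)
```

Using exact `Fraction` arithmetic makes the equalities hold exactly rather than up to floating-point error, so each formula is checked as an identity, not an approximation.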