Domain of a random variable


Let $N\sim\text{Po}(\lambda)$ and $X_1,\ldots,X_n,\ldots$ be independent random variables $\text{Ber}(p)$. Consider $S=X_1+\ldots+X_N$. I saw in class that $S\sim \text{Po}(\lambda p)$.

What I don't understand is in which probability space $(\Omega,P)$ we're working. I learned that, if you have a Poisson random variable $Y$, then $\Omega=\mathbb{N}$, $Y(\omega)=\omega$ and $P(\{\omega\})=e^{-\lambda}\lambda^{\omega}/\omega!$. In the case of $X_i\sim\text{Ber}(p)$, I suppose that $\Omega=\{0,1\}$, $X(\omega)=\omega$, $P(\{1\})=p$ and $P(\{0\})=1-p$.

Is it true that, when you have a discrete random variable, you define it as the identity?

Which is the domain of $S$? It doesn't make sense to write $S(\omega)=X_1(\omega)+\ldots+X_{N(\omega)}(\omega)$, because $X$ and $N$ have different domains.

As you can see, I'm completely confused.

Accepted answer

If suitable probability spaces $\langle\Omega_i,\mathcal A_i,P_i\rangle$ exist for the random variables $Y_i:\Omega_i\to\mathbb R$, then we can always construct a product space $\langle\Omega,\mathcal A,P\rangle$ with projections $\pi_i:\Omega\to\Omega_i$.

If we define $Z_i:=Y_i\circ\pi_i:\Omega\to\mathbb R$ then $Z_i$ has the same distribution as $Y_i$ and can take its place.

So we reach the same situation, with the added benefit that the random variables are now all defined on the same probability space. Moreover, they are automatically mutually independent, because $P$ is the product measure.
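The construction above can be sketched numerically. This is a minimal illustration, not part of the answer: the parameter values, the NumPy sampler, and the truncation of the Bernoulli factor to `M` coordinates are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, M = 2.0, 0.3, 1000  # illustrative parameters; M truncates {0,1}^infinity

# One sample point of the product space Omega = N x {0,1}^M:
# each coordinate is drawn independently from its own marginal,
# which is exactly what the product measure prescribes.
def sample_omega():
    return (int(rng.poisson(lam)), (rng.random(M) < p).astype(int))

# Z_i := Y_i o pi_i -- each random variable reads only its own coordinate.
def N(omega):
    return omega[0]  # projection onto the first (Poisson) factor

def X(i, omega):
    return int(omega[1][i - 1])  # projection onto the i-th Bernoulli factor

omega = sample_omega()
S = sum(X(i, omega) for i in range(1, N(omega) + 1))
print(N(omega), S)
```

Because each random variable depends only on its own coordinate, and the coordinates are sampled independently, the variables are independent on the common space.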


Also, the distribution of a random variable only imposes sufficient conditions on the underlying space; those conditions are not necessary. For example, it is not required that $\Omega=\mathbb N$ when we are dealing with a Poisson distribution.

Second answer

A sufficient sample space for your experiment could be $\Omega = \Bbb N \times \{0,1\}^\infty$, the Cartesian product of the natural numbers and an infinite sequence of Bernoulli outcomes. It's not the only space that could describe the experiment, but it is a useful one, since you can clearly see how the random variables of interest would map from it. (Another viable sample space would be $\{0\}\cup\Bbb R^+$, though this is somewhat less obvious; it has to do with expressing the fractional part in base 2.)

Who really cares? All we require is that there is some probability space on which the random variables can be defined so that their probability mass functions are as described.


Now, we require the random variable $N$ to map $\Omega\to\Bbb N$ such that the probability measure gives the result: $$\mathsf P(N^{-1}\{n\})~=~\dfrac{\lambda^n e^{-\lambda}}{n!}~\mathbf 1_{n\in\Bbb N}$$

where $\forall n\in\Bbb N:N^{-1}\{n\}=\{\omega\in\Omega\mid N(\omega)=n\}$.

Although we more often write this simply as: $$\mathsf P(N=n)~=~\dfrac{\lambda^n e^{-\lambda}}{n!}~\mathbf 1_{n\in\Bbb N}$$


Also, each random variable in the sequence $(X_i)_{i\in\Bbb N^+}$ maps $\Omega\to \{0,1\}$, and the (same) probability measure must be such that for all $i\in\Bbb N^+$ we have: $$\mathsf P(X_i^{-1}\{0\})~=~1-p~,~ \mathsf P(X_i^{-1}\{1\})=p$$

Aka:

$$\mathsf P(X_i=x) ~=~ p~\mathbf 1_{x=1}+(1-p)~\mathbf 1_{x=0}$$


Then we can define the random variable $S:\Omega\to\Bbb N$ via $\forall \omega\in\Omega~:~S(\omega)=\sum_{i=1}^{N(\omega)} X_i(\omega)$

This is usually abbreviated as $S=\sum_{i=1}^{N} X_i$.
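As a sanity check, we can simulate this construction and compare the empirical distribution of $S$ with the $\text{Po}(\lambda p)$ pmf. A minimal sketch, with illustrative parameter values; it uses the fact that, given $N=n$, the sum $X_1+\ldots+X_n$ is $\text{Binomial}(n,p)$:

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
lam, p, trials = 2.0, 0.3, 200_000  # illustrative parameters

# Draw N ~ Po(lam); given N = n, X_1 + ... + X_n is Binomial(n, p),
# so S can be sampled directly as a binomial with a random trial count.
ns = rng.poisson(lam, size=trials)
samples = rng.binomial(ns, p)

# Compare empirical frequencies with the Po(lam * p) pmf.
mu = lam * p
for s in range(5):
    emp = float(np.mean(samples == s))
    pmf = exp(-mu) * mu ** s / factorial(s)
    print(s, round(emp, 4), round(pmf, 4))
```

The empirical frequencies should agree with the $\text{Po}(\lambda p)$ probabilities up to Monte-Carlo error.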

And without reference to $\Omega$, whatever it may be, we can show that, due to the mutual independence of $(N, X_1, X_2, \ldots)$ and the identical distribution of the $(X_i)$:

$$\mathsf P(S=s) = \sum_{n=s}^\infty \mathsf P(N=n)\binom n s \mathsf P(X_i=1)^s \mathsf P(X_i=0)^{(n-s)}~\mathbf 1_{s\in \Bbb N}$$

This simplifies to a rather familiar expression.
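Carrying out the simplification explicitly: substitute $\mathsf P(X_i=1)=p$, $\mathsf P(X_i=0)=1-p$ and the Poisson pmf, pull the terms in $s$ out of the sum, and reindex with $m=n-s$:

$$\mathsf P(S=s) ~=~ \sum_{n=s}^\infty \frac{\lambda^n e^{-\lambda}}{n!}\,\frac{n!}{s!\,(n-s)!}\,p^s(1-p)^{n-s} ~=~ e^{-\lambda}\,\frac{(\lambda p)^s}{s!}\sum_{m=0}^\infty \frac{\big(\lambda(1-p)\big)^m}{m!} ~=~ \frac{(\lambda p)^s\, e^{-\lambda p}}{s!}~\mathbf 1_{s\in\Bbb N}$$

which is the pmf of $\text{Po}(\lambda p)$, as stated in the question.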