Conditioning on a null-event.


I am having some trouble understanding conditional probability on a null event.

Assume you have a random variable $N$ whose distribution is Poisson conditional on the parameter $\lambda$, where $\lambda$ is Gamma-distributed with parameters $\alpha,\beta$.

Then it is said that $P(N=n|\lambda=\lambda^*)=\frac{(\lambda^{*})^ne^{-\lambda^*}}{n!}$.

Now let $g(\lambda^*)$ be the pdf of $\lambda$. Then it is said that

$P(N=n)=\int_0^\infty P(N=n|\lambda=\lambda^*)g(\lambda^*)d\lambda^*$.

But how is this formula actually proved? It is fine in the discrete case, because there

$P(N=n|\lambda=\lambda^*)=\frac{P(N=n\cap\lambda=\lambda^*)}{P(\lambda=\lambda^*)}$. But we don't have this in the continuous case, since $P(\lambda=\lambda^*)=0$.

Are we assuming something more when we say that $P(N=n|\lambda=\lambda^*)=\frac{(\lambda^{*})^ne^{-\lambda^*}}{n!}$?


In the question, $N$ is a discrete random variable and $\lambda$ is a continuous one. The conditional distribution of $N \mid \lambda$ is given, and so is the marginal distribution of $\lambda$. Hence we have the joint distribution (recall that $f_{X, Y}(x, y) = f_X(x)\,f_{Y|X}(y|x)$ for continuous r.v.s):

$$ P_{N,\lambda}(n, \lambda^*) = P(N = n | \lambda = \lambda^*)g(\lambda^*). $$

And the marginal distribution of $N$ is (recall $f_Y(y)=\int f_{X, Y}(x, y)\,dx$) $$ P(N=n) = \int P_{N,\lambda}(n, \lambda^*)\,d\lambda^*=\int_0^\infty P(N=n|\lambda = \lambda^*)g(\lambda^*)\,d\lambda^*.$$
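As a sanity check, you can evaluate this integral numerically for the Gamma–Poisson case and compare it with the known closed form of the mixture, a negative binomial pmf. Here is a minimal sketch, assuming the shape–rate parameterization $g(\lambda)=\beta^\alpha\lambda^{\alpha-1}e^{-\beta\lambda}/\Gamma(\alpha)$ (the function names are just for illustration):

```python
import math

def poisson_pmf(n, lam):
    # P(N = n | lambda = lam) for a Poisson random variable
    return lam ** n * math.exp(-lam) / math.factorial(n)

def gamma_pdf(lam, alpha, beta):
    # Gamma density g(lambda), shape-rate parameterization (an assumption)
    return beta ** alpha * lam ** (alpha - 1) * math.exp(-beta * lam) / math.gamma(alpha)

def marginal_pmf(n, alpha, beta, upper=60.0, steps=200_000):
    # midpoint-rule approximation of the mixture integral over [0, upper]
    h = upper / steps
    return sum(
        poisson_pmf(n, (i + 0.5) * h) * gamma_pdf((i + 0.5) * h, alpha, beta)
        for i in range(steps)
    ) * h

def negative_binomial_pmf(n, alpha, beta):
    # known closed form of the Gamma-Poisson mixture
    p = beta / (beta + 1.0)
    return (math.gamma(n + alpha) / (math.gamma(alpha) * math.factorial(n))
            * p ** alpha * (1.0 - p) ** n)
```

For example, with $\alpha=2$, $\beta=3$, the numerical integral and the closed form agree to several decimal places for each $n$.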

One uses summation for discrete r.v.s and integration for continuous ones; the guiding principles are the same. The value of $P$ at $\lambda^*$ should be understood as a density rather than a probability, or more precisely $P(N=n,\ \lambda\in [\lambda^*, \lambda^*+d\lambda^*))\approx P_{N,\lambda}(n, \lambda^*)\,d\lambda^*.$
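The "density times $d\lambda^*$" reading can be checked by simulation: sample $\lambda$ from the Gamma distribution, then $N$ from Poisson($\lambda$), and compare the empirical frequency of $\{N=n,\ \lambda\in[\lambda^*,\lambda^*+\Delta)\}$ with $P_{N,\lambda}(n,\lambda^*)\,\Delta$ for small $\Delta$. A minimal sketch, with parameter values chosen only for illustration:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplication method (adequate for small lam)
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def joint_interval_prob(n, lam_star, width, alpha, beta, trials, seed=0):
    # Monte Carlo estimate of P(N = n, lambda in [lam_star, lam_star + width))
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        lam = rng.gammavariate(alpha, 1.0 / beta)  # (shape, scale); scale = 1/rate
        if lam_star <= lam < lam_star + width and poisson_sample(lam, rng) == n:
            hits += 1
    return hits / trials

def density_times_width(n, lam_star, width, alpha, beta):
    # the approximation P_{N,lambda}(n, lam_star) * width
    poisson = lam_star ** n * math.exp(-lam_star) / math.factorial(n)
    gamma = (beta ** alpha * lam_star ** (alpha - 1)
             * math.exp(-beta * lam_star) / math.gamma(alpha))
    return poisson * gamma * width
```

With, say, $\alpha=2$, $\beta=3$, $\lambda^*=0.5$ and $\Delta=0.05$, the two quantities agree up to Monte Carlo error, and the agreement tightens as $\Delta\to 0$.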

To make the above statements rigorous, one needs the measure-theoretic formulation, in which discrete and continuous distributions are unified.
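Loosely sketched (details omitted): in that formulation one takes a regular conditional distribution, i.e. a Markov kernel $\kappa(\lambda^*, \cdot)$, whose defining property is that averaging it against the law $P_\lambda$ of $\lambda$ recovers joint probabilities. Applied to the event $\{N=n\}$ this gives

$$ P(N=n) = \int_0^\infty \kappa(\lambda^*, \{n\})\, P_\lambda(d\lambda^*) = \int_0^\infty P(N=n \mid \lambda=\lambda^*)\, g(\lambda^*)\, d\lambda^*, $$

where $\kappa(\lambda^*, \{n\}) = (\lambda^*)^n e^{-\lambda^*}/n!$ is taken as the definition of $P(N=n\mid\lambda=\lambda^*)$. The existence and almost-everywhere uniqueness of such a kernel is what the disintegration theorem provides, which is why conditioning on the null event $\{\lambda=\lambda^*\}$ is legitimate even though the naive ratio definition fails.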