Understanding sum rule for marginal probability


If $p(x,y)$ is the joint distribution of two discrete random variables $X$ and $Y$, the sum rule states that: $$p(x) = \sum_{y \in T} p(x,y)$$

where $T$ is the target space (the set of possible states) of the random variable $Y$.

As per my understanding, this is basically the law of total probability: the events $\{Y = y\}$ for $y \in T$ form a partition of the outcome space $\Omega$, so we can calculate the marginal probability of $x$ regardless of $y$ (please correct me if something here is not accurate).
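To make the discrete sum rule concrete, here is a small sketch with a made-up joint pmf over two variables (the table values are my own example, chosen so the probabilities are exact in binary floating point):

```python
# Hypothetical joint pmf p(x, y) for x in {0, 1}, y in {0, 1, 2}.
# Values are dyadic fractions so sums are exact.
joint = {
    (0, 0): 0.125, (0, 1): 0.125, (0, 2): 0.0625,
    (1, 0): 0.25,  (1, 1): 0.25,  (1, 2): 0.1875,
}

# Sum rule: p(x) = sum over all y in T of p(x, y).
p_x = {}
for (x, y), p in joint.items():
    p_x[x] = p_x.get(x, 0.0) + p

print(p_x)  # {0: 0.3125, 1: 0.6875}
```

Summing the marginal over all $x$ recovers 1, as it must for a valid distribution.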

Now, my issue is with the other form of the sum rule (for continuous random variables): $$p(x) = \int_{T} p(x,y) dy$$

It seems logical to me, but I want to understand how can we end up with this form for continuous r.v., so any pointers?
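One way to build intuition for the integral form is to check it numerically on a density whose marginal is known in closed form. Below is a sketch using the standard textbook density $p(x,y) = x + y$ on the unit square (my own choice of example), whose exact marginal is $p(x) = x + \tfrac{1}{2}$; the integral over $y$ is approximated with the trapezoidal rule:

```python
# Joint density p(x, y) = x + y on [0, 1]^2; it integrates to 1.
def joint(x, y):
    return x + y

# Approximate p(x) = integral over y in [0, 1] of p(x, y) dy
# via the trapezoidal rule with n subintervals.
def marginal(x, n=10_000):
    h = 1.0 / n
    total = 0.5 * (joint(x, 0.0) + joint(x, 1.0))
    for i in range(1, n):
        total += joint(x, i * h)
    return total * h

print(marginal(0.3))  # close to 0.8, the exact value x + 1/2
```

As the grid is refined, the numerical marginal converges to $x + \tfrac{1}{2}$, which is exactly what the continuous sum rule predicts.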

There are 2 best solutions below


The keyword is the measure you define on the space $T$. The integral notation is the most general one in probability theory: when $T$ is discrete and carries the counting measure (a sum of Dirac point masses), the integral reduces to a sum. The measure tells you how elements of the set $T$ are to be "counted in" when computing probabilities. Here the measure is $dy$, i.e. the Lebesgue measure, which weights all parts of $T$ uniformly.


It seems to me to follow from the definition of a joint density $p(x,y)$, which is

$$P(X \in A, Y \in B) = \int_A \int_B p(x, y) \, dy \, dx$$

and the definition of the marginal density $p(x)$, which is

$$P(X \in A) = \int_A p(x) \, dx$$

If $T$ is the set of all possible values of $Y$, then $P(X \in A) = P(X \in A, Y \in T)$, and by the definition of a joint density:

$$P(X \in A) = \int_A \int_T p(x, y) \, dy \, dx$$

Notice that this is just the definition of the marginal density, but with $p(x)$ replaced by $\int_T p(x, y) \, dy$. So that must be the marginal density.
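The identity $P(X \in A) = \int_A \int_T p(x, y) \, dy \, dx$ in the derivation above can also be checked numerically. This sketch reuses the textbook density $p(x,y) = x + y$ on $[0,1]^2$ and the set $A = [0, 0.5]$ (both my own choices): the inner integral over $y$ produces the marginal, and the outer integral over $A$ produces the probability.

```python
# Joint density p(x, y) = x + y on [0, 1]^2.
def joint(x, y):
    return x + y

# Trapezoidal rule for integrating f over [a, b].
def trapezoid(f, a, b, n=2000):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Inner integral over y in T = [0, 1] gives the marginal density;
# outer integral over A = [0, 0.5] gives P(X in A).
prob = trapezoid(lambda x: trapezoid(lambda y: joint(x, y), 0.0, 1.0), 0.0, 0.5)
print(prob)  # close to 0.375, the exact value of the integral of (x + 1/2) over [0, 0.5]
```

The exact answer is $\int_0^{0.5} (x + \tfrac{1}{2}) \, dx = 0.375$, so the double integral and the integral of the marginal agree, as the derivation says they must.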