Assume you have two continuous random variables $X,Y$. Also assume that their joint probability density function can be written
$f(x,y)=p_1(x)p_2(y)$,
must they then be independent?
The problem is that we don't know whether $p_1$ and $p_2$ are also the marginal densities. It seems natural that $X$ and $Y$ should be independent, but I do not see how to prove it. Is there a simple way to prove this, or is there a counter-example?
The definition of two events being independent is that $$ P(A\cap B) = P(A)\,P(B). $$ The definition of two random variables $X$ and $Y$ being independent is that for all $x, y$ the events $(X\leq x)$ and $(Y\leq y)$ are independent events. It is pretty easy, then, to use the above definition of independent events to show that this implies that the cumulative distribution functions satisfy $$ F_{X,Y}(x,y)= F_X(x)\,F_Y(y). $$ Conversely, given that $F_{X,Y}(x,y)= F_X(x)\,F_Y(y)$, it is easy to see that the event-based definition of independence is met. So let's call this the CDF independence criterion -- it is equivalent to the by-definition independence criterion.
In turn, assuming only that the probability densities $f_X(x)$ and $f_Y(y)$ and the joint probability density $f_{X,Y}(x,y)$ all exist, it is straightforward to see that independence implies $$ f_{X,Y}(x,y)= f_X(x)\,f_Y(y). $$
The zinger, of course, is that there are some cumulative distribution functions which don't have corresponding pdfs, but we can ignore that annoyance because we are given that $$ f_{X,Y}(x,y)= p_1(x)p_2(y) = f_X(x)\,f_Y(y) $$
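It may help to spell out why the last equality holds even though $p_1$ and $p_2$ need not themselves be the marginal densities. Marginalizing the joint density gives $$ f_X(x)=\int_{-\infty}^{\infty} p_1(x)\,p_2(y)\,dy = c_2\,p_1(x), \qquad f_Y(y)= c_1\,p_2(y), $$ where $c_1=\int_{-\infty}^{\infty} p_1(x)\,dx$ and $c_2=\int_{-\infty}^{\infty} p_2(y)\,dy$. Since the joint density integrates to $1$, we have $c_1 c_2 = 1$, and therefore $$ f_X(x)\,f_Y(y) = c_1 c_2\, p_1(x)\,p_2(y) = p_1(x)\,p_2(y). $$ So the factors can differ from the marginals only by constants, and those constants cancel in the product.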
Now if $p_1(x)$ and $p_2(y)$ are both integrable, you can integrate to obtain the CDF independence criterion, showing that if $f_{X,Y}(x,y)= p_1(x)p_2(y)$ for all $x,y$ then $X$ and $Y$ are independent.
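Explicitly, the integration factorizes (Tonelli's theorem applies, since the integrand is non-negative): $$ F_{X,Y}(x,y)=\int_{-\infty}^{x}\!\int_{-\infty}^{y} f_X(s)\,f_Y(t)\,dt\,ds =\left(\int_{-\infty}^{x} f_X(s)\,ds\right)\!\left(\int_{-\infty}^{y} f_Y(t)\,dt\right) =F_X(x)\,F_Y(y). $$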
But we know that $p_1(x)$ and $p_2(y)$ are both integrable, because by definition a density is a non-negative Lebesgue-integrable function.
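As a concrete sanity check, here is a small SymPy sketch with a hypothetical factorization in which neither factor is itself a density ($p_1$ integrates to $2$, $p_2$ to $1/2$), yet the product of the marginals still recovers the joint density:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# A factorized joint density on (0, oo)^2 whose factors are
# deliberately NOT normalized (hypothetical example):
p1 = 2 * sp.exp(-x)      # integrates to 2 over (0, oo)
p2 = sp.exp(-2 * y)      # integrates to 1/2 over (0, oo)
f_joint = p1 * p2        # still a valid joint density: total mass 1

# Marginals: integrate out the other variable.
f_X = sp.integrate(f_joint, (y, 0, sp.oo))   # = exp(-x), not p1
f_Y = sp.integrate(f_joint, (x, 0, sp.oo))   # = 2*exp(-2*y), not p2

# The normalization constants cancel: f_X * f_Y == f_joint.
print(sp.simplify(f_X * f_Y - f_joint))      # 0

total_mass = sp.integrate(f_joint, (x, 0, sp.oo), (y, 0, sp.oo))
print(total_mass)                            # 1
```

The point of choosing unnormalized factors is to illustrate that the constants by which $p_1$ and $p_2$ differ from the marginals always cancel in the product.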
In an arbitrary measure space, this argument still goes through, since again, by definition, a density is a measurable function.