Why is the independence condition needed for $Pr(X=Y)=0$?


I know that if $X,Y$ are independent random variables then $Pr(X=Y)=0$. It kind of makes sense to me to ask for independence, since $Pr(X=X)=1$. I know the proof of this when $X,Y$ are discrete, but when trying to prove it for continuous independent variables I found that I don't seem to need independence at all. This is of course wrong, but I can't find my mistake. I will leave my "proof" here so you can point out the mistake:

The fake proof:

$$Pr(X=Y)=Pr((X,Y)\in \{(x,y):x=y\})=\underset{\{(x,y):x=y\}}\iint f_{X,Y}(x,y)\;dx\;dy $$ $$ = \int_x^x \int_{-\infty}^\infty f_{X,Y}(x,y)\;dx\;dy \underbrace{=}_{\text{Fubini}} \int_{-\infty}^\infty \int_x^x f_{X,Y}(x,y)\;dy\;dx=0 $$

I can kind of guess that my mistake might be somewhere near the word "Fubini", because I'm probably missing the right hypotheses for using this theorem. Also, I'd say it should be important to use the fact that $f_{X,Y}=f_X f_Y$ since $X \perp Y$, but I still don't get it.

Thank you all in advance

Best answer:

If $X,Y$ have a joint PDF $f_{X,Y}$ then $P(X=Y)=0$.

For this we do not need independence.

Write $[x=y]=1$ if $x=y$ and $[x=y]=0$ otherwise (the Iverson bracket). Then:$$P(X=Y)=\mathbb E[X=Y]=\iint[x=y]f_{X,Y}(x,y)\;dx\;dy=\int0\;dy=0,$$because for each fixed $y$ the inner integral over $x$ is taken over the single point $\{y\}$, a set of measure zero.
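A quick numerical illustration of this (my own sketch, not part of the argument above): sample a *dependent* pair $(X,Y)$ that still has a joint PDF, and observe that no sampled pair ever lands exactly on the diagonal. The construction $Y = X + Z$ is a hypothetical choice just for this demo.

```python
import numpy as np

# Sketch: P(X = Y) = 0 whenever (X, Y) has a joint PDF -- independence
# is not needed. Here Y = X + Z with Z ~ N(0, 1), so X and Y are
# strongly dependent, yet (X, Y) still has a joint density on R^2.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)  # dependent on x

# Fraction of sampled pairs with x exactly equal to y.
print(np.mean(x == y))
```

With probability 1 (and in any actual run, for all practical purposes) the printed fraction is 0: the diagonal $\{x=y\}$ has zero area, so it carries zero probability under any joint density.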

Things are different if $X,Y$ have a joint PMF $p_{X,Y}$.

In that case: $$P(X=Y)=\mathbb E[X=Y]=\sum_{(x,y)\in S}[x=y]p_{X,Y}(x,y)=\sum_{(z,z)\in S}p_{X,Y}(z,z)$$where $S$ denotes the countable set $\{(x,y)\in\mathbb R^2\mid p_{X,Y}(x,y)>0\}$.
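To make the discrete formula concrete, here is a small made-up joint PMF on $\{0,1\}^2$ (the numbers are hypothetical, chosen only so they sum to 1); $P(X=Y)$ is the sum of the joint PMF over the diagonal:

```python
from fractions import Fraction

# Hypothetical joint PMF of a dependent pair (X, Y) on {0, 1}^2.
p = {
    (0, 0): Fraction(1, 2),
    (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 8),
    (1, 1): Fraction(1, 8),
}

# P(X = Y): sum p_{X,Y}(z, z) over the diagonal, per the formula above.
p_equal = sum(prob for (x, y), prob in p.items() if x == y)
print(p_equal)  # -> 5/8
```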

If $X$ and $Y$ are moreover independent, with PMFs $p_X$ and $p_Y$ respectively, then $S=S_X\times S_Y$ where $S_X=\{x\in\mathbb R\mid p_X(x)>0\}$ and $S_Y=\{y\in\mathbb R\mid p_Y(y)>0\}$.

So in that case we have:$$P(X=Y)=\mathbb E[X=Y]=\sum_{z\in S_X\cap S_Y}p_{X}(z)p_Y(z)$$ where the RHS is positive if and only if $S_X\cap S_Y\neq\varnothing$, since every term in the sum is positive.
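For instance (my example, not from the answer), two independent fair dice give $P(X=Y)=\sum_{z=1}^{6}\tfrac16\cdot\tfrac16=\tfrac16$, which the last formula computes directly:

```python
from fractions import Fraction

# Independent discrete case: P(X = Y) = sum over z in S_X ∩ S_Y
# of p_X(z) * p_Y(z). Here X, Y are independent fair dice.
p_x = {z: Fraction(1, 6) for z in range(1, 7)}
p_y = {z: Fraction(1, 6) for z in range(1, 7)}

p_equal = sum(p_x[z] * p_y[z] for z in set(p_x) & set(p_y))
print(p_equal)  # -> 1/6
```

So for independent discrete variables with overlapping supports, $P(X=Y)$ is strictly positive, in contrast to the continuous case.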