I think there is a well known theorem that says that if two random variables are independent then $$E[XY] = E[X]E[Y]$$
My question would be how do we compute $E[XY]$ if it so happened that our two random variables are not independent?
In general, you can't say anything about $\mathbb{E}[XY]$ given only the expectations of $X$ and $Y$. You can even find examples where $X$ and $Y$ have finite expectations but $XY$ fails to have an expectation, or has an infinite one (think of a distribution with finite mean but infinite variance)! You need to know something about the joint distribution of $(X,Y)$. In the case that $X$ and $Y$ are independent, the information you have about the joint distribution is that $$\mathbb{P}_{(X,Y)} = \mathbb{P}_X \otimes \mathbb{P}_Y$$
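For concreteness, here is one example along the lines of that hint (my own choice of distribution, not the only one): take $X$ with Pareto density $f(x) = \alpha x^{-\alpha-1}$ on $[1,\infty)$ for some $1 < \alpha \le 2$, and set $Y = X$. Then $$\mathbb{E}[X] = \mathbb{E}[Y] = \frac{\alpha}{\alpha-1} < \infty \quad\text{but}\quad \mathbb{E}[XY] = \mathbb{E}[X^2] = \int_1^\infty \alpha x^{1-\alpha}\,dx = \infty,$$ since the exponent $1-\alpha \ge -1$ makes the integral diverge.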
By computing the distribution of $XY$. There really is no other way. Here is a nice exercise. Let $X$ and $Y$ be two random variables such that both $X$ and $Y$ can only take the values $0$ and $1$, and both $X$ and $Y$ have expectation $1/2$.
What is the lowest value for $E[XY]$ you can achieve by some devious conspiracy between the distributions of $X$ and $Y$? What is the highest value of $E[XY]$ you can get?
EDIT: 'no other way' is perhaps put a bit strongly. Wikipedia suggests using the law of total expectation, $E[XY] = E_Y\big[\,E[XY \mid Y]\,\big]$, which would be an example of an 'other way' in the somewhat-weird-sounding-but-probably-occurring-in-some-natural-examples scenario that you do not know the unconditional distribution of $XY$ but do know the conditional distribution of $XY$ for each given value of $Y$ (as well as the distribution of $Y$).
However, the point of my exercise was to convey something else: that quite a variety of values (although not an unlimited one!) for $E[XY]$ is possible even when $E[X]$, $E[Y]$, and some other constraints on $X$ and $Y$ are known.
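As a quick sanity check of that claim (a sketch I'm adding, and be warned: it spoils the exercise above): for $\{0,1\}$-valued $X, Y$ with $E[X]=E[Y]=1/2$, the joint distribution is pinned down by the single number $p_{11} = P(X=1, Y=1)$, and $E[XY] = p_{11}$. A brute-force scan over valid joint tables shows the achievable range:

```python
# Brute-force scan over joint distributions of {0,1}-valued X, Y with
# E[X] = E[Y] = 1/2, to see which values E[XY] can take.
# (Illustrative sketch; variable names are my own.)

e_xy_values = []
for k in range(0, 501):          # try p11 = 0.000, 0.001, ..., 0.500
    p11 = k / 1000               # P(X=1, Y=1)
    p10 = 0.5 - p11              # P(X=1, Y=0); forces E[X] = 1/2
    p01 = 0.5 - p11              # P(X=0, Y=1); forces E[Y] = 1/2
    p00 = 1.0 - p01 - p10 - p11  # remaining mass on (0, 0)
    if min(p00, p01, p10, p11) < 0:
        continue                 # not a valid probability table
    e_xy_values.append(p11)      # E[XY] = sum over x,y of x*y*p_xy = p11

print(min(e_xy_values), max(e_xy_values))  # 0.0 0.5
```

So $E[XY]$ can be anything in $[0, 1/2]$: the low end has $X$ and $Y$ never equal to $1$ together, the high end has $X = Y$.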