Whilst reading this answer by Stefan, I was not entirely convinced why this equality holds:
$$\mathrm{E}[\mathrm{e}^{tU}]=\mathrm{E}[\mathrm{e}^{tX}\mathbf{1}_{Z=1}]+\mathrm{E}[\mathrm{e}^{t Y}\mathbf{1}_{Z=0}]$$
My confusion is this: since $U=XZ+Y(1-Z)$, would we not have $E[e^{tU}]=E[e^{tXZ}]\,E[e^{tY(1-Z)}]$ instead?
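For what it's worth, here is a quick Monte Carlo sketch of that candidate factorization (the normal choices for $X$, $Y$ and the fair coin $Z$ are my own, purely for illustration). It suggests the two sides disagree even when $X$, $Y$, $Z$ are mutually independent, since $XZ$ and $Y(1-Z)$ both involve the same $Z$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
t = 0.5

# Hypothetical concrete instance: X, Y, Z mutually independent,
# Z a fair Bernoulli coin, so U = X*Z + Y*(1 - Z) is the mixture.
X = rng.normal(0.0, 1.0, size=n)   # X ~ N(0, 1)
Y = rng.normal(2.0, 1.0, size=n)   # Y ~ N(2, 1)
Z = rng.integers(0, 2, size=n)     # Z ~ Bernoulli(1/2), independent of X, Y

U = X * Z + Y * (1 - Z)

lhs = np.mean(np.exp(t * U))                                         # E[e^{tU}]
rhs = np.mean(np.exp(t * X * Z)) * np.mean(np.exp(t * Y * (1 - Z)))  # product of MGFs
print(lhs, rhs)  # lhs ≈ 2.107, rhs ≈ 2.176 (they differ)
```

Analytically, $E[e^{tU}]=\tfrac12(M_X(t)+M_Y(t))$ here, while the product equals $\tfrac14(1+M_X(t))(1+M_Y(t))$, so the factorization cannot be right in general.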
However, if I instead define $Z$ such that $\mathbb{P}(Z=X)=\frac{1}{2}$ and $\mathbb{P}(Z=Y)=\frac{1}{2}$, wouldn't we achieve the same aim? That is, $M_Z(t)=\frac{1}{2}(M_X(t)+M_Y(t))$.
Presumably my approach is not correct, since I have not seen anyone comment on it; so my question is: why is it not correct? And why did Stefan introduce a brand-new random variable ($Z$)?
If you define $Z$ such that $P(Z=X)=P(Z=Y)=\frac12$, you can get this far:
$$ \begin{aligned} M_Z(t)=E(e^{tZ})&=E(e^{tZ}I_{(Z=X)}+e^{tZ}I_{(Z=Y)})\\&=E(e^{tZ}I_{(Z=X)}) + E(e^{tZ}I_{(Z=Y)})\\&=E(e^{tX}I_{(Z=X)})+E(e^{tY}I_{(Z=Y)}) \end{aligned} $$
But from this point you cannot deduce
$$ E(e^{tX}I_{(Z=X)}) = E(e^{tX})P(Z=X)\tag{*} $$
without an additional assumption. For example, you could conclude (*) if you knew that $X$ was independent of the event $\{Z=X\}$. (This would be true in the example given by Stefan, but notice that your $Z$ is his $U$.)
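To see concretely that (*) can fail without the independence assumption, here is a Monte Carlo sketch (the specific construction is my own, not from the thread): take $X$ a fair coin on $\{0,1\}$, $Y=2$ constant, and select $Z=X$ exactly when $X=1$. Then $P(Z=X)=\frac12$, but the event $\{Z=X\}=\{X=1\}$ depends on $X$, and (*) breaks; with an independent selection coin, it holds:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
t = 1.0

# Dependent selection: X is a fair coin on {0, 1}, Y = 2 is constant,
# and Z = X exactly when X = 1, else Z = Y.  So P(Z = X) = 1/2, yet
# the event {Z = X} = {X = 1} clearly depends on X.
X = rng.integers(0, 2, size=n)
select_X = (X == 1)                                # the event {Z = X}

lhs = np.mean(np.exp(t * X) * select_X)            # E[e^{tX} I_{(Z=X)}]
rhs = np.mean(np.exp(t * X)) * np.mean(select_X)   # E[e^{tX}] P(Z=X)
print(lhs, rhs)  # lhs ≈ e/2 ≈ 1.359, rhs ≈ (1+e)/4 ≈ 0.930: (*) fails

# Independent selection (Stefan's setting): a fair coin B, independent
# of X, decides whether Z = X.  Now both sides of (*) agree.
B = rng.integers(0, 2, size=n)
lhs_ind = np.mean(np.exp(t * X) * (B == 1))
rhs_ind = np.mean(np.exp(t * X)) * np.mean(B == 1)
print(lhs_ind, rhs_ind)  # both ≈ (1+e)/4 ≈ 0.930
```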