I'm trying to prove the following from Lehmann's "Elements of Large Sample Theory"
Lemma 2.3.1: If the sequence $\{Y_n, n=1,2,\ldots\}$ is bounded in probability and if $\{C_n\}$ is a sequence of random variables tending to $0$ in probability, then $C_n Y_n \xrightarrow{P} 0 $
(The notation "$\xrightarrow{P}$" indicates convergence in probability.)
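For intuition (not as part of a proof), here is a quick Monte Carlo check that the conclusion holds even when the two sequences are maximally dependent: both are built from the same standard normal draw $Z$, with $Y_n = Z$ and $C_n = Z/n$, so $C_n Y_n = Z^2/n$.

```python
import numpy as np

# Illustration only: Y_n and C_n share the SAME randomness Z, so they are
# fully dependent, yet P(|C_n Y_n| > eps) still goes to 0 as n grows.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)

eps = 0.1
probs = {}
for n in (10, 100, 1000):
    y_n = z          # Y_n = Z: bounded in probability (its distribution is tight)
    c_n = z / n      # C_n = Z/n: tends to 0 in probability
    probs[n] = np.mean(np.abs(c_n * y_n) > eps)  # estimate P(|C_n Y_n| > eps)
    print(n, probs[n])
```

The estimated probabilities shrink toward $0$ as $n$ increases, which is consistent with the lemma despite the total dependence between the sequences.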
Here was my initial attack: we know that for any $\epsilon_0 > 0$ there is a suitable $K$ such that, for sufficiently large $n$, (1) $$P\left(| Y_n | \leq K\right) > 1 - \epsilon_0$$
Similarly, we know that for any $\epsilon_1, \epsilon_2 > 0$ and sufficiently large $n$, (2)
$$ P\left(|C_n| < \frac{\epsilon_1}{K}\right) > 1 - \epsilon_2 $$
Here's where I run into trouble. If we assume independence of the two sequences, we should be able to multiply the probabilities (1) and (2) above to get:
$$ P(|Y_n C_n| < \epsilon_1) > (1-\epsilon_0) (1-\epsilon_2) \to 1, $$
since the choices of the epsilons were arbitrary.
My problem is: what if we can't assume independence? This proof strikes me as incorrect, though my heart is probably in the right place. Any help is much appreciated. I'll think about it myself and make edits if I make any progress.
UPDATE 1:
I'm beginning to think that instead of bounding $P((1) \land (2))$ from below, we should bound the probability of the complement from above, since $P(\overline{(1) \land (2)}) \leq P(\overline{(1)}) + P(\overline{(2)})$. Now we should be able to show $P(\overline{(1)}) + P(\overline{(2)}) \to 0$ without assuming anything about independence.
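If it helps, the union-bound idea can be written out explicitly; this is only a sketch, reusing $K$, $\epsilon_0$, $\epsilon_1$, $\epsilon_2$ from (1) and (2). If $|Y_n| \leq K$ and $|C_n| < \epsilon_1/K$, then $|C_n Y_n| < \epsilon_1$; taking complements gives the event containment

$$\{|C_n Y_n| \geq \epsilon_1\} \subseteq \{|Y_n| > K\} \cup \left\{|C_n| \geq \frac{\epsilon_1}{K}\right\},$$

and by subadditivity (which needs no independence),

$$P(|C_n Y_n| \geq \epsilon_1) \leq P(|Y_n| > K) + P\left(|C_n| \geq \frac{\epsilon_1}{K}\right) < \epsilon_0 + \epsilon_2$$

for sufficiently large $n$. Since $\epsilon_0$ and $\epsilon_2$ were arbitrary, $C_n Y_n \xrightarrow{P} 0$.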