Let $f(x_1,\dots,x_n,w)$ be a function from $\mathbb{R}^{n+1}$ to $\mathbb{R}$, where $x_1,\dots,x_n$ are deterministic variables and $w$ is a random variable. As a simple example, $f$ can be $(x_1+\cdots+x_n)w$. For brevity, I will write $f(\vec x,w)$ in what follows.
Is there any difference between the following two statements?
(1) $P(f(\vec x^*,w)>0)\rightarrow 1$ and $P(f(\vec x,w)>0)\rightarrow 1$ as $n \to \infty$.
(2) $P\big(\min(f(\vec x^*,w),f(\vec x,w))>0\big)\rightarrow 1$ as $n \to \infty$.
I encountered this question while attempting to prove that a random function $f(\vec x,w)$ is positive with high probability (i.e., $P(f>0)\rightarrow 1$), where $\vec x\in A\subset \mathbb{R}^n$ is a deterministic variable and $w$ is a random variable.
Initially, I set out to prove $P(\inf_{\vec x\in A} f(\vec x,w)>0)\rightarrow 1$ using an $\epsilon$-net argument, since the region $A$ is not discrete. However, I recently realized that, for any fixed $\vec x\in A$, we already have $P(f(\vec x,w)>0)\rightarrow 1$. It therefore seems unnecessary to go through the effort of proving $P(\inf_{\vec x\in A} f(\vec x,w)>0)\rightarrow 1$, since I already know that the function is positive at every point with high probability.
I am uncertain which of the two statements I should focus on proving, and I am also confused about when one should use an $\epsilon$-net argument. Consequently, I would like to understand the precise distinction between (1) and (2).
Thanks for any suggestion!
You have sequences of random variables $(X_n)$ and $(Y_n)$ (in your notation, $X_n = f(\vec x^*,w)$ and $Y_n = f(\vec x,w)$) and you are wondering whether
(1) $P(X_n > 0) \to 1$ and $P(Y_n > 0) \to 1$
and
(2) $P(\min(X_n,Y_n)>0) \to 1$
are equivalent. The answer is yes.
Proof. Note that $\min(X_n, Y_n) \leq X_n$ and $\min(X_n, Y_n) \leq Y_n,$ so $P(\min(X_n, Y_n) > 0) \leq P(X_n > 0)$ and $P(\min(X_n, Y_n) > 0) \leq P(Y_n > 0).$ This shows (2) implies (1).
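In fact, both inequalities follow from the identity of events
$$\{\min(X_n, Y_n) > 0\} = \{X_n > 0\} \cap \{Y_n > 0\},$$
since an intersection is contained in each of the events being intersected.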
Suppose now (1) holds and let $\varepsilon \in (0, 1).$ Then there exists an $N$ such that for every $n > N,$ the events $\{X_n > 0\}$ and $\{Y_n > 0\}$ each have probability at least $1 - \frac{\varepsilon}{2}.$ If $A$ and $B$ are events, the inclusion-exclusion formula shows that $P(A \cap B) = P(A) + P(B) - P(A \cup B) \geq P(A) + P(B) - 1.$ Hence $P(\min(X_n, Y_n)>0) = P(\{X_n > 0\} \cap \{Y_n > 0\}) \geq 1 - \varepsilon$ for every $n > N,$ and (2) follows. QED
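Written out in one chain, the estimate for $n > N$ is
$$P(\min(X_n, Y_n)>0) = P(\{X_n > 0\} \cap \{Y_n > 0\}) \geq P(X_n > 0) + P(Y_n > 0) - 1 \geq \left(1 - \tfrac{\varepsilon}{2}\right) + \left(1 - \tfrac{\varepsilon}{2}\right) - 1 = 1 - \varepsilon.$$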
The above proof extends easily (by induction, say) to any finite number of sequences, but it fails for infinitely many sequences.
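That failure is exactly where the $\epsilon$-net argument earns its keep: pointwise high-probability positivity over an infinite set $A$ does not imply uniform positivity. A standard toy example (mine, not from the question): let $w \sim \mathrm{Unif}[0,1]$ and $f(x,w) = |x - w|$ on $A = [0,1]$. Then $P(f(x,w) > 0) = 1$ for every fixed $x \in A$, yet
$$P\Big(\inf_{x \in [0,1]} f(x,w) > 0\Big) = P\big(w \notin [0,1]\big) = 0.$$
An $\epsilon$-net reduces the infimum over the uncountable set $A$ to a minimum over finitely many net points, where the finite argument above (or a union bound) applies; the price is that one must also control how $f$ varies in $\vec x$ between net points.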