Understanding proof involving continuity and monotonicity

Context

Suppose that we want to test $m$ hypotheses simultaneously and, for each hypothesis, we construct a test statistic $T_i$, $i=1,\dots,m$, based on the data. We make a decision for each hypothesis (i.e. we reject/fail to reject the null) by thresholding that hypothesis's test statistic, so the rejection rule is of the form $I(|T_i|\geq t)$ for some critical value $t$ that satisfies a multiple-testing condition. In this case, we want to control the false discovery/rejection proportion (a random quantity): the number of false rejections, $mG(t)$, over the total number of rejections, $R(t)$, should be at most some fixed, nominal level $\alpha$ (terms defined below).

Setup

Suppose, also, that each $T_i$, $i=1,\dots,m$, has an asymptotically standard normal distribution with CDF $\Phi(t)$, and that the $T_i$ are independent. The two-sided $p$-value for a single $T_i$ is $G(t)=P(|T_i|\geq t)=2-2\Phi(t)$.
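For concreteness, the two-sided $p$-value $G$ can be written down directly. A minimal sketch using only the Python standard library (the names `Phi` and `G` are mine, chosen to match the notation above):

```python
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def G(t):
    """Two-sided p-value G(t) = P(|T_i| >= t) = 2 - 2*Phi(t), for t >= 0."""
    return 2.0 - 2.0 * Phi(t)
```

Note that $G(0)=1$ and $G$ is continuous and strictly decreasing on $[0,\infty)$, which is what both answers below rely on.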

Let $R(t)=\sum_{1\leq i\leq m} I\{|T_{i}|\geq t\}$ and define for fixed $\alpha\in(0,1)$ $$ \hat{t}=\inf\left\{t\geq 0: G(t)\leq \frac{\alpha\max(R(t),1)}{m} \right\} $$
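To see concretely what $\hat{t}$ is doing, one can approximate the infimum by scanning a fine grid of thresholds. A sketch (the function name `t_hat_grid` and the grid parameters are mine, not from the paper):

```python
from math import erf, sqrt

def G(t):
    # two-sided standard-normal p-value: G(t) = 2 - 2*Phi(t) = 1 - erf(t/sqrt(2))
    return 1.0 - erf(t / sqrt(2.0))

def t_hat_grid(T, alpha, step=1e-5, t_max=10.0):
    """Approximate inf{t >= 0 : G(t) <= alpha*max(R(t),1)/m} by a grid scan."""
    m = len(T)
    t = 0.0
    while t <= t_max:
        R = sum(1 for Ti in T if abs(Ti) >= t)  # R(t), nonincreasing in t
        if G(t) <= alpha * max(R, 1) / m:
            return t
        t += step
    return t_max
```

For example, with the made-up statistics $T=(0.5,\,1.0,\,2.5,\,3.0)$ and $\alpha=0.2$, the scan stops near $t\approx 1.645$: that is where $G(t)$ first drops to the level $\alpha R(t)/m = 0.1$, on the interval where $R(t)=2$.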

The proof then states that by the continuity of $G(t)$ and the monotonicity of the step function $R(t)$ (a sum of indicators, nonincreasing in $t$), it is easy to see that $$ \frac{mG(\hat{t})}{\max(R(\hat{t}),1)}=\alpha $$

Question

Does this follow from the intermediate value theorem for continuous functions? How does the definition of $\hat{t}$ ensure the bottom expression is true when $G(t)$ and $R(t)$ are evaluated at $\hat{t}$? I'm trying to understand this part to extend this to the case when $G(t)$ is discrete rather than continuous.

The link to the paper containing the proof is here on page 2970.

BEST ANSWER

We have that $G(0) = 1$ and $R(0) = m$ hence $$G(0) = 1 > \alpha = \frac{\alpha m}{m} = \frac{\alpha \max\{R(0),1\}}{m}.$$ Therefore, we know that the curve $G$ begins above the curve $H$, where we define $$H(t) = \frac{\alpha \max\{R(t),1\}}{m}.$$

Now, $G(t)=2-2\Phi(t)\to 0$ as $t\to\infty$, while $$H(t) \geq \frac{\alpha}{m} > 0$$ for all $t$, so there exists a finite $\bar{t}$ such that for any $t > \bar{t}$, $$G(t) < H(t).$$

So, we know that $\hat{t}$ must exist and be finite; it is given by the first crossing of $G$ and $H.$ First, $G(\hat{t}) \leq H(\hat{t})$: by the definition of the infimum there are points $t_k \downarrow \hat{t}$ with $G(t_k) \leq H(t_k) \leq H(\hat{t})$ (using that $H$ is nonincreasing), and letting $k \to \infty$ gives the claim by continuity of $G$. Now suppose, for contradiction, that the inequality is strict, i.e. $$G(\hat{t}) < H(\hat{t}).$$ By the infimum property, $G(\hat{t}-\epsilon) > H(\hat{t}-\epsilon)$ for all $\epsilon>0,$ and the monotonicity of $H$ gives $$G(\hat{t}-\epsilon) > H(\hat{t}-\epsilon) \geq H(\hat{t}) > G(\hat{t}).$$ Therefore, as $\epsilon \to 0,$ $G(\hat{t}-\epsilon)$ stays bounded away from $G(\hat{t}),$ which is impossible because $G$ is continuous.

Therefore, $\hat{t}$ always satisfies $$G(\hat{t}) = \frac{\alpha \max\{R(\hat{t}),1\}}{m} \Longrightarrow \frac{mG(\hat{t})}{\max\{R(\hat{t}),1\}} = \alpha.$$
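This equality can also be checked numerically on a toy example. A sketch (the statistics, $\alpha$, and the grid scan below are all made up for illustration; the equality holds only up to the grid resolution):

```python
from math import erf, sqrt

def G(t):
    # two-sided standard-normal p-value: G(t) = 2 - 2*Phi(t)
    return 1.0 - erf(t / sqrt(2.0))

T = [0.5, 1.0, 2.5, 3.0]   # toy test statistics (illustrative only)
alpha, m = 0.2, len(T)

# scan for the first grid point where G(t) <= alpha*max(R(t),1)/m,
# i.e. an approximation of the infimum t_hat
t, step = 0.0, 1e-5
while True:
    R = sum(1 for Ti in T if abs(Ti) >= t)  # R(t)
    if G(t) <= alpha * max(R, 1) / m:
        break
    t += step

# at (approximately) t_hat, the bound is met with equality:
# m*G(t_hat)/max(R(t_hat),1) is close to alpha, up to the grid step
fdp_level = m * G(t) / max(R, 1)
```

Here the scan stops near $t\approx 1.645$ with $R(\hat{t})=2$, and `fdp_level` comes out within the grid tolerance of $\alpha=0.2$.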

Thanks to David for providing some nice insight into the problem. I'd like to build off of his work and approach it in a slightly different way.

Using David's notation, let $$ H(t)=\frac{\alpha\max (R(t),1)}{m}. $$

Now for any $t<\hat{t}$, we have $G(t)>H(t)$. Since $R(t)$, and hence $H(t)$, is nonincreasing, $H(t)\geq H(\hat{t})$ for $t<\hat{t}$, so $G(t)>H(\hat{t})$. Letting $t\to\hat{t}$ and using the continuity of $G(t)$, we get $G(\hat{t})\geq H(\hat{t})$.

By the definition of the infimum, there exists a sequence $t_k$ with $t_k\geq \hat{t}$, $t_k\to\hat{t}$, and $G(t_k)\leq H(t_k)$. Again by monotonicity of $H$, we have $H(t_k)\leq H(\hat{t})$, so $G(t_k)\leq H(\hat{t})$. Letting $t_k\to\hat{t}$ and using continuity of $G(t)$, we get $G(\hat{t})\leq H(\hat{t})$.

Hence, $G(\hat{t})=H(\hat{t})$.
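The two-sided squeeze can be seen numerically. A sketch in Python, under a hypothetical setup I'm assuming for illustration: $H$ is locally constant at the level $0.1$ near $\hat{t}$, and $\hat{t}$ is taken to be the point where $G$ hits that level (the 0.95 standard-normal quantile):

```python
from math import erf, sqrt

def G(t):
    # two-sided standard-normal p-value: G(t) = 2 - 2*Phi(t)
    return 1.0 - erf(t / sqrt(2.0))

# hypothetical example: H is constant at level 0.1 near t_hat,
# and t_hat = Phi^{-1}(0.95) is where G crosses that level
t_hat = 1.6448536269514722
level = 0.1

# approaching from the left, G stays strictly above the level ...
left = [G(t_hat - eps) for eps in (1e-1, 1e-2, 1e-3, 1e-4)]
# ... and approaching from the right, it stays at or below it
right = [G(t_hat + eps) for eps in (1e-1, 1e-2, 1e-3, 1e-4)]
```

Both one-sided limits converge to $G(\hat{t})$ by continuity, forcing $G(\hat{t})$ to equal the level exactly, which is the content of the two inequalities above.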