In the proof of the Neyman–Pearson lemma:
Let $ \pi_0, \pi_1 $ be two populations with probability densities $ f_0, f_1 $ (with respect to a measure $ \mu $). For testing $ H_0: f = f_0 $ against $ H_1: f = f_1 $, we can define a test $ \phi $ with a constant $ k $ such that the expectation of the test under the null hypothesis $ H_0: f = f_0 $, denoted $ E_0 $, is $$ E_0\left[ \phi(x) \right] = \alpha \quad \text{(level of significance)} \tag{9.5}$$
and,
$$ \phi(x) = \begin{cases} 1 & \text{if } f_1(x) > k f_0(x)\\ 0 & \text{if } f_1(x) < k f_0(x)\\ \end{cases} \tag{9.6}$$
If our test $ \phi $ satisfies (9.5) and (9.6) for some $ k $, then it is a most powerful test of $ H_0 $ against $ H_1 $ at level $ \alpha $. Here $ f_0(x) $ and $ f_1(x) $ are the two candidate densities evaluated at the observed data $ x $.
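As a concrete illustration of (9.5) and (9.6), here is a small Monte Carlo sketch. The densities $ N(0,1) $ under $ H_0 $ and $ N(1,1) $ under $ H_1 $ are my own assumed example, not from the original post; for them the likelihood ratio is monotone in $ x $, which makes the constant $ k $ easy to find.

```python
import math
import random

random.seed(0)

# Hypothetical example (assumed densities, not from the post):
# H0: X ~ N(0,1) with density f0, H1: X ~ N(1,1) with density f1.
def f0(x):
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

def f1(x):
    return math.exp(-(x - 1)**2 / 2) / math.sqrt(2 * math.pi)

alpha = 0.05

# For these densities the likelihood ratio f1/f0 = exp(x - 1/2) is increasing
# in x, so "f1(x) > k f0(x)" is equivalent to "x > c" with c = log(k) + 1/2.
# Taking c = 1.6449 (the standard normal 0.95 quantile) gives level 0.05.
c = 1.6449
k = math.exp(c - 0.5)

def phi(x):
    # The test (9.6): reject (phi = 1) when f1(x) > k f0(x).
    return 1 if f1(x) > k * f0(x) else 0

# Monte Carlo check of (9.5): E_0[phi(X)] should be close to alpha.
n = 200_000
size = sum(phi(random.gauss(0, 1)) for _ in range(n)) / n
print(round(size, 3))  # approximately 0.05
```

The same recipe works whenever the likelihood ratio is monotone: the abstract condition $ f_1(x) > k f_0(x) $ reduces to a cutoff on $ x $ itself.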
Case 1: $ 0 < \alpha < 1 $. Let us define a quantity $ G(c) $ as $ G(c) = P_0\left\{ f_1(x) > c f_0(x) \right\} $, where $ P_0 $ denotes probability under $ H_0 $. Since $ G(c) $ is computed when $ H_0 $ is true, the inequality need only be considered on the set where $ f_0(x) > 0 $, so $ G(c) = P_0\left( \frac{f_1(x)}{f_0(x)} > c \right) $. Hence $ 1 - G(c) $ is the CDF of $ \frac{f_1(x)}{f_0(x)} $, and we can let $ y = \frac{f_1(x)}{f_0(x)} $.
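To see the behavior of $ G(c) $ numerically, the sketch below estimates $ G(c) = P_0\{ f_1(X) > c f_0(X) \} $ by Monte Carlo under the same assumed densities $ N(0,1) $ and $ N(1,1) $ (my hypothetical choice, not from the post). It illustrates that $ G $ is non-increasing in $ c $ and tends to $ 0 $ as $ c \to \infty $, consistent with $ 1 - G $ being a CDF.

```python
import math
import random

random.seed(1)

# Hypothetical illustration (assumed densities, not from the post):
# under H0, X ~ N(0,1), and the likelihood ratio is Y = f1(X)/f0(X) = exp(X - 1/2).
n = 100_000
Y = [math.exp(random.gauss(0, 1) - 0.5) for _ in range(n)]

def G(c):
    # G(c) = P_0{ f1(X) > c f0(X) } = P_0{ Y > c }, estimated by Monte Carlo.
    return sum(y > c for y in Y) / n

# G is non-increasing in c and G(c) -> 0 as c grows,
# because 1 - G is the CDF of Y under H0.
values = [G(c) for c in (0.0, 0.5, 1.0, 2.0, 10.0, 100.0)]
print([round(v, 2) for v in values])
```

Note that $ G(0) = 1 $ here because the likelihood ratio $ Y = e^{X - 1/2} $ is strictly positive; the decay to $ 0 $ for large $ c $ is exactly the tail behavior the proof relies on when choosing the constant $ k $.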
Question: How does the proof arrive, in the case $ 0 < \alpha < 1 $, at defining $ G(c) = P_0\left\{ f_1(x) > c f_0(x) \right\} $?
Edit: Second, since $ 1 - G(c) $ is a CDF, why is $ G(c) $ right-continuous, and why does $ \lim_{c \to \infty} G(c) = 0 $?