I have a question about why a certain test does not depend on $\theta_1$. Suppose I have:
$$(X_i)_{i=1}^{n}, X_i \stackrel{iid}{\sim} N(\theta, 1), \hbox{ that is } X_i \sim g(x_i, \theta) = (2\pi)^{-1/2} \exp[\frac{-1}{2}(x_i - \theta)^2]$$
The joint distribution is given by $f(x;\theta) = (2\pi)^{-n/2} \exp[\frac{-1}{2}\sum_{i = 1}^{n}(x_i - \theta)^2] $
Suppose I have the following hypothesis test, where $\theta_1 > 0$ is fixed but arbitrary (remark: always positive):
$$H_0: \theta = 0\quad \hbox{vs}\quad H_1: \theta = \theta_1$$
By the Neyman–Pearson theorem, there is some $k>0$ such that
$$\phi(x) = \left\{ \begin{array}{lr} 1 & : f(x;0)k < f(x;\theta_1) \\ 0 & : f(x;0)k > f(x;\theta_1) \end{array} \right.$$
is a UMP test in the class of tests with size $\alpha = E_0[\phi(X)]$.
I want to understand why this test does not depend on $\theta_1$.
Remark: $\theta_1$ is positive and fixed but arbitrary. That is, I want to understand whether, if I change the parameter $\theta_1$, I still get exactly the same test. Notice that:
$$k < \frac{f(x;\theta_1)}{f(x;0)} \Longleftrightarrow \ln(k) < \frac{n}{2}(2 \theta_1 \bar{x} - \theta_1^{2}) $$
In addition, we can see that
\begin{equation} \label{eq1} \begin{split} k < \frac{f(x;\theta_1)}{f(x;0)} & \Longleftrightarrow \frac{\ln(k)}{n} < \frac{1}{2}(2 \theta_1 \bar{x} - \theta_1^{2}) = \theta_1\bar{x} - \frac{\theta_1^{2}}{2} \\ & \Longleftrightarrow \frac{\ln(k)}{n} + \frac{\theta_1^{2}}{2} < \theta_1\bar{x}\\ & \stackrel{\theta_1 > 0}{\Longleftrightarrow } \frac{\ln(k)}{n\theta_1} + \frac{\theta_1}{2} < \bar{x}\\ &\Longleftrightarrow \sqrt{n}\left[\frac{\ln(k)}{n\theta_1} + \frac{\theta_1}{2}\right] < \sqrt{n}\bar{x}\\ & \Longleftrightarrow \tilde{k}(\theta_1) < \sqrt{n}\bar{x} \end{split} \end{equation}
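As a sanity check, the log-likelihood-ratio identity above can be verified numerically; here is a quick sketch with NumPy (the sample size and the value of $\theta_1$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta1 = 20, 1.5
x = rng.normal(0.0, 1.0, size=n)  # any sample works; the identity is algebraic

# log f(x; theta) for the joint N(theta, 1) density
def loglik(theta):
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((x - theta) ** 2)

lhs = loglik(theta1) - loglik(0.0)                      # ln[f(x; theta1) / f(x; 0)]
rhs = 0.5 * n * (2 * theta1 * x.mean() - theta1 ** 2)   # (n/2)(2*theta1*xbar - theta1^2)
assert np.isclose(lhs, rhs)
```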
Then $\phi(x) = \left\{ \begin{array}{lr} 1 & : \tilde{k}(\theta_1) < \sqrt{n}\bar{x} \\ 0 & : \tilde{k}(\theta_1) > \sqrt{n}\bar{x} \end{array} \right.$, with $\alpha = E_0[\phi(X)]$. Notice that, under $H_0$, $Z = \sqrt{n} \bar{X} \sim N(0,1)$. So, to fully specify the test, we have to determine the constant $\tilde{k}(\theta_1)$, which we do by forcing the test to have size $\alpha$. Under the null hypothesis, we have:
\begin{equation} \begin{split} \alpha = E_{0}[\phi(X)] & \Longleftrightarrow P_{0}[\tilde{k}(\theta_1) < Z] = \alpha \\ & \Longleftrightarrow P_{0}[Z \leq \tilde{k}(\theta_1) ] = 1 -\alpha \\ & \Longleftrightarrow F_Z (\tilde{k}(\theta_1)| \theta = 0) = 1 -\alpha\\ & \Longleftrightarrow \tilde{k}(\theta_1) = F_{Z}^{-1} ( 1- \alpha| \theta = 0) \end{split} \end{equation}
And here is my problem, because $\tilde{k}(\theta_1)$ depends on the parameter $\theta_1$. For example, suppose $\alpha = 0.01$; then we have
$$\tilde{k}(\theta_1) = \sqrt{n}\left[\frac{\ln(k)}{n\theta_1} + \frac{\theta_1}{2}\right] = 2.33$$
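The value $2.33$ can be checked by simulation; the Monte Carlo sketch below (assuming NumPy/SciPy) uses the exact quantile $\Phi^{-1}(0.99) \approx 2.326$, of which $2.33$ is the rounded value, and confirms that rejecting when it is below $\sqrt{n}\bar{x}$ occurs about $1\%$ of the time under $\theta = 0$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, n, reps = 0.01, 16, 200_000
c = norm.ppf(1 - alpha)  # ≈ 2.326, i.e. the 2.33 above

x = rng.normal(0.0, 1.0, size=(reps, n))  # samples under H0: theta = 0
z = np.sqrt(n) * x.mean(axis=1)           # sqrt(n) * xbar ~ N(0, 1) under H0
rate = np.mean(c < z)                     # empirical rejection rate
assert abs(rate - alpha) < 0.002          # close to the nominal size alpha
```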
In other words, if I take some other $\theta_1^{'}>0$, I get another constant $\tilde{k}(\theta_1^{'})$. And consequently, I will have another test $\phi^{'}(x) = \left\{ \begin{array}{lr} 1 & : \tilde{k}(\theta_1^{'}) < \sqrt{n}\bar{x} \\ 0 & : \tilde{k}(\theta_1^{'}) > \sqrt{n}\bar{x} \end{array} \right.$
Can I adjust $\tilde{k}(\theta_1)$ by resizing the sample $n$? That is, if I want $\tilde{k}(\theta_1)$ not to depend on the parameter $\theta_1$, should I just change $n$? But this does not seem to make much sense. Why am I asking this? Because in other problems, I really need to vary the parameter $\theta_1 > 0$ and ensure that the test does not depend on $\theta_1$. For example: $$H_0: \theta \leq 0\quad \hbox{vs}\quad H_1: \theta > 0.$$
Your test, a priori defined as $$ \phi(x) = \left\{ \begin{array}{lr} 1 & : f(x;0)k < f(x;\theta_1) \\ 0 & : f(x;0)k > f(x;\theta_1) \end{array} \right.$$ has $k$ depending on $\theta_1$, but its rejection criterion is equivalent to $$\tilde{k}(\theta_1) := \sqrt{n}\left[\frac{\ln(k)}{n\theta_1} + \frac{\theta_1}{2}\right] < \sqrt{n}\overline{x},$$ where $k$ is chosen so that the left-hand side equals $2.33$ for $\alpha=0.01$. The key point is that for any other $\theta_1>0$, you may choose $k$ so that $\tilde{k}(\theta_1)=2.33$. This is possible because the left-hand side is monotone, hence invertible, in $k$ for any fixed $\theta_1>0$. Then the statement you want to make is that "the test with rejection region $\{2.33<\sqrt{n}\overline{x}\}$ is UMP at level $\alpha=0.01$ for testing $H_0:\theta\leq 0$ versus $H_1:\theta>0$". This is true precisely because $f(x;\theta_1)/f(x;0)$ is monotone in $\overline{x}$. See the definition of monotone likelihood ratio and the Karlin–Rubin theorem for a general formulation of this.
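To illustrate this numerically, here is a short sketch (assuming NumPy/SciPy): for several values of $\theta_1$ we solve the size constraint $\tilde{k}(\theta_1) = \Phi^{-1}(1-\alpha)$ for $\ln(k)$ and back-substitute; $\ln(k)$ varies with $\theta_1$, but the induced cutoff on $\sqrt{n}\bar{x}$ is always the same quantile, so the rejection region does not depend on $\theta_1$:

```python
import numpy as np
from scipy.stats import norm

alpha, n = 0.01, 25
c = norm.ppf(1 - alpha)  # z_{1-alpha} ≈ 2.326: the common cutoff on sqrt(n)*xbar

for theta1 in (0.5, 1.0, 2.0):
    # Solve sqrt(n) * [ln(k)/(n*theta1) + theta1/2] = c for ln(k):
    log_k = n * theta1 * (c / np.sqrt(n) - theta1 / 2)
    # Back-substitute: the induced cutoff k_tilde equals c for every theta1
    k_tilde = np.sqrt(n) * (log_k / (n * theta1) + theta1 / 2)
    assert np.isclose(k_tilde, c)
# log_k differs across theta1, yet the rejection region {c < sqrt(n)*xbar} is fixed.
```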