We have a single observation $X_1$ with density $$f_{\theta}(x)=\frac{\theta}{(x+\theta)^2}$$ for $x\geq 0$ and $f_{\theta}(x)=0$ otherwise. Here we assume that $\theta>0.$ We want to test the hypothesis $H_0:\theta = \theta_0$ vs. $H_1:\theta = \theta_1$, where $\theta_0<\theta_1.$
For this, we write the likelihood ratio: $$\Lambda(X_1,H_1,H_0)=\frac{\mathcal{L}(\theta_1;X_1)}{\mathcal{L}(\theta_0;X_1)} = \frac{\theta_1}{\theta_0}\cdot \left(\frac{x_1+\theta_0}{x_1+\theta_1}\right)^2.$$ Its derivative with respect to $x_1$ is $$\frac{\theta_1}{\theta_0}\cdot\frac{2\left(x_1+\theta_0\right)\left(\theta_1-\theta_0\right)}{\left(x_1+\theta_1\right)^3}>0$$ (positive since $\theta_1>\theta_0$), so the ratio is increasing as a function of $x_1.$
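As a quick numerical sanity check of this monotonicity (a sketch only; the values $\theta_0=1$, $\theta_1=3$ are arbitrary and not from the derivation):

```python
import numpy as np

# Illustrative parameter values (any 0 < theta0 < theta1 works)
theta0, theta1 = 1.0, 3.0

def likelihood_ratio(x):
    """Lambda(x) = (theta1/theta0) * ((x + theta0)/(x + theta1))**2."""
    return (theta1 / theta0) * ((x + theta0) / (x + theta1)) ** 2

x = np.linspace(0.0, 50.0, 1000)
ratios = likelihood_ratio(x)
assert np.all(np.diff(ratios) > 0)  # strictly increasing on the grid
```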
This means that the likelihood ratio test function is
$$ \phi_{\alpha}(x_1) = \begin{cases} 1 & x_1>c_{\alpha} \\ \gamma_{\alpha} & x_1 =c_{\alpha} \\ 0 & x_1 <c_{\alpha} \end{cases} $$ where we need to find $c_{\alpha}$ and $\gamma_{\alpha}.$ Since the test rejects for $X_1>c_{\alpha}$, a test of size $\alpha$ must satisfy $$\alpha = E_{\theta_{0}}[\phi_{\alpha}(X_1)] = P_{\theta_0}(X_1>c_{\alpha}) + \gamma_{\alpha}P_{\theta_0}(X_1=c_{\alpha}).$$
Since the distribution is continuous, $P_{\theta_0}(X_1=c_{\alpha})=0$, so we may take $\gamma_{\alpha}=0.$ From $P_{\theta_0}(X_1\leq c) = \int_{0}^{c}\frac{\theta_0}{(\theta_0+x)^2}\,dx = \frac{c}{c + \theta_0}$ we get $P_{\theta_0}(X_1>c_{\alpha}) = \frac{\theta_0}{c_{\alpha}+\theta_0},$ and setting this equal to $\alpha$ gives $c_{\alpha} =\frac{(1-\alpha)\theta_0}{\alpha}.$
By the Neyman–Pearson lemma, this gives us the most powerful test at level $\alpha$:
$$ \phi_{\alpha}(x_1) = \begin{cases} 1 & x_1>\frac{(1-\alpha)\theta_0}{\alpha} \\ 0 & \text{otherwise.} \end{cases} $$
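The size of this test can be checked by simulation: inverting the CDF $F_{\theta}(x)=\frac{x}{x+\theta}$ gives $X=\theta U/(1-U)$ for $U\sim\mathrm{Unif}(0,1)$. A minimal Monte Carlo sketch (the values $\theta_0=2$, $\alpha=0.05$ are illustrative), using the threshold $c_\alpha=(1-\alpha)\theta_0/\alpha$ obtained from setting $P_{\theta_0}(X_1>c_\alpha)=\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, alpha = 2.0, 0.05            # illustrative values
c = (1 - alpha) * theta0 / alpha     # threshold with P_{theta0}(X1 > c) = alpha

# Inverse-CDF sampling: F_theta(x) = x/(x+theta)  =>  X = theta*U/(1-U)
u = rng.uniform(size=1_000_000)
x = theta0 * u / (1 - u)

rejection_rate = np.mean(x > c)      # should be close to alpha
```

With $10^6$ samples the empirical rejection rate should agree with $\alpha$ to within a few multiples of $\sqrt{\alpha(1-\alpha)/10^6}\approx 2\times 10^{-4}$.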
I finally compute the Type II risk, which is $$P_{\theta_1}(\phi_{\alpha}(X_1) = 0)= P_{\theta_1}\left(X_1\leq \frac{(1-\alpha)\theta_0}{\alpha}\right) = \frac{(1-\alpha)\theta_0}{(1-\alpha)\theta_0 + \alpha\theta_1}.$$
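The Type II risk can be verified the same way by sampling under $\theta_1$ (again a sketch; the values $\theta_0=2$, $\theta_1=5$, $\alpha=0.05$ are illustrative, and the closed form being checked is $\beta=(1-\alpha)\theta_0/((1-\alpha)\theta_0+\alpha\theta_1)$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, theta1, alpha = 2.0, 5.0, 0.05   # illustrative values
c = (1 - alpha) * theta0 / alpha         # threshold of the level-alpha test

# Sample from f_{theta1} via the inverse CDF: X = theta1*U/(1-U)
u = rng.uniform(size=1_000_000)
x = theta1 * u / (1 - u)

beta_mc = np.mean(x <= c)                # Monte Carlo estimate of the Type II risk
beta_exact = (1 - alpha) * theta0 / ((1 - alpha) * theta0 + alpha * theta1)
```

Here `beta_exact` equals $P_{\theta_1}(X_1\le c)=c/(c+\theta_1)$, and the Monte Carlo estimate should match it up to sampling error.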
Are these calculations correct?