Question
Suppose I am interested in the following problem: \begin{align} H_0 : \ \theta \geq \theta_0 \ \text{versus} \ H_1 : \ \theta < \theta_0 \end{align} If $X \sim B(\theta,1)$, then $-\sum_{i=1}^n \ln(X_i) \sim \text{Gamma}(n,1/\theta)$, and the critical region for this hypothesis test is \emph{given} as: \begin{align} -\sum_{i=1}^n \ln(x_i) &> \theta_0^{-1}G_n^{-1}(1-\alpha) \end{align} where $G_n^{-1}(p)$ is the quantile of order $p$ of a $\text{Gamma}(n,1)$ distribution.
I am asked to find explicit expressions for the probability of type 1 error and the probability of type 2 error.
Partial solution
I've written the critical (test) function $\phi(x)$:
\begin{align} \phi(x) = \begin{cases} 1 \ \text{if} \ -\sum_{i=1}^n \ln(X_i) > \theta_0^{-1}G_n^{-1}(1-\alpha) \\ 0 \ \text{if} \ -\sum_{i=1}^n \ln(X_i) < \theta_0^{-1}G_n^{-1}(1-\alpha) \\ \gamma \ \text{if} \ -\sum_{i=1}^n \ln(X_i) = \theta_0^{-1}G_n^{-1}(1-\alpha) \end{cases} \end{align} with $\gamma$ a probability.
The power function of a test is then defined as: \begin{align} \pi(\theta) = E_\theta(\phi(X)) \quad \forall \theta \in \Theta \end{align}
The probability of committing a type I error is $\alpha$ because: \begin{align} \alpha &= E_{\theta_0}(\phi(X)) \\ &= 1 \times P_{\theta_0}\Big(-\sum_{i=1}^n \ln(X_i) > \theta_0^{-1}G_n^{-1}(1-\alpha)\Big) + \gamma \times P_{\theta_0}\Big(-\sum_{i=1}^n \ln(X_i) = \theta_0^{-1}G_n^{-1}(1-\alpha)\Big) \end{align}
I have noted that, since $\theta_0\big(-\sum_{i=1}^n \ln(X_i)\big) \sim \text{Gamma}(n,1)$ under $\theta_0$: \begin{align} P_{\theta_0}\Big(-\sum_{i=1}^n \ln(X_i) > \theta_0^{-1}G_n^{-1}(1-\alpha)\Big) = P\Big(\theta_0\big(-\sum_{i=1}^n \ln(X_i)\big) > G_n^{-1}(1-\alpha)\Big) = \alpha \end{align} Therefore I found an explicit expression for the probability of a type I error: \begin{align} P_{\theta_0}\Big(-\sum_{i=1}^n \ln(X_i) > \theta_0^{-1}G_n^{-1}(1-\alpha)\Big) = \alpha \end{align} This assumes that $P\big(-\sum_{i=1}^n \ln(X_i) = \theta_0^{-1}G_n^{-1}(1-\alpha)\big) = 0$, which is a reasonable assumption to make: the distribution is continuous, so the probability of it taking any particular value is $0$.
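As a sanity check on this level-$\alpha$ claim, here is a minimal Monte Carlo sketch (in Python rather than R, so it runs with the standard library alone). The values $n=20$, $\theta_0=2$, $\alpha=0.05$ and the seed are arbitrary illustrative choices; the $\text{Gamma}(n,1)$ quantile is estimated empirically rather than read from a table.

```python
import math
import random

random.seed(1)
n, theta0, alpha = 20, 2.0, 0.05

# A sum of n standard exponentials is Gamma(n, 1); use that to estimate
# the quantile q = G_n^{-1}(1 - alpha) empirically.
draws = sorted(sum(-math.log(random.random()) for _ in range(n))
               for _ in range(200_000))
q = draws[int((1 - alpha) * len(draws))]

# Simulate the test at theta = theta0: since F(x) = x^theta on (0, 1),
# inverse-cdf sampling gives X = U**(1/theta), hence -log(X) = -log(U)/theta.
trials = 200_000
rejections = sum(
    1 for _ in range(trials)
    if sum(-math.log(random.random()) / theta0 for _ in range(n)) > q / theta0
)
print(round(rejections / trials, 3))  # should be close to alpha = 0.05
```

The rejection rule fires when $\theta_0\big(-\sum\ln X_i\big)$ exceeds the $\text{Gamma}(n,1)$ quantile, so the observed rejection fraction should agree with $\alpha$ up to Monte Carlo error.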
Nevertheless, I don't know how to compute the probability of a type II error, which is defined as: \begin{align} \beta(\theta_1) = 1 - E_{\theta_1}(\phi(X)) \end{align}
N.B.: I am asking this question for homework, so I am looking for hints rather than full solutions.
Edit:
Finding the power function:
Actually, based on this I've found that, for any $\theta$:
\begin{align} \pi(\theta) = P_\theta\Big(-\sum \ln(X_i) > \theta_0^{-1}G^{-1}_n(1-\alpha) \Big) &= 1 - P_\theta\Big(-\sum \ln(X_i) < \theta_0^{-1}G_{n}^{-1}(1-\alpha) \Big) \\ &= 1 - G_{n,\theta^{-1}}\Big(\theta_0^{-1}G_n^{-1}(1-\alpha)\Big) \end{align}
where $G_{n,\theta^{-1}}$ is the cdf of a $\text{Gamma}(n,1/\theta)$ distribution.
There is no closed-form expression for the cdf of a Gamma distribution, but I can plot the power function for a sample size $n=20$, $\theta_0=2$ and $\alpha=0.05$ with the following R code (note that pgamma's third argument is a rate, so the scale $1/\theta$ corresponds to rate = theta):
theta <- seq(0.01, 4, by = 0.01)
plot(theta, 1 - pgamma(qgamma(0.95, 20) / 2, 20, rate = theta), type = "l")
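This also answers the type II error question, since $\beta(\theta_1) = 1 - \pi(\theta_1)$ for any $\theta_1$ in the alternative. A minimal Monte Carlo sketch in Python (standard library only; the $\theta$ grid, $n=20$, $\theta_0=2$ and the seed are illustrative choices):

```python
import math
import random

random.seed(2)
n, theta0, alpha = 20, 2.0, 0.05
trials = 50_000

# Empirical Gamma(n, 1) quantile of order 1 - alpha.
draws = sorted(sum(-math.log(random.random()) for _ in range(n))
               for _ in range(trials))
q = draws[int((1 - alpha) * len(draws))]

def power(theta):
    """Estimate P_theta(-sum(log X_i) > q / theta0), with -log(X_i) ~ Exp(theta)."""
    hits = sum(
        1 for _ in range(trials)
        if sum(-math.log(random.random()) / theta for _ in range(n)) > q / theta0
    )
    return hits / trials

for theta in (0.5, 1.0, 1.5, 2.0):
    p = power(theta)
    print(f"theta = {theta}: power = {p:.3f}, type II error = {1 - p:.3f}")
```

The estimated power should decrease in $\theta$ and equal roughly $\alpha$ at $\theta = \theta_0$, which is consistent with the alternative $H_1 : \theta < \theta_0$: the test rejects more often the smaller $\theta$ is.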
Answer
As is known, $T=\sum_i \log X_i$ is a complete sufficient statistic (CSS) for $\theta$; thus, using a well-known theorem on regular exponential families, the critical region (rejection region) is of the form $\{T>c\}$, with $c$ determined by
$$\mathbb{P}[T>c\mid\theta_0]=\alpha$$
Knowing the distribution of $Y=-T$ (I prefer to write it as $\text{Gamma}(n;\theta)$), you get
$$\mathbb{P}[Y<k\mid\theta_0]=\alpha$$
Example: testing $H_0:\theta=1$ versus $H_1:\theta>1$ with the random sample
$$\{0.6;0.7;0.4\}$$
we know that, under $\theta_0=1$,
$$2Y=-2\sum_i\log X_i\sim\chi_{(6)}^2$$
thus the critical region is
$$-2\sum_i\log X_i<1.64$$
where $1.64$ is the quantile of order $0.05$ of a $\chi_{(6)}^2$ distribution.
As the given observations show,
$$-2\log(0.6\cdot0.7\cdot 0.4)=3.57>1.64$$
so we cannot reject $H_0$.
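The arithmetic here can be double-checked with a short Python snippet (standard library only; the $\chi_{(6)}^2$ critical value is estimated by simulation rather than read from a table):

```python
import math
import random

# Observed statistic from the sample {0.6, 0.7, 0.4}.
x = [0.6, 0.7, 0.4]
stat = -2 * sum(math.log(xi) for xi in x)
print(round(stat, 2))  # 3.57

# The critical value 1.64 is the lower 5% point of chi-squared(6);
# check it by simulation, using the fact that chi2_6 = 2 * Gamma(3, 1).
random.seed(3)
draws = sorted(2 * sum(-math.log(random.random()) for _ in range(3))
               for _ in range(200_000))
crit = draws[int(0.05 * len(draws))]
print(round(crit, 2))  # close to 1.64; since 3.57 > 1.64, H0 is not rejected
```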
To evaluate the type II error without fixing a specific value of $\theta_1$, you have to use the same procedure to find the power function; $\beta$ is its complement.
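Concretely, for the example above, this can be sketched using the fact that under a general $\theta$ the pivot $2\theta Y=-2\theta\sum_i\log X_i\sim\chi_{(6)}^2$:
$$\pi(\theta)=\mathbb{P}[2Y<1.64\mid\theta]=\mathbb{P}\big[2\theta Y<1.64\,\theta\big]=F_{\chi_{(6)}^2}(1.64\,\theta),\qquad \beta(\theta)=1-F_{\chi_{(6)}^2}(1.64\,\theta),$$
which gives $\pi(1)=\alpha=0.05$ at the null, as required.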