Suppose $X_1,\ldots,X_n$ is an i.i.d. $N(\theta,\sigma^2)$ sample, the test is $H_0: \theta \leq 0$ vs. $H_1: \theta > 0$, $\sigma^2$ is unknown, and $t_{n-1,\alpha}$ is the $100(1-\alpha)$th percentile of a t-distribution with $n-1$ degrees of freedom. I need to show that this test has size $\alpha$, and that it can be derived as a generalized LRT.
Edit:
I think I finally found the LRT statistic (which I guess is the first step)
\begin{align} \lambda(x_1,\ldots,x_n) &= \frac{\sup_{\theta_0\leq0,\sigma_0}L(\theta_0,\sigma_0\mid x_1,\ldots,x_n)}{\sup_{\theta,\sigma}L(\theta,\sigma\mid x_1,\ldots,x_n)} \\ &=\frac{\sup_{\theta_0\leq0,\sigma_0}\left(\frac{1}{\sqrt{2\pi\sigma_0^2}}\right)^n\exp\left(-\frac{1}{2\sigma_0^2}\sum\left(x_i-\theta_0\right)^2\right)}{\sup_{\theta,\sigma}\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n\exp\left(-\frac{1}{2\sigma^2}\sum\left(x_i-\theta\right)^2\right)} \\ &=\frac{\sup_{\sigma_0}\left(\frac{1}{\sqrt{2\pi\sigma_0^2}}\right)^n\exp\left(-\frac{1}{2\sigma_0^2}\sum x_i^2\right)}{\sup_{\sigma}\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n\exp\left(-\frac{1}{2\sigma^2}\sum\left(x_i-\bar x\right)^2\right)} \\ &=\frac{\left(\frac{1}{\sqrt{2\pi\hat\sigma_0^2}}\right)^n e^{-n/2}}{\left(\frac{1}{\sqrt{2\pi\hat\sigma^2}}\right)^n e^{-n/2}} \\ &=\left(\frac{\hat \sigma^2}{\hat \sigma_0^2}\right)^{n/2} \\ &=\left(\frac{\sum (x_i-\bar x)^2}{\sum x_i^2}\right)^{n/2} \end{align}
(The third equality plugs in the restricted maximizer $\theta_0=0$, which is valid when $\bar x>0$; when $\bar x\leq 0$ the numerator equals the denominator and $\lambda=1$.)
I also know that $H_0$ is rejected for small values of $\lambda$, i.e. when
$$\left(\frac{\sum (x_i-\bar x)^2}{\sum x_i^2}\right)^{n/2} < c $$ $$\frac{\sum (x_i-\bar x)^2}{\sum x_i^2} < c^{2/n}$$
but then how do I continue rearranging to show that the rejection region is $\bar X > t_{n-1,\alpha}\sqrt{S^2/n}$, and how do I show that the test has size $\alpha$?
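As a quick numerical sanity check of the target test (my own sketch in plain Python; the critical value $t_{9,0.05}\approx 1.8331$ is the tabulated quantile), simulating at the boundary $\theta=0$ the rule $\bar X > t_{n-1,\alpha}\sqrt{S^2/n}$ should reject about $100\alpha\%$ of the time:

```python
# Monte Carlo sanity check of the claimed test  xbar > t_{n-1,alpha} * sqrt(S^2/n).
# At the boundary theta = 0 of H0, the rejection rate should be close to alpha.
import random

random.seed(0)
n, alpha, reps = 10, 0.05, 100_000
crit = 1.8331  # tabulated t_{9,0.05}, i.e. the 95th percentile of t with 9 df

rejections = 0
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]    # theta = 0, sigma = 1
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)  # sample variance S^2
    if xbar > crit * (s2 / n) ** 0.5:
        rejections += 1

print(rejections / reps)  # should be close to alpha = 0.05
```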
Suppose $x=(x_1,x_2,\ldots,x_n)$ is the sample and $L(\theta,\sigma^2\mid x)$ is the likelihood function given $x$.
Then the likelihood ratio test statistic for testing $H_0$ vs $H_1$ is
\begin{align} \Lambda(x)&=\frac{\sup_{\theta\le 0,\sigma^2}L(\theta,\sigma^2\mid x)}{\sup\limits_{\theta,\sigma^2}L(\theta,\sigma^2\mid x)} \\&=\frac{L(\tilde\theta,\tilde\sigma^2\mid x)}{L(\hat\theta,\hat\sigma^2\mid x)}\,, \end{align}
where $(\hat\theta,\hat\sigma^2)$ is the (unrestricted) MLE of $(\theta,\sigma^2)$, and $(\tilde\theta,\tilde\sigma^2)$ is the restricted MLE when $\theta\le 0$.
Indeed, $\hat\theta=\bar x=\frac1n\sum\limits_{i=1}^n x_i$ and $\hat\sigma^2=\frac1n\sum\limits_{i=1}^n(x_i-\hat\theta)^2$.
Now argue that $$\tilde\theta=\begin{cases}\hat\theta&,\text{ if }\hat\theta\le 0 \\ 0&,\text{ if }\hat\theta>0\end{cases}\,,$$ so that $$\tilde\sigma^2=\frac1n\sum\limits_{i=1}^n(x_i-\tilde\theta)^2=\begin{cases}\hat\sigma^2&,\text{ if }\hat\theta\le 0 \\ \frac1n\sum\limits_{i=1}^n x_i^2&,\text{ if }\hat\theta>0\end{cases}$$
Simplify $\Lambda$ from here and reject $H_0$ for small values of $\Lambda$. When $\hat\theta\le 0$ we have $\Lambda=1$, and we trivially fail to reject $H_0$.
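Carrying out that simplification in the nontrivial case $\hat\theta>0$ (a sketch): using $\sum x_i^2=\sum(x_i-\bar x)^2+n\bar x^2$ and $\sum(x_i-\bar x)^2=(n-1)S^2$,
$$\Lambda^{2/n}=\frac{\sum(x_i-\bar x)^2}{\sum x_i^2}=\frac{1}{1+\dfrac{n\bar x^2}{\sum(x_i-\bar x)^2}}=\left(1+\frac{T^2}{n-1}\right)^{-1},\qquad T=\frac{\bar x}{\sqrt{S^2/n}}\,.$$
Since $\Lambda$ is decreasing in $T$ on $\{\bar x>0\}$ and $\Lambda=1$ when $\bar x\le 0$, rejecting for small $\Lambda$ is equivalent to rejecting when $T>k$. Finally, $\sup_{\theta\le 0}P_\theta(T>k)=P_{\theta=0}(T>k)$, and under $\theta=0$ we have $T\sim t_{n-1}$, so choosing $k=t_{n-1,\alpha}$ gives size $\alpha$ and the rejection region $\bar X>t_{n-1,\alpha}\sqrt{S^2/n}$.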