Let $X$ be a continuous random variable on $[0, 1]$ with density $$f(x \mid \theta) = (\theta + 1)\, x^{\theta}$$
(the factor $\theta + 1$ normalizes $x^{\theta}$ on $[0, 1]$). Let's take a sample consisting of a single observation from the given distribution, say $x_{1}$, and test the following hypotheses:
$$H_{0}: \theta = 0 \ \ \ \ \text{vs. } \ \ \ H_{1}: \theta = 1$$
The Neyman–Pearson lemma gives the likelihood ratio
$$\lambda(x) = \frac{L(\theta = 0 \mid x)}{L(\theta = 1 \mid x)} = \frac{1}{2x}$$
Thus, the rejection region consists of precisely those $x$ that satisfy $\frac{1}{2x} \leq a$, i.e. $x \geq \frac{1}{2a}$.
In order to find $a$, we calculate $$\mathbb{P}\left(\lambda(X) \leq a \mid H_{0}\right) = \mathbb{P}\left(X \geq \tfrac{1}{2a} \,\middle|\, H_{0}\right) = \int_{\frac{1}{2a}}^{1} {1_{[0, 1]}\, dx} = 1 - \frac{1}{2a}$$
Setting this equal to $\alpha$ gives $\frac{1}{2a} = 1 - \alpha$, so we reject the hypothesis if $x \geq 1 - \alpha$, where $\alpha$ is the significance level.
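As a sanity check, the size and power of the test that rejects $H_0$ when $x \geq 1 - \alpha$ can be verified by simulation. A minimal sketch (the value $\alpha = 0.05$ is just an illustrative choice; under $H_1$ the normalized density proportional to $x$ on $[0,1]$ is $2x$):

```python
import numpy as np

# Monte Carlo check of the test "reject H0 when x >= 1 - alpha".
# Under H0 (theta = 0) the density is 1 (uniform on [0, 1]);
# under H1 (theta = 1) the normalized density is 2x on [0, 1].
rng = np.random.default_rng(0)
alpha = 0.05        # illustrative significance level
n = 200_000         # number of simulated observations

# Size: fraction of H0 samples landing in the rejection region.
x_h0 = rng.uniform(0.0, 1.0, n)
size = np.mean(x_h0 >= 1 - alpha)

# Power: sample from density 2x by inverse transform
# (the CDF is x^2, so X = sqrt(U) with U uniform).
x_h1 = np.sqrt(rng.uniform(0.0, 1.0, n))
power = np.mean(x_h1 >= 1 - alpha)

print(size)   # close to alpha
print(power)  # close to 1 - (1 - alpha)**2
```

The empirical rejection rate under $H_0$ matches $\alpha$, confirming that the cutoff $1 - \alpha$ gives a test of the intended size.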
To me this sounds quite counter-intuitive: for large values of $\alpha$ the critical value $1 - \alpha$ sits near the lower end of the interval, so almost every observation leads to rejection, which I strongly doubt. Am I missing something crucial here?
It seems there is nothing wrong with that. For example, let $\alpha = 0.99$. This means we allow a large probability of type I error (we almost don't care about it), so we may end up with a large critical region. Indeed, the corresponding critical region is $$ x \geq 1 - \alpha = 0.01, $$ and almost any value of $x$ leads to rejection of $H_{0}$, as expected. As a result, we get a very powerful test at the expense of the significance level.
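The power claim can be made exact. Under $H_1$ the normalized density proportional to $x$ on $[0, 1]$ is $2x$, so the power of the test is
$$\mathbb{P}\left(X \geq 1 - \alpha \mid H_{1}\right) = \int_{1 - \alpha}^{1} 2x \, dx = 1 - (1 - \alpha)^{2},$$
which for $\alpha = 0.99$ equals $1 - (0.01)^{2} = 0.9999$: the test rejects a false $H_0$ almost surely, at the price of a 99% type I error rate under $H_0$.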