Consider the following classical statistical test setup:
Suppose we suspect a coin is unfair in the sense that heads occurs more frequently than tails. Thus we set $H_0: p\leq\frac12$ as null hypothesis and $H_1:p>\frac12$ as alternative, where $p$ is the probability of heads.
Also let $X$ count the occurrences of heads when tossing the coin $n$ times. Given $n$ and a significance level $\alpha$, we get the one-tail condition \begin{equation} (1)\quad P(X\geq k)\leq\alpha \end{equation} where $X$ has a $(n,p)$-binomial distribution with $p\leq\frac12$ (so the left-hand side is the probability of rejecting $H_0$ when it is actually true).
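The left-hand side of $(1)$ is easy to evaluate numerically; here is a minimal sketch using only the Python standard library (the values $n=20$, $k=15$ are illustrative, not taken from the question):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative example: n = 20 tosses, p = 1/2, reject when X >= 15.
alpha_error = binom_tail(20, 0.5, 15)
print(alpha_error)  # ≈ 0.0207, i.e. below a significance level of 5%
```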
To solve $(1)$ for $k$, it is common (schoolbook) practice to set $p=\frac12$ and solve $(1)$ by inversion. But this doesn't seem correct, as we only know $p\leq\frac12$.
So wouldn't it be better to use a distribution for "$k$ wins out of $n$ with success probability $\leq\frac12$", and which would that appropriate distribution be?
I want to be more precise: In a more general context the maximum $\alpha$ error could be defined as \begin{equation} \alpha_{\max}:=\max_{\theta\in\Theta_0}\{P_\theta(T(X_1,\dotsc,X_n)\in K)\} \end{equation} where $T$ is some kind of test statistic, in our case counting the number of heads in a sample $X_1,\dotsc,X_n$; $\Theta$ is the parameter space in question (our parameter is $p$, playing the role of $\theta$), and $\Theta_0$ is the subspace corresponding to the null hypothesis, i.e. \begin{equation} H_0: \theta\in\Theta_0,\quad H_1:\theta\in\Theta\setminus\Theta_0; \end{equation} and finally $K$ is the region of rejection of $H_0$, i.e. \begin{equation} H_0\text{ is rejected iff }T(X_1,\dotsc,X_n)\in K. \end{equation}
So in particular we have $\Theta=[0,1]$, $\Theta_0=[0,\frac12]$, yielding \begin{equation} \alpha_{\max}=\max_{p\leq\frac12}\sum_{i=k}^n P_p(X=i), \end{equation}
which should now be $\leq$ a given significance level.
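One can check numerically what this maximum looks like: evaluating the tail sum over a grid of $p\in[0,\frac12]$ shows it to be increasing in $p$, so the maximum is attained at the boundary $p=\frac12$. A small sketch (the values $n=20$, $k=15$ and the grid resolution are illustrative assumptions):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Evaluate the tail probability over a grid of p in Theta_0 = [0, 1/2].
n, k = 20, 15
grid = [0.5 * j / 50 for j in range(51)]
tails = [binom_tail(n, p, k) for p in grid]

# The tail probability is non-decreasing in p ...
assert tails == sorted(tails)
# ... so the maximum over Theta_0 sits at the boundary p = 1/2.
alpha_max = max(tails)
print(alpha_max == binom_tail(n, 0.5, k))  # True
```

This is exactly why the schoolbook practice of plugging in $p=\frac12$ gives the correct $\alpha_{\max}$.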
[Definitions from http://www.wiwi.uni-muenster.de/05/download/studium/advancedstatistics/ss09/kapitel_6.pdf - couldn't find equivalent in English]
Both null hypotheses are possible. The crucial point is the definition of the alternative hypothesis, $H_1$. This definition is unique, as you can see in the table below. $$\begin{array}{|c|c|c|} \hline &H_0 &H_1 \\ \hline \texttt{two-tailed} & p=p_0 &p\neq p_0 \\ \hline \texttt{right-tailed} & p=p_0 \ \ \text{or } \ \ p\leq p_0 &p>p_0 \\ \hline \texttt{left-tailed} & p=p_0 \ \ \text{or } \ \ p\geq p_0 &p<p_0 \\ \hline \end{array}$$
For the right-tailed case, note that the tail probability $\sum_{i=c}^n B(i|p,n)$ is increasing in $p$, so its maximum over $p\leq p_0$ is attained at $p=p_0$; both formulations of $H_0$ therefore lead to the same test. You evaluate the smallest value of $c$ such that
$$\sum_{i=c}^n B(i| p_0,n)\leq \alpha$$
Then the critical range is $\{c, c+1, \ldots, n \}$.
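This inversion can be sketched in a few lines of Python (the choices $n=20$, $p_0=\frac12$, $\alpha=0.05$ are illustrative assumptions, not part of the answer):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_value(n, p0, alpha):
    """Smallest c with P(X >= c) <= alpha under p = p0."""
    for c in range(n + 1):
        if binom_tail(n, p0, c) <= alpha:
            return c
    return n + 1  # even X = n is not significant: never reject

# Illustrative example: n = 20, p0 = 1/2, alpha = 0.05.
c = critical_value(20, 0.5, 0.05)
print(c, list(range(c, 21)))  # prints: 15 [15, 16, 17, 18, 19, 20]
```

So here $H_0$ would be rejected iff at least 15 of the 20 tosses show heads.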