In hypothesis tests where the alternative hypothesis is of type 'greater', the false negative rate ($\beta$) as a function of the false positive rate ($\alpha$) is given by:
$$\beta(\alpha) = F_Y(S_X^{-1}(\alpha))$$
where $X$ is the distribution of the test statistic under the null and $Y$ its distribution under the alternative, $F_Y$ is the CDF of $Y$, and $S_X$ is the survival function of $X$ (i.e. $S_X = 1-F_X$). Taking the derivative once shows that this is a decreasing function:
$$\beta'(\alpha) = f_Y(S_X^{-1}(\alpha))\frac{\partial S_{X}^{-1}(\alpha)}{\partial \alpha}$$
Here, $f_Y$ is the PDF of $Y$. The first factor is positive, while the second is negative since the inverse survival function is monotonically decreasing. Now, I have a strong hunch (given the basics of hypothesis testing, and verified with many simulations) that when $Y$ is just $X$ shifted by some amount to the right, $\beta(\alpha)$ is convex; when it is shifted to the left, $\beta$ is concave; and when $Y\overset{D}{=}X$ (i.e. when $X$ and $Y$ are identically distributed), it is a straight line. The last result is easy to see: when $Y\overset{D}{=}X$, i.e. when $F_Y=F_X$, we have $S_Y=S_X=1-F_X$, so
$$ \beta(\alpha) = 1-S_X(S_X^{-1}(\alpha)) = 1-\alpha. $$
But I can't prove the other two.
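For reference, the kind of simulation check I mean can be sketched as follows (assuming SciPy, with $X$ standard normal and $Y = X + c$):

```python
# Simulation check: beta(alpha) = F_Y(S_X^{-1}(alpha)) for X ~ N(0,1),
# Y = X + c, probing curvature via discrete second differences.
import numpy as np
from scipy.stats import norm

def beta(alpha, c):
    # norm.isf(alpha) = S_X^{-1}(alpha); norm.cdf(., loc=c) = F_Y
    return norm.cdf(norm.isf(alpha), loc=c)

alphas = np.linspace(0.01, 0.99, 99)
for c in (1.0, -1.0, 0.0):
    d2 = np.diff(beta(alphas, c), 2)          # discrete second derivative
    if np.all(d2 > 1e-12):
        print(f"c={c:+.0f}: convex")
    elif np.all(d2 < -1e-12):
        print(f"c={c:+.0f}: concave")
    else:
        print(f"c={c:+.0f}: linear (second differences ~ 0)")
```

For $c = 1$, $-1$ and $0$ this reports convex, concave and linear respectively.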
My attempt:
Differentiating the expression for $\beta(\alpha)$ twice, we get:
$$\frac{\partial^2\beta}{\partial \alpha^2} = f_Y(S_X^{-1}(\alpha))\frac{\partial^2 S_{X}^{-1}(\alpha)}{\partial \alpha^2}+f_Y'(S_X^{-1}(\alpha))\left(\frac{\partial S_{X}^{-1}(\alpha)}{\partial \alpha}\right)^{2}$$
This doesn't seem to lead anywhere; even the fact that it should be $0$ when $Y\overset{D}{=}X$ is no longer evident.
It's all about real analysis. We will use the following elementary lemma (the inverse function rule, as stated on Wikipedia): if $g$ is differentiable and strictly monotone with $g'(g^{-1}(y)) \ne 0$, then
$$ (g^{-1})'(y) = \frac{1}{g'(g^{-1}(y))}. $$
Now, in your problem, $\beta(\alpha) := F_Y(S_X^{-1}(\alpha))$ for all $\alpha \in [0, 1]$. Let's assume $X$ and $Y$ have differentiable densities $f_X$ and $f_Y$ that are positive on the relevant range, so that $S_X$ is strictly decreasing and the lemma applies.
Differentiating $\beta$, we have
$$ \beta'(\alpha) = f_Y(S_X^{-1}(\alpha))(S^{-1}_X)'(\alpha) = \frac{f_Y(S_X^{-1}(\alpha))}{S_X'(S_X^{-1}(\alpha))}=-\frac{f_Y(S_X^{-1}(\alpha))}{f_X(S_X^{-1}(\alpha))}, $$ where the first equality is by the chain rule, the second is by the above lemma, and the third is because $S_X=1-F_X$ and so $S_X' = -F_X'=-f_X$.
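As a quick sanity check of this expression (a sketch assuming SciPy, with the convention $\beta(\alpha) = F_Y(S_X^{-1}(\alpha))$ from the question, $X$ standard normal and $Y = X + c$):

```python
# Check beta'(alpha) = -f_Y(g)/f_X(g), g = S_X^{-1}(alpha), against a
# central finite difference, for X ~ N(0,1) and Y = X + c.
from scipy.stats import norm

c, alpha, h = 1.0, 0.3, 1e-6
g = norm.isf(alpha)                                  # g = S_X^{-1}(alpha)
analytic = -norm.pdf(g, loc=c) / norm.pdf(g)

beta = lambda a: norm.cdf(norm.isf(a), loc=c)        # beta = F_Y(S_X^{-1})
numeric = (beta(alpha + h) - beta(alpha - h)) / (2 * h)
print(abs(analytic - numeric))                       # small
```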
Differentiating again, we get
$$ \begin{split} \beta''(\alpha)&=-\frac{f_Y'(S_X^{-1}(\alpha))f_X(S_X^{-1}(\alpha))-f_Y(S_X^{-1}(\alpha))f_X'(S_X^{-1}(\alpha))}{(f_X(S_X^{-1}(\alpha)))^2}\,(S_X^{-1})'(\alpha) \\ &=\frac{f_Y'(S_X^{-1}(\alpha))-f_Y(S_X^{-1}(\alpha))\frac{f_X'(S_X^{-1}(\alpha))}{f_X(S_X^{-1}(\alpha))}}{(f_X(S_X^{-1}(\alpha)))^2}, \end{split} \tag{*} $$ where we've used the quotient rule, the chain rule, and the lemma above once again, which gives $(S_X^{-1})'(\alpha)=-1/f_X(S_X^{-1}(\alpha))$. Thus $$ \beta''(\alpha) \gtreqqless 0 \text{ iff } \frac{f_X'(S_X^{-1}(\alpha))}{f_X(S_X^{-1}(\alpha))} \lesseqqgtr \frac{f_Y'(S_X^{-1}(\alpha))}{f_Y(S_X^{-1}(\alpha))}, \tag{1} $$ from which you can get necessary and sufficient conditions for convexity / concavity / linearity of the $\beta$ function.
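The closed form for $\beta''$ can likewise be checked against a finite difference (again a sketch assuming SciPy, with $X$ standard normal and $Y = X + c$, so that $f'(x) = -(x-\text{loc})\,f(x)$):

```python
# Compare the closed-form second derivative of beta with a central second
# difference, for X ~ N(0,1) and Y = X + c.
from scipy.stats import norm

c, alpha, h = 0.7, 0.4, 1e-4
g = norm.isf(alpha)                                  # g = S_X^{-1}(alpha)

fX, fY = norm.pdf(g), norm.pdf(g, loc=c)
dfX, dfY = -g * fX, -(g - c) * fY                    # f_X'(g), f_Y'(g)
analytic = (dfY - fY * dfX / fX) / fX**2             # closed form for beta''

beta = lambda a: norm.cdf(norm.isf(a), loc=c)
numeric = (beta(alpha + h) - 2 * beta(alpha) + beta(alpha - h)) / h**2
print(abs(analytic - numeric))                       # small
```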
Case when $Y\overset{D}{=}X+c$
In particular, if $Y \overset{D}{=} X + c$ for some fixed $c \in \mathbb R$, then $f_Y^{(k)}(x) = f_X^{(k)}(x-c)$ for all $x \in \mathbb R$ and $k=0,1$. Thus
$$ \frac{f_Y'(S_X^{-1}(\alpha))}{f_Y(S_X^{-1}(\alpha))} = \frac{f_X'(S_X^{-1}(\alpha)-c)}{f_X(S_X^{-1}(\alpha)-c)}, $$
and by applying (1) with the substitution $x = S_X^{-1}(\alpha)-c$, we get
$$ \beta''(\alpha) \gtreqqless 0 \text{ iff } \frac{f_X'(c+x)}{f_X(c+x)} \lesseqqgtr \frac{f_X'(x)}{f_X(x)}. \tag{2} $$ We can then obtain necessary and sufficient conditions for convexity / concavity / linearity of $\beta$ (by requiring the corresponding inequality in (2) to hold for all $x$), simply by exploiting the structure of $f_X$. For example, we deduce that if $X \sim \mathcal N(\mu,\sigma^2)$, then $\beta$ is convex when $c \ge 0$, concave when $c \le 0$, and linear when $c = 0$.
We can even extend this to the general case where the distribution of $X$ is log-concave, i.e. $f_X(x) = e^{-V(x)}$ for some twice continuously differentiable $V:\mathbb R \rightarrow \mathbb R$ such that $V''(x) \ge 0$ for all $x\in \mathbb R$. The Gaussian example $X \sim \mathcal N(\mu,\sigma^2)$ is an instance of this setting; simply take $V(x) = \frac{1}{2\sigma^2}(x-\mu)^2 + \frac{1}{2}\log(2\pi\sigma^2)$.
Proof. For arbitrary $\alpha \in [0, 1]$, let $x$ denote the point at which (2) is evaluated. One computes $$ \frac{f_X'(x)}{f_X(x)}=\frac{-V'(x)e^{-V(x)}}{e^{-V(x)}}=-V'(x). $$ Thus, by (2), $\beta(\alpha)$ is convex in $\alpha$ iff $[-V'(c+x) \le -V'(x)\;\forall x]$ iff $[V'(c+x) \ge V'(x)\;\forall x]$ iff $c \ge 0$ (since $V'$ is non-decreasing because $V''$ is a nonnegative function by hypothesis). We proceed similarly in the other two cases (concavity and linearity). $\quad\quad\quad\Box$
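As a numerical illustration of the log-concave case beyond the Gaussian (a sketch assuming SciPy; the logistic density is log-concave):

```python
# Check the log-concave result with a logistic X: beta should be convex
# for c > 0 and concave for c < 0, via discrete second differences.
import numpy as np
from scipy.stats import logistic

alphas = np.linspace(0.01, 0.99, 99)
beta = lambda a, c: logistic.cdf(logistic.isf(a), loc=c)   # F_Y(S_X^{-1})

print(np.all(np.diff(beta(alphas, 2.0), 2) > 0))    # True: convex
print(np.all(np.diff(beta(alphas, -2.0), 2) < 0))   # True: concave
```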