I have a nonlinear minimization model with a single variable as follows:
$$\min_{x \in [0,h]} \ z\left(x\right) := a \, x^b + c \, \left(h-x\right)^d$$
For the parameters, we have $0 \le b, d \le 1$ and $a, c > 0$. The first derivative is
$$\frac{dz}{dx}=abx^{b-1}-cd{\left(h-x\right)}^{d-1}$$
and the second derivative is
$$\frac{d^2z}{dx^2}=ab\left(b-1\right)x^{b-2}+cd\left(d-1\right){\left(h-x\right)}^{d-2}$$
Perhaps we should investigate $\frac{d^2z}{dx^2}$. Since $0 \le b, d \le 1$ makes both terms nonpositive, $\frac{d^2z}{dx^2} \le 0$, so there is no need to solve $\frac{dz}{dx}=0$: the function is concave, and we can conclude that the optimal solution lies at one of the endpoints of $[0,h]$. Right?
Yes, this is correct. The minimum of a function $z \in C^2([0,h];\mathbb{R})$ which is strictly concave (i.e. satisfies $z'' < 0$) is attained at one of the endpoints $x = 0$ or $x = h$. This is your case when $0 \leq b, d \leq 1$ (and at least one of them is not $0$ or $1$). When both $b$ and $d$ are $0$ or $1$, your function is affine, so the conclusion still holds (although the function is no longer strictly concave).
This is the "mirror case" of the classical question of whether the maximum of a convex function is attained at its endpoints, which has already been answered here.
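The endpoint argument is easy to check numerically. A minimal sketch, with arbitrarily chosen illustrative parameter values (not from the question), samples $z$ on a grid over $[0,h]$ and confirms that the smallest endpoint value lies below every interior sample:

```python
import numpy as np

# Illustrative parameter values (any a, c > 0 and 0 <= b, d <= 1 will do).
a, c, b, d, h = 2.0, 3.0, 0.5, 0.7, 1.0

def z(x):
    """Objective z(x) = a*x^b + c*(h-x)^d from the question."""
    return a * x**b + c * (h - x)**d

xs = np.linspace(0.0, h, 1001)
vals = z(xs)

endpoint_min = min(vals[0], vals[-1])   # z(0) or z(h)
interior_min = vals[1:-1].min()         # best value strictly inside (0, h)

# Concavity (z'' <= 0) predicts the minimum sits at an endpoint.
print(endpoint_min < interior_min)
```

Here $z(0) = c\,h^d$ and $z(h) = a\,h^b$, and with these parameters the grid confirms that no interior point beats the better endpoint, consistent with the concavity argument.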