I want to solve the following optimization problem, where $x$ is a scalar variable.
$$ \min_x \; \tfrac12 a x^2 + b x \quad \text{subject to: } l \le x \le u $$
Since $a > 0$, this is a convex optimization problem. Without the boundary constraints the problem has the trivial solution $x = -\frac{b}{a}$. But how do the constraints affect the solution at the boundaries, and how does one obtain a rigorous analytical solution?
Context: I am trying to understand the derivations in this paper. The original problem, of which I have given a simplified version here, appears in eqns. (6)-(9). The authors discuss the projected gradient and give different values for it at the boundaries; I did not understand this part.
We can show by contradiction that $f(x) = \frac12 a x^2 + bx$ attains its minimum on the boundary when $-b/a$ is not in $[l, u]$. Suppose the minimizer $x_\ast$ lies in the interior, $l < x_\ast < u$. The gradient at $x_\ast$ is nonzero (the only stationary point is $-b/a$, which lies outside the interval), so by the linear approximation either $x_\ast + t$ or $x_\ast - t$ attains a strictly smaller value for small $t > 0$ (if the derivative is positive, take $x_\ast - t$). This contradicts the optimality of $x_\ast$, so the minimizer must be one of the endpoints.
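Putting this together gives the analytical solution the question asks for: the minimizer is the unconstrained solution projected (clipped) onto $[l, u]$. The case analysis below is my own summary of the argument above, not quoted from the paper:

$$
x^\ast = \min\!\bigl(\max\!\bigl(-\tfrac{b}{a},\, l\bigr),\, u\bigr)
=
\begin{cases}
l, & \text{if } -b/a < l,\\
-b/a, & \text{if } l \le -b/a \le u,\\
u, & \text{if } -b/a > u.
\end{cases}
$$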
In general, one can reason similarly in higher dimensions. If the gradient does not vanish at a feasible point $x$, and some feasible direction $p$ (i.e., $x + tp$ remains feasible for small $t > 0$) satisfies $\nabla f(x)^T p < 0$, then $x$ is not a local optimum, since moving to $x + tp$ decreases the objective further.
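If it helps to see this numerically, here is a minimal Python sketch of projected gradient descent on the scalar problem. The function names, step size, and example values are my own illustration, not taken from the paper; the projection step is just clipping onto $[l, u]$:

```python
import numpy as np

def project(x, l, u):
    """Euclidean projection onto the interval [l, u] (simple clipping)."""
    return np.clip(x, l, u)

def projected_gradient_descent(a, b, l, u, x0, step=0.1, iters=200):
    """Minimize 0.5*a*x**2 + b*x over [l, u]:
    take a gradient step, then project back onto the feasible set."""
    x = x0
    for _ in range(iters):
        grad = a * x + b                      # f'(x) = a*x + b
        x = project(x - step * grad, l, u)    # gradient step + projection
    return x

# Unconstrained minimizer -b/a = 3 lies outside [0, 1],
# so the iterates converge to the nearer endpoint u = 1.
print(projected_gradient_descent(a=2.0, b=-6.0, l=0.0, u=1.0, x0=0.5))  # ~1.0
```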
Ah! One more point: since the problem is convex, a local optimum is necessarily the global optimum.
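Finally, regarding the projected gradient the question mentions: a common convention for box constraints (I cannot confirm it is exactly the paper's, but it is standard in gradient projection methods) zeroes out the part of the gradient that points outside the feasible set at the boundary:

$$
g^P(x) =
\begin{cases}
f'(x), & l < x < u,\\
\min\bigl(0,\, f'(x)\bigr), & x = l,\\
\max\bigl(0,\, f'(x)\bigr), & x = u.
\end{cases}
$$

With this convention, $x$ is optimal exactly when $g^P(x) = 0$: at $x = l$ a nonnegative derivative does not contradict optimality (the only feasible direction increases the objective), and symmetrically a nonpositive derivative is fine at $x = u$. This would explain why the authors assign different values to the projected gradient at the boundaries.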