Lagrange multipliers in optimal control - regularity of constraints?

Assume we want to minimize the cost
\begin{align} J(u) &= \int_{0}^{T} l(x(t),u(t))\,dt \\ \text{s.t.}\quad &\dot{x}(t) = f(x(t),u(t)), \quad x(0)=x_0. \end{align}
To derive necessary conditions, many textbooks (e.g. *Calculus of Variations and Optimal Control Theory* by Daniel Liberzon) introduce a Lagrange multiplier $\lambda(t)$ and compute the variation of
\begin{equation}\mathcal{L}(u) = \int_{0}^{T} l(x(t),u(t))\,dt - \int_{0}^{T}\lambda(t)^\top\left[\dot{x}(t) - f(x(t),u(t))\right]dt. \tag{$\star$}\end{equation}
I understand that we may view the differential equation as a constraint $g(u)=0$ with
$$g\colon U\to Y,\quad u\mapsto \dot{x}(\cdot,u)-f(x(\cdot,u),u(\cdot)),$$
where $U$ and $Y$ are function spaces and $x(\cdot,u)$ indicates the implicit dependence of the state trajectory on $u$. We may then use Lagrange multipliers as described here to get to $(\star)$. What I am confused about is:

  • Wouldn't we need $u$ to be a regular point, i.e. $Dg(u)$ to be surjective, in order to apply Lagrange multipliers? But $g$ is by construction the zero map, since $x(\cdot,u)$ is defined as the solution of the ODE for the given $u$, so surjectivity fails, doesn't it? Also, introducing a constraint that is always satisfied seems a bit odd.
  • I have seen that we can verify regularity when we instead pose the problem in $U\times X$ (where $x\in X$; see *Optimization by Vector Space Methods* by Luenberger), and in that formulation the constraint seems intuitively reasonable too. But is there a justification for applying Lagrange multipliers to the problem posed in $U$ alone?
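
For concreteness, here is the regularity check in $U\times X$ as I understand it (a sketch, assuming $f$ is continuously differentiable and the relevant function spaces are chosen so that the linearized equation is solvable). Pose the constraint as
$$G\colon X\times U\to Y,\quad G(x,u)=\dot{x}-f(x,u),$$
whose derivative at $(x,u)$ acts on variations $(\delta x,\delta u)$ by
$$DG(x,u)(\delta x,\delta u)=\dot{\delta x}-f_x(x,u)\,\delta x-f_u(x,u)\,\delta u.$$
This map is surjective: given any $y\in Y$, take $\delta u=0$ and solve the linear ODE $\dot{\delta x}=f_x(x,u)\,\delta x+y$, $\delta x(0)=0$, which has a solution by standard ODE theory. So every feasible $(x,u)$ is a regular point of $G$ and the multiplier theorem applies in $U\times X$; my question is whether the reduced formulation in $U$ can be justified directly.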