If I wish to minimize the cost function $$ J(x(\cdot),u(\cdot)) = \int_0^TL(x,u)dt $$
subject to the dynamics constraint $\dot{x}(t) = f(x(t),u(t))$ for all $t \in [0,T]$, many textbooks state that this constrained optimization problem can be reformulated as an unconstrained optimization problem:
$$ \tilde{J}(x(\cdot),u(\cdot),\lambda(\cdot)) = J(x(\cdot),u(\cdot)) + \int_0^T\lambda^T(t)(\dot{x}(t) - f(x(t),u(t)))dt $$
I can't quite see how this captures the constraint... Isn't it possible that the integral could be zero without the constraint actually being satisfied?
(for example, if $\dot{x}(t) - f(x(t),u(t)) = 1$ for $t \in [0, 0.5T]$ and $-1$ for $t \in [0.5T, T]$)
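To make my concern concrete, here is a quick numeric sketch (not from any textbook, just my own check): with a single fixed multiplier, say a constant $\lambda(t) \equiv c$, the penalty integral does vanish for this residual even though the constraint fails everywhere.

```python
import numpy as np

# Residual r(t) = xdot(t) - f(x(t), u(t)) from the example above:
# +1 on [0, T/2] and -1 on [T/2, T], so r(t) is never zero.
T = 1.0
t = np.linspace(0.0, T, 100_001)
r = np.where(t < T / 2, 1.0, -1.0)

# A fixed constant multiplier (the value is arbitrary).
c = 3.7

# Penalty term: integral of lambda(t) * r(t) over [0, T].
penalty = np.trapz(c * r, t)
print(penalty)  # ~ 0 up to quadrature error, yet r(t) != 0 for all t
```

So for this particular choice of $\lambda$ the added term contributes nothing, which is exactly why I don't see how the reformulation enforces the constraint.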
How does this form imply that the constraint is satisfied if we minimize this unconstrained cost $\tilde{J}$ over $\lambda$?
---
Edit: I know that in a standard Lagrangian optimization problem, taking the derivative with respect to $\lambda$ recovers the original equality constraint. In this case, since $\lambda(t)$ is a function, I presumably need a functional derivative rather than an ordinary one, but I am not sure where to begin with this.