I am studying Donald Kirk's book Introduction to Dynamic Programming.
Suppose some integral $\int g \, dt$ must be minimised, subject to some constraints. The Hamiltonian is $H = g + p^T f$, where $f$ is the system equation.
What are the necessary conditions in terms of the Hamiltonian here?
Helper questions:
Does a problem make any sense if the system equation is not specified explicitly?
What is the system equation $f$, really? It is easy to solve a problem when the system equation is specified explicitly, but otherwise I have a hard time figuring it out.
Typically we want to optimise the functional
$$ \min_{u} \int_0^1 g(x(t),t,u(t)) \, dt $$
subject to
$$ \dot x = f(x,t,u), \qquad x(0)=x_0 $$
(the bounds $0,1$ are taken for the sake of simplicity). Since the ODE can be regarded as a constraint linking $\dot x$ and $x$, we can introduce the Lagrangian
$$ L(x,\dot x,p,u,t)=g(x,t,u) + p^T \bigl(f(x,t,u) - \dot x\bigr)=H(x,p,t,u)-p^T\dot x \tag{1} $$
The idea is now to look for an unconstrained optimum of
$$ \int_0^1 L(x,\dot x,p,u,t) \, dt $$
over $x(t)$, $p(t)$ and $u(t)$. When we vary this functional we arrive at the Euler–Lagrange equation
$$ \frac {\partial L} {\partial z}- \frac {d}{dt}\frac {\partial L} {\partial \dot z}=0, $$
where $z$ can be $x$, $p$ or $u$. Applying this equation to (1) we obtain the necessary conditions
$$ \dot x = f = \frac {\partial H}{\partial p}, \qquad x(0)=x_0, $$
$$ \dot p = -\frac {\partial H}{\partial x}, \qquad p(1)=0, $$
$$ \frac {\partial H}{\partial u}=0. $$
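For concreteness, here is a minimal worked example (my own, not from Kirk's book) with the system equation specified explicitly: take $g = x^2 + u^2$ and $f = u$, so $H = x^2 + u^2 + p\,u$. The conditions above give
$$ \frac{\partial H}{\partial u} = 2u + p = 0 \;\Rightarrow\; u = -\tfrac{p}{2}, $$
$$ \dot x = -\tfrac{p}{2}, \qquad \dot p = -\frac{\partial H}{\partial x} = -2x, \qquad x(0)=x_0,\; p(1)=0. $$
Eliminating $p$ (note $p = -2\dot x$, so $p(1)=0$ means $\dot x(1)=0$) gives $\ddot x = x$ with $x(0)=x_0$, $\dot x(1)=0$, whose solution is
$$ x(t) = x_0\,\frac{\cosh(t-1)}{\cosh 1}, \qquad u(t) = \dot x(t) = x_0\,\frac{\sinh(t-1)}{\cosh 1}. $$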
Notice that this is not an initial value problem, since $p$ is specified at the end of the interval; this is why solving such a system (a two-point boundary value problem) is not an easy task. The boundary condition for $p$ is obtained from the variation conditions, and it differs for different types of functionals (e.g. free versus fixed final state).
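Numerically, such two-point boundary value problems can be handed to a BVP solver. As a sketch, take the illustrative choices $g = x^2 + u^2$ and $f = u$ (these are my assumptions, not from the question): then $\partial H/\partial u = 0$ gives $u = -p/2$, and the necessary conditions reduce to $\dot x = -p/2$, $\dot p = -2x$ with $x(0)=x_0$, $p(1)=0$, which `scipy.integrate.solve_bvp` can solve:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative problem (assumed, not from the question):
# minimise int_0^1 (x^2 + u^2) dt with dynamics x' = u.
# Stationarity dH/du = 2u + p = 0 gives u = -p/2, so the necessary
# conditions become the two-point boundary value problem
#   x' = -p/2,  p' = -2x,  x(0) = x0,  p(1) = 0.
x0 = 1.0

def rhs(t, y):
    x, p = y                          # state and costate
    return np.vstack([-p / 2.0, -2.0 * x])

def bc(ya, yb):
    # ya = (x(0), p(0)), yb = (x(1), p(1)); residuals must vanish
    return np.array([ya[0] - x0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
y_init = np.zeros((2, t.size))
y_init[0] = x0                        # crude guess: constant state, zero costate
sol = solve_bvp(rhs, bc, t, y_init)

# For this linear problem the exact solution is x(t) = x0 cosh(t-1)/cosh(1)
x_exact = x0 * np.cosh(sol.x - 1.0) / np.cosh(1.0)
```

The solver enforces the boundary conditions at both ends simultaneously, which is exactly what a forward (initial-value) integration cannot do here.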