In my optimal control course, we have done problems such as:
Find the optimal trajectory $x^*(t)$ for minimizing:
$J = \int_0^{t_f} \left[ \frac{1}{2}\dot{x}^2(t) + x(t)\dot{x}(t) + \dot{x}(t) \right] dt$
with various combinations of boundary conditions (fixed end time/fixed end point, free end time, free end point, etc.) using the Calculus of Variations.
Now, my HW says:
Find the optimal control $u^*(t)$ that minimizes the functional:
$J = \frac{1}{2} \int_0^2 \left[ x_1^2(t) + ... + u^2(t) \right] dt$
which is subject to:
$\dot{x}_1(t) = x_2(t), \quad \dot{x}_2(t) = -x_1(t) - x_2(t)$ with both boundary points fixed (not free).
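(Not part of the question itself, but for concreteness: here is a quick sketch of how I'd forward-simulate the stated state equations, exactly as written above, with a hand-rolled RK4 step. The function names, the initial condition, and the step size are just my own choices to see how the system evolves.)

```python
import numpy as np

def dynamics(x):
    # State equations as given: x1' = x2, x2' = -x1 - x2
    x1, x2 = x
    return np.array([x2, -x1 - x2])

def rk4_simulate(x0, t_final=2.0, dt=0.01):
    """Forward-simulate the state equations with a classical RK4 step."""
    n_steps = int(round(t_final / dt))
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        k1 = dynamics(x)
        k2 = dynamics(x + 0.5 * dt * k1)
        k3 = dynamics(x + 0.5 * dt * k2)
        k4 = dynamics(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x.copy())
    return np.array(traj)

# Example: start at x1 = 1, x2 = 0 and integrate over the horizon [0, 2]
traj = rk4_simulate([1.0, 0.0])
print(traj[-1])  # state at t = 2; the system is damped, so the state shrinks
```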
I'm not sure how to make the jump from what I did with the Calculus of Variations to optimal control problems where the trajectory is subject to differential-equation constraints like these. Can someone give me a general plan of attack for how these types of problems are solved? Thanks!