Practical question: why solve an optimal control problem with the HJB equation rather than the state/adjoint approach?


I have a question about the HJB equation (https://en.wikipedia.org/wiki/Hamilton%E2%80%93Jacobi%E2%80%93Bellman_equation).

One could solve an optimal control problem using the state/adjoint (Lagrange multiplier) approach, where the state and adjoint variables are used to compute the gradient with respect to the control variables, and then apply gradient descent to find a minimizing solution. (Gradient descent will not converge to a maximum; it could in principle stall at a saddle point, but I believe that for physics-based problems saddle points are nonphysical.) This seems pretty straightforward to me.
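To make the state/adjoint approach concrete, here is a minimal sketch for a toy discrete-time linear-quadratic problem (the dynamics, weights, and step size are illustrative choices, not from the question): a forward rollout of the state, a backward sweep for the adjoint variables, the resulting gradient with respect to the controls, and plain gradient descent.

```python
import numpy as np

# Toy problem (illustrative values): minimize
#   J(u) = sum_t (x_t^2 + u_t^2) + x_T^2
# subject to the scalar linear dynamics x_{t+1} = a*x_t + b*u_t.
a, b = 0.9, 0.5
T, x0 = 20, 1.0

def rollout(u):
    """Forward pass: integrate the state equation for a given control."""
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    return x

def cost(u):
    x = rollout(u)
    return np.sum(x[:-1]**2 + u**2) + x[-1]**2

def gradient(u):
    """Adjoint (Lagrange multiplier) pass: one backward sweep gives
    the full gradient dJ/du without any extra forward solves."""
    x = rollout(u)
    lam = np.empty(T + 1)
    lam[T] = 2 * x[T]                  # terminal condition from x_T^2
    for t in range(T - 1, 0, -1):      # backward adjoint recursion
        lam[t] = 2 * x[t] + a * lam[t + 1]
    return 2 * u + b * lam[1:]         # dJ/du_t = 2 u_t + b * lam_{t+1}

# Plain gradient descent on the control sequence.
u = np.zeros(T)
for _ in range(2000):
    u -= 0.02 * gradient(u)
```

The point of the backward sweep is that it delivers the gradient with respect to all T controls at the cost of one extra linear solve, which is why this approach scales well to PDE-constrained problems.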

To apply the HJB approach, it appears you must first solve an unconstrained optimization problem, which relies on knowing an unknown function that satisfies the HJB equation, to obtain the minimizing control, and then solve the PDE using that control. This seems tough, so I'm curious: what is the value of even thinking about the HJB equation? Sure, theoretically it provides optimality conditions, but what situations would warrant this approach over the state/adjoint approach?
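For what it's worth, there are special cases where the "unknown function" in the HJB equation can be found in closed form, which makes the two steps described above explicit. A standard example is scalar infinite-horizon LQR (parameter values below are illustrative assumptions): a quadratic ansatz for the value function reduces the HJB PDE to an algebraic Riccati equation, and the inner minimization over u then gives a feedback law valid for every state at once.

```python
import math

# Illustrative scalar LQR problem: dynamics x' = a*x + b*u,
# running cost q*x^2 + r*u^2, infinite horizon.
a, b = 1.0, 1.0
q, r = 1.0, 1.0

# The stationary HJB equation is
#   0 = min_u [ q x^2 + r u^2 + V'(x) (a x + b u) ].
# The quadratic ansatz V(x) = p*x^2 collapses it to the algebraic
# Riccati equation (b^2/r) p^2 - 2 a p - q = 0; take the positive root.
p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)

def u_star(x):
    """Minimizer of the bracket above: a feedback law u*(x) = -(b p / r) x,
    valid for every state x, not just one trajectory."""
    return -(b / r) * p * x

def hjb_residual(x):
    """Plug V(x) = p*x^2 and u*(x) back into the HJB equation;
    the residual should vanish identically in x."""
    u = u_star(x)
    return q * x * x + r * u * u + 2 * p * x * (a * x + b * u)
```

This also hints at the usual answer to the trade-off: the adjoint approach yields an open-loop control for one initial condition, while solving HJB (when feasible) yields a feedback policy for all initial conditions.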

Thank you.