I have been reading Naidu's *Optimal Control Systems* over the summer. I am near the end of the book, on the last chapter, which covers constrained optimization. This chapter introduces Pontryagin's maximum principle (in the book it is called the minimum principle, because of a sign change in the Hamiltonian), and for as celebrated as this principle is, I don't really see why it is such a big insight.
I feel that all I read, in the end, was that the optimal control is the one that minimizes the Hamiltonian (with the optimal state and optimal costate already substituted in), i.e. the first variation of the Hamiltonian with respect to the control is non-negative AT the optimal control.
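To state concretely what I mean (using the minimum-principle sign convention from the book, with $\lambda$ for the costate and $\mathcal{U}$ for the admissible control set), the condition is:

$$
H\big(x^*(t),\, u^*(t),\, \lambda^*(t),\, t\big) \;\le\; H\big(x^*(t),\, u(t),\, \lambda^*(t),\, t\big) \qquad \text{for all admissible } u(t) \in \mathcal{U}.
$$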
How is this really any different from the many other instances in the calculus of variations where we say that a minimum of a functional occurs when its first variation vanishes?
Is it simply that we extend this to the more general case, where the first variation is required to be greater than or equal to zero rather than strictly equal to zero?
TL;DR: What makes Pontryagin's maximum principle a significant result in its own right, rather than just one more instance among the many similar results in the calculus of variations?
The maximum principle, like Bellman's principle of optimality, applies to systems with controls, not only to curves. One can say that the earlier formulations of the calculus of variations are a special case, in which the input is the derivative of the state.
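As a sketch of that reduction (the standard argument, assuming the control set is open and everything is differentiable): take the trivial dynamics $\dot{x} = u$ with cost $\int L(x, u, t)\,dt$. The (minimum-principle) Hamiltonian is

$$
H(x, u, \lambda, t) = L(x, u, t) + \lambda^\top u.
$$

The minimum condition then reduces to stationarity, $\partial H / \partial u = 0$, which gives $\lambda = -\partial L / \partial \dot{x}$ (since $u = \dot{x}$), and the costate equation $\dot{\lambda} = -\partial H / \partial x = -\partial L / \partial x$ combines with it to recover the Euler–Lagrange equation:

$$
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0.
$$

When the control set is constrained (e.g. $|u| \le 1$), the stationarity condition $\partial H / \partial u = 0$ may fail at the boundary, which is exactly where the pointwise-minimization statement of the principle does real work that the classical first-variation condition cannot.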