Pontryagin's Principle is a "necessary" condition?


I have read several times that Pontryagin's Maximum (or Minimum) Principle provides a "necessary" condition for a control, $u(t)$, to be an optimal control of a system of ODEs, $\dot x = f(t,x,u)$. I always understood this to mean that the optimal control might be one of many functions satisfying Pontryagin's conditions.

My question is: can we say anything else about controls that satisfy these conditions? Fleming and Rishel refer to these controls as "extremals," which suggests that these are local maxima/minima, but not necessarily global ones. That's definitely a stronger statement than what I said above. Am I missing something?

For a little more context, F & R show that the optimal control to the classic lunar lander problem is for the lander to first free-fall, then blast its thrusters at maximum force. They do this by first showing that such a control satisfies Pontryagin's conditions, and then showing that any other control would contradict those conditions. Based on my reading, this shows that there is exactly one candidate for optimal control, but doesn't necessarily prove that an optimum exists.

Apologies if this question is basic or redundant — I have not found another StackExchange thread that answers it explicitly.

Accepted answer:

Nice question(s)! Here are some pointers.

  1. About the term "extremals". This is merely a matter of definition. You can see Pontryagin's maximum principle (PMP) as a far-reaching generalization of the Euler–Lagrange equation, which is itself a generalization of the familiar necessary condition $f'(x^*) = 0$ when $x^* \in \mathbb{R}$ is a local extremum of $f$. In the calculus of variations and in optimal control, it is classical to call the solutions of the Euler–Lagrange equation (or of the PMP) "extremals". This is just a convenient name for them: extremals need not be minimizers or maximizers, not even locally. A nice example is Example 2.3 in Liberzon's book [1]. (It is stated in the context of the calculus of variations, but you can transfer it to optimal control by thinking of the naive system $\dot{x} = u$.)

Example 2.3. Consider minimizing $\int_0^1 x(t) (\dot{x}(t))^2 \,\mathrm{d} t$ subject to $x(0) = x(1) = 0$. The trajectory $\bar{x}(t) \equiv 0$ is an extremal (i.e., it solves the associated Euler–Lagrange equation), and in fact it is the only one. But one easily checks that it is neither a local minimum nor a local maximum of the functional.
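To spell out why (a sketch; the specific perturbation below is my own choice of test function, not taken from the book): the Lagrangian is $L(x,\dot{x}) = x\dot{x}^2$, so the Euler–Lagrange equation $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x}$ reads

$$\frac{d}{dt}\big(2x\dot{x}\big) = \dot{x}^2 \quad\Longleftrightarrow\quad \dot{x}^2 + 2x\ddot{x} = 0,$$

which $\bar{x} \equiv 0$ clearly satisfies. To see that $\bar{x}$ is not a local extremum, consider the admissible perturbation $x_\varepsilon(t) = \varepsilon \sin(\pi t)$ (which vanishes at both endpoints and is small with $\varepsilon$):

$$J[x_\varepsilon] = \int_0^1 \varepsilon \sin(\pi t)\, \varepsilon^2 \pi^2 \cos^2(\pi t)\,\mathrm{d}t = \varepsilon^3 \pi^2 \cdot \frac{2}{3\pi} = \frac{2\pi}{3}\,\varepsilon^3.$$

Since $J[\bar{x}] = 0$ and $J[x_\varepsilon]$ takes both signs for $\varepsilon$ arbitrarily small, $\bar{x}$ is neither a local minimum nor a local maximum.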

  2. About sufficiency. Yes, you are right about the second part. Even if you manage to prove that only a single control/trajectory satisfies the PMP, you are not done if you want to be completely rigorous. In fact, the example above shows exactly this: you can have exactly one candidate (an extremal, in the usual vocabulary) that is nevertheless not a minimizer or maximizer, even locally. What you need in addition is a sufficient condition for the existence of an optimal control. You will find some pointers in this question, and in Sections 4.5 and 4.6 of Liberzon's book. There is quite a large zoology of such existence results; once you know an optimal control exists and you have a unique candidate extremal, you are done.
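For concreteness, here is one classical existence result of this kind, stated informally (a Filippov-type theorem; I am paraphrasing from memory, so check Liberzon's Section 4.5 for the precise hypotheses). For $\dot{x} = f(t,x,u)$ with $u(t) \in U$, minimizing $J(u) = \int_{t_0}^{t_1} L(t,x,u)\,\mathrm{d}t$, an optimal control exists if, roughly:

$$\begin{cases} U \text{ is compact and all admissible trajectories stay in a fixed compact set,}\\ f \text{ and } L \text{ are continuous,}\\ \{(f(t,x,u),\, L(t,x,u) + \gamma) : u \in U,\ \gamma \ge 0\} \text{ is convex for each } (t,x),\\ \text{at least one admissible control exists.} \end{cases}$$

Combined with a uniqueness-of-extremal argument like the one F & R give for the lunar lander, such an existence theorem is exactly what closes the logical gap you identified.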

[1] Liberzon, Daniel. Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press, 2011.