I am reading the articles:
- Optimal control for systems described by difference equations, Hubert Halkin, in Advances in Control Systems, Vol. 1, Academic Press, New York–London, 1964, pp. 173–196, MR183564.
and the follow up work:
- Directional convexity and the maximum principle for discrete systems, J. M. Holtzman and H. Halkin, SIAM Journal on Control, Vol. 4, No. 2, 1966, pp. 263–275, MR199008, Zbl 0152.09302.
Both articles assume that the function to be maximized has a particularly simple form:
the $j$-th coordinate $x_j$ of the state vector $x(k)$ (where $k$ is the discrete time).
Bertsekas' Dynamic Programming and Optimal Control, Vol. I, MR3644954, Zbl 1375.90299, proves a version for a more general cost function (Section 3.3.3: Minimum Principle for Discrete-Time Problems, p. 129) of the form $$ J(U) = g_n(x_n) + \sum_{k = 0}^{n-1} g_k(x_k,u_k). $$ This result requires each control set $U_k$ to be convex.
The second article shows that convexity is a stronger requirement than necessary: the weaker notion of "directional convexity" suffices and allows for broader applicability.
I would like to study the general discrete Pontryagin problem with a cost function of the form $$ J(U) = \sum_{k = 0}^{n-1} L(k,x(k),u(k)) + K(n, x(n)), \label{1}\tag{$\ast\ast$} $$ where $L$ is the Lagrangian (as in Liberzon's book Calculus of Variations and Optimal Control, MR2895149, Zbl 1239.49001) and $K$ is the terminal cost.
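To make precise the kind of statement I am after, here is my own sketch, adapting Bertsekas' conventions to the notation of \eqref{1} (this is not quoted from any of the references above). With dynamics $x(k+1) = f(k, x(k), u(k))$ and the cost \eqref{1} to be minimized, define the Hamiltonian $$ H(k, x, u, p) = L(k, x, u) + p^{\top} f(k, x, u), $$ with the adjoint (costate) recursion along an optimal pair $(x^*, u^*)$ given by $$ p(k) = \nabla_x H\bigl(k, x^*(k), u^*(k), p(k+1)\bigr), \qquad p(n) = \nabla_x K\bigl(n, x^*(n)\bigr). $$ When each $U_k$ is convex, the Bertsekas-style result yields the variational inequality $$ \nabla_u H\bigl(k, x^*(k), u^*(k), p(k+1)\bigr)^{\top} \bigl(u - u^*(k)\bigr) \ge 0 \quad \text{for all } u \in U_k, $$ whereas I would hope that weaker (directional) convexity hypotheses deliver the full pointwise minimum condition $u^*(k) \in \arg\min_{u \in U_k} H\bigl(k, x^*(k), u, p(k+1)\bigr)$.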
Could you share references containing proofs of a discrete Pontryagin principle under weakened convexity conditions (in the spirit of directional convexity) that also cover general cost functions as in \eqref{1} above?