I know almost nothing about this field, but I often see optimal control problems defined by an optimization problem like the following:
\begin{align} \min_{u}\quad &\mathcal{J}(x,u,t_0,t_f)\\ \text{s.t.}\quad &\dot{x}(t)=f(x(t),u(t),t) \end{align}
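For concreteness, here is a minimal numerical instance of the ODE-constrained form above, under assumptions of my own choosing: scalar dynamics $\dot{x}=u$, $x(0)=1$, and running cost $\int_0^1 (x^2+u^2)\,dt$, solved by direct transcription (explicit Euler plus a generic optimizer). This is only a sketch, not a general method:

```python
import numpy as np
from scipy.optimize import minimize

# Discretize the horizon [0, T] into N Euler steps.
N, T = 50, 1.0
dt = T / N

def cost(u):
    """Total cost J for a control sequence u, with the dynamics
    x' = u enforced by forward Euler integration."""
    x = 1.0   # initial condition x(0) = 1
    J = 0.0
    for uk in u:
        J += (x**2 + uk**2) * dt   # accumulate running cost
        x += uk * dt               # Euler step of x' = u
    return J

# Optimize over the discretized control sequence.
res = minimize(cost, np.zeros(N))
```

For this linear-quadratic example the optimal cost is known analytically to be $\tanh(1)\,x(0)^2 \approx 0.762$, so the discretized result can be sanity-checked against it.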
where $x$ represents the state, $u$ the control, and $\mathcal{J}$ is a cost functional that we would like to minimize to attain our goal. However, it seems that the equations governing the dynamics could also be a PDE instead of an ODE, and there could be algebraic constraints as well. I was wondering whether there is a nice and concise way to write what it means to have an optimal control problem in full generality.