I have a question about optimal control theory, specifically about an optimal control problem whose dynamic equations/constraints are not in the usual form $\dot{x} = f\left( x, u \right)$. Instead, the dynamics are of the form $\dot{x} = f\left( x, u, \dot{u} \right)$.
The underlying problem is an aircraft trajectory optimization problem (assuming two-dimensional, coordinated flight: no sideslip angle or yaw), where the dynamics are given by state equations for $V, \gamma, \chi, h, x, y, m$, the aircraft velocity, flight path angle, heading angle, altitude, x position, y position, and mass, respectively. The control variables are chosen as $\tau, \mu, \alpha$, the engine throttle setting, aircraft bank angle, and angle of attack, respectively.
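For reference, these state equations are (schematically) of the standard point-mass form; I leave the thrust model $T(\tau)$, lift $L$, drag $D$, and the fuel-flow model unspecified, and the exact model may differ in such details:

$$
\begin{aligned}
\dot{V} &= \frac{T(\tau)\cos\alpha - D}{m} - g\sin\gamma, \\
\dot{\gamma} &= \frac{\left(L + T(\tau)\sin\alpha\right)\cos\mu - mg\cos\gamma}{mV}, \\
\dot{\chi} &= \frac{\left(L + T(\tau)\sin\alpha\right)\sin\mu}{mV\cos\gamma}, \\
\dot{h} &= V\sin\gamma, \qquad \dot{x} = V\cos\gamma\cos\chi, \qquad \dot{y} = V\cos\gamma\sin\chi, \\
\dot{m} &= -\dot{m}_{\text{fuel}}\left(\tau, h, V\right).
\end{aligned}
$$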
Now, the dynamic equation for $V$ gives the velocity time derivative as a function of, among others, the lift coefficient $C_L$. This lift coefficient, in turn, depends on the angular rates (roll and pitch in 2D) of the aircraft: $p(\dot{\mu})$ and $q(\dot{\gamma}, \dot{\alpha})$.
Summarizing (with some abuse of notation, $f$ denoting a different function each time): $C_L = f \left( p(\dot{\mu}), q(\dot{\gamma}, \dot{\alpha}) \right)$ and, in turn, $\dot{V} = f \left( C_L \right)$, so that, with $\dot{\gamma}$ itself given by a state equation, $\dot{V} = f\left(\dot{\alpha}, \dot{\mu} \right)$. This gives rise to a state equation of the form $\dot{x} = f\left( x, u, \dot{u} \right)$.
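To spell out where these rates come from (approximately, under my modeling assumptions): with pitch angle $\theta = \gamma + \alpha$, the kinematic relations are

$$
p \approx \dot{\mu}, \qquad q = \dot{\theta} = \dot{\gamma} + \dot{\alpha},
$$

so any unsteady aerodynamic term in $C_L$ involving $p$ or $q$ pulls the control rates $\dot{\mu}$ and $\dot{\alpha}$ into the right-hand side of the state equations.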
How can this problem be transformed so that the dynamic constraint takes the "normal" optimal control form $\dot{x} = f\left( x, u \right)$?
Thanks in advance.