I have been reading a bit about iterative LQR (iLQR) and I don't really understand what's going on.
Conceptually I understand the idea of linearizing the system around each of its discretization points and expanding the cost function quadratically around the same point.
What I don't understand is the following:
Suppose you have your initial control sequence $u_1, u_2, \dots, u_t, \dots, u_n$. By solving for a new control sequence where $\hat u_i = u_i + \delta_i$, you change the trajectory of the system when you forward propagate it. This changed trajectory (as far as I know) isn't taken into account by the LQR solutions further along the horizon. So by the time your system has forward propagated through multiple time steps using the new control input, your state $\hat x_t$ may be substantially different from the state $x_t$ you used to compute $\hat u_t$.
Am I understanding this correctly or is something else going on?
After some additional reading I think I understand. What I was missing is that the backward pass isn't an independent LQR solve at each point: it propagates value-function information backward through the linearized dynamics and quadraticized cost over the whole horizon, and it returns a time-varying feedback law $\hat u_t = u_t + k_t + K_t(\hat x_t - x_t)$, not just open-loop corrections. The feedback term $K_t(\hat x_t - x_t)$ is exactly what accounts for the forward-propagated state drifting away from the trajectory the linearization was built around (and a line search on $k_t$ keeps the new trajectory close enough for the linearization to remain valid).
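To make this concrete, here is a minimal iLQR sketch on a toy damped-pendulum model (the model, gains, and all function names like `backward_pass` are my own choices, not from any particular reference). The part that answers the question above is in `rollout`: when feedback gains are supplied, each new control is $u_t + \alpha k_t + K_t(\hat x_t - x_t)$, so the control reacts to the actual forward-propagated state, and the line search over $\alpha$ only accepts steps that reduce the true rollout cost.

```python
import numpy as np

# Toy damped pendulum (hypothetical example system): x = [theta, theta_dot], u = torque.
dt = 0.05

def f(x, u):
    th, thd = x
    thdd = np.sin(th) - 0.1 * thd + u[0]
    return np.array([th + dt * thd, thd + dt * thdd])

def f_jac(x, u):
    """Jacobians A = df/dx, B = df/du of the discretized dynamics."""
    th, thd = x
    A = np.array([[1.0, dt],
                  [dt * np.cos(th), 1.0 - dt * 0.1]])
    B = np.array([[0.0], [dt]])
    return A, B

Q = np.diag([1.0, 0.1])          # state cost weight
R = np.array([[0.01]])           # control cost weight
x_goal = np.array([np.pi, 0.0])  # drive the pendulum to theta = pi

def total_cost(xs, us):
    c = sum(0.5 * (x - x_goal) @ Q @ (x - x_goal) + 0.5 * u @ R @ u
            for x, u in zip(xs[:-1], us))
    dxN = xs[-1] - x_goal
    return c + 0.5 * dxN @ Q @ dxN

def rollout(x0, us, Ks=None, ks=None, xs_ref=None):
    """Forward propagate. With gains: u_t + k_t + K_t (x_t - x_ref_t),
    so the correction adapts to the state the system ACTUALLY reaches."""
    xs, new_us = [x0], []
    for t, u in enumerate(us):
        ut = u if Ks is None else u + ks[t] + Ks[t] @ (xs[-1] - xs_ref[t])
        new_us.append(ut)
        xs.append(f(xs[-1], ut))
    return np.array(xs), np.array(new_us)

def backward_pass(xs, us):
    """Riccati-style sweep: propagate the value function backward through
    the linearized dynamics, returning feedforward ks and feedback Ks."""
    n = len(us)
    Vx, Vxx = Q @ (xs[-1] - x_goal), Q.copy()
    Ks, ks = [None] * n, [None] * n
    for t in reversed(range(n)):
        A, B = f_jac(xs[t], us[t])
        Qx = Q @ (xs[t] - x_goal) + A.T @ Vx
        Qu = R @ us[t] + B.T @ Vx
        Qxx = Q + A.T @ Vxx @ A
        Quu = R + B.T @ Vxx @ B
        Qux = B.T @ Vxx @ A
        ks[t] = -np.linalg.solve(Quu, Qu)
        Ks[t] = -np.linalg.solve(Quu, Qux)
        Vx = Qx + Ks[t].T @ Quu @ ks[t] + Ks[t].T @ Qu + Qux.T @ ks[t]
        Vxx = Qxx + Ks[t].T @ Quu @ Ks[t] + Ks[t].T @ Qux + Qux.T @ Ks[t]
    return Ks, ks

x0, N = np.array([0.0, 0.0]), 50
us = np.zeros((N, 1))
xs, _ = rollout(x0, us)
for _ in range(15):
    Ks, ks = backward_pass(xs, us)
    # Line search on the feedforward step: only accept if the TRUE
    # (nonlinear) rollout cost decreases, keeping the new trajectory
    # within the region where the linearization is trustworthy.
    for alpha in (1.0, 0.5, 0.25, 0.1, 0.01):
        xs_new, us_new = rollout(x0, us, Ks, [alpha * k for k in ks], xs)
        if total_cost(xs_new, us_new) < total_cost(xs, us):
            xs, us = xs_new, us_new
            break
```

Note that the open-loop sequence $u_t + k_t$ alone would suffer exactly the drift described in the question; it is the $K_t$ feedback (plus the line search) that makes the scheme behave well away from the nominal trajectory.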