Hi, I'm new to the area of optimal control. Recently, while doing some research, I came across the following situation.
Consider a discrete-time system $x(k+1)=Ax(k)+Bu(k)$. Starting from $x(0)=x_0$, optimally stabilizing this system to the origin is a very standard problem (e.g., infinite-horizon LQR). In my research, however, I found a trajectory $\tilde{x}(k+1)=A\tilde{x}(k)+B\tilde{u}(k),~k=0,1,\ldots$ with the following property: starting from $\tilde{x}(0)=x_0$, the trajectory $\tilde{x}(k)$ also converges to the origin, and, for a given positive integer $N$ and every time instant $k$, the segment $\tilde{x}(k),\tilde{x}(k+1),\ldots,\tilde{x}(k+N)$ is the optimal trajectory from $\tilde{x}(k)$ to $\tilde{x}(k+N)$. In words: every piece of length $N$ of the trajectory $\tilde{x}(k),~k=0,1,\ldots$ is optimal, even though the trajectory as a whole may not be optimal over the infinite horizon.
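To make the property concrete, here is a minimal numerical sketch of what I mean by "every length-$N$ window is optimal between its own endpoints." The cost is left unspecified in my setting; purely for illustration I assume a minimum-energy cost $\sum_j \|u(j)\|^2$ and a double-integrator system (both are my own choices, not part of the question). For this cost, the principle of optimality guarantees that every subarc of a finite-horizon minimum-energy transfer is itself the minimum-energy transfer between its own endpoints, so the windowed check below passes on a finite segment; the infinite-horizon trajectory is the part I'm asking about.

```python
import numpy as np

def ctrb_horizon(A, B, N):
    """N-step controllability matrix [A^{N-1}B, ..., AB, B]."""
    return np.hstack([np.linalg.matrix_power(A, N - 1 - j) @ B
                      for j in range(N)])

def min_energy_inputs(A, B, x0, xf, N):
    """Stacked inputs [u(0); ...; u(N-1)] minimizing sum ||u(j)||^2
    subject to reaching x(N) = xf; pinv gives the minimum-norm solution."""
    C = ctrb_horizon(A, B, N)
    d = xf - np.linalg.matrix_power(A, N) @ x0
    return np.linalg.pinv(C) @ d

def simulate(A, B, x0, U):
    """Roll the dynamics x(k+1) = A x(k) + B u(k) forward under U."""
    m = B.shape[1]
    xs = [x0]
    for j in range(len(U) // m):
        xs.append(A @ xs[-1] + B @ U[j * m:(j + 1) * m])
    return xs

# Double integrator: an arbitrary controllable example (my assumption).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])

M, N = 6, 3  # overall horizon and window length
U = min_energy_inputs(A, B, x0, np.zeros(2), M)
xs = simulate(A, B, x0, U)

# Check: every length-N window of inputs coincides with the
# minimum-energy transfer between that window's own endpoints.
windows_optimal = all(
    np.allclose(U[k:k + N],
                min_energy_inputs(A, B, xs[k], xs[k + N], N))
    for k in range(M - N + 1)
)
print(windows_optimal)
```

(All function names here are mine, invented for this sketch.) The check passes for subarcs of a single optimal transfer; my question concerns an infinite trajectory whose overlapping windows are each optimal even though the whole need not be.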
My question: is there an existing concept or definition in optimal control theory that describes such a trajectory?