How can I prove that these two cost functions are the same?


Let us start by assuming that a trajectory $x(k, x_0)$ exists, where $k$ is an instant/timestep and $x_0$ is the initial point of said trajectory. This trajectory is defined by some mathematical model of the form

$$ x(k+1) = f(x(k), u) $$

where $u$ is the input of that model.
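To make the setup concrete, here is a minimal rollout sketch of $x(k+1) = f(x(k), u)$. The scalar system $f(x, u) = 0.9x + u$ is purely illustrative, not part of the question:

```python
# Assumed illustrative dynamics: f(x, u) = 0.9*x + u (not from the question).
def f(x, u):
    return 0.9 * x + u

def rollout(x0, us):
    """Trajectory x(k, x0) for k = 0..len(us), driven by the input sequence us."""
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))
    return xs

# With zero input, the state just decays toward the origin.
xs = rollout(1.0, [0.0, 0.0, 0.0])
```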

There also exists $x_\infty(k, x_0)$, which is the same trajectory but extended to infinity ($x$ itself is a finite trajectory).

We now want to find the control input that yields the best possible trajectory, i.e., we want to minimize some cost function that has $u(k)$ as its free variables.

For that we have the following cost function:

$$ J = \sum_{k=0}^{N-1} \ell (x(k, x_0)) + \min \Bigg\{\sum_{k=0}^\infty \ell (x_\infty (k, x(N, x_0))) \Bigg\}$$

where $ \ell $ is a function that returns the distance to some objective.

The system dynamics appear as constraints of the optimization problem, but that is not relevant here.

Let us also assume that $x_\infty$ is always a known trajectory regardless of its starting point.

Now, keeping in mind that the $\min$ on the right-hand side always returns the lowest cost of the infinite trajectory, i.e., the cost of the optimal tail trajectory, and that we use $u(k),\ k = 0, \dots, N-1$, to control the first term (the finite trajectory), I want to prove that the cost function $J$ is the same as

$$ J_1 = \sum_{k=0}^{\infty} \ell (x(k, x_0))$$

Put into words: I want to show that minimizing the cost of a finite trajectory plus the optimal cost of the infinite trajectory starting at the end of the finite one is the same as minimizing the infinite trajectory starting at $x_0$.
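This is the principle of optimality, and it can be checked numerically on a small example. The sketch below assumes a toy setup that is not in the question: states $0,\dots,5$, a goal state $5$, stage cost $\ell(x) = |x - \text{goal}|$, and inputs $u \in \{-1, 0, +1\}$. It computes the optimal infinite-horizon cost $V(x)$ by value iteration, then verifies by brute force that minimizing a finite horizon of length $N$ plus the optimal tail cost $V(x(N))$ gives the same value as $V(x_0)$:

```python
# Toy example (all numbers assumed for illustration): states 0..5, goal = 5,
# stage cost l(x) = |x - goal|, dynamics x(k+1) = clip(x + u), u in {-1, 0, +1}.
from itertools import product

GOAL = 5
STATES = range(6)
INPUTS = (-1, 0, 1)

def l(x):
    return abs(x - GOAL)

def f(x, u):
    return min(max(x + u, 0), 5)

# Value iteration for V(x) = min over input sequences of sum_{k>=0} l(x(k)).
V = {x: 0.0 for x in STATES}
for _ in range(100):
    V = {x: l(x) + min(V[f(x, u)] for u in INPUTS) for x in STATES}

# Minimize the finite-horizon cost over N steps plus the optimal tail V(x(N)).
N, x0 = 3, 0
best = float("inf")
for us in product(INPUTS, repeat=N):
    x, cost = x0, 0.0
    for u in us:
        cost += l(x)
        x = f(x, u)
    best = min(best, cost + V[x])

print(best, V[x0])  # the two minima coincide
```

The brute-force search plays the role of the free variables $u(k),\ k = 0, \dots, N-1$, in the first term of $J$, and the value function plays the role of the $\min$ over the infinite tail.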

If you need any extra clarification, let me know.

I'm having trouble doing this since my math skills are not quite up to the task.