How to solve an optimal control problem where one variable is "reset" at time $T$?


Standard problem.

Say we have an optimal control problem with the following state variables $$ \begin {align} &\min_{u_t} \int_0^\infty C(t,x_t)\,dt, \text { subject to } \\ &\dot x_t=f(x_t,y_t,u_t)\\ &\dot y_t = g(u_t)\\ &0\leq u_t\leq F(t)\\ &x_0=a, y_0=b \text { as given.} \end{align} $$

Here $f(\cdot), g(\cdot), F(\cdot), C(\cdot)$ are some given functions.

This is just a standard optimal control problem.
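For reference, here is a sketch of the standard necessary conditions (assuming $f$, $g$, $C$ are smooth and a minimizer exists; $\lambda_x, \lambda_y$ denote the costates): $$ \begin{align} &H(t,x,y,u,\lambda_x,\lambda_y) = C(t,x) + \lambda_x f(x,y,u) + \lambda_y g(u),\\ &\dot\lambda_x = -\frac{\partial H}{\partial x} = -\frac{\partial C}{\partial x} - \lambda_x \frac{\partial f}{\partial x},\\ &\dot\lambda_y = -\frac{\partial H}{\partial y} = -\lambda_x \frac{\partial f}{\partial y},\\ &u_t \in \arg\min_{0\leq u\leq F(t)} H(t,x_t,y_t,u,\lambda_{x,t},\lambda_{y,t}). \end{align} $$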

Adjusted problem.

However! Now add an additional constraint: at some given time $T$, $x_T$ is reset to the constant $a$, without requiring that $x_t$ be continuous at $T$. In other words, we discontinuously force $x_T$ to equal a certain constant, without constraining the solution in any other way (i.e. at $T-\epsilon$ for infinitesimal $\epsilon$, $x$ may have a value far greater or smaller than $a$, so $x_t$ will likely be discontinuous at $T$). We do not do the same for $y_t$.

The difficulty is that since we do not put the same constraint on $y$ ($y$ remains continuous at time $T$ and is not "reset" as $x$ is), we cannot simply split the problem into two parts: the ODE of $y$ creates a dependency between them. That is, the way we optimize the first part changes the value of $y_T$, and thereby influences the second part.

Then how do we solve the minimization problem with this added assumption?

I am not sure how to approach this problem.

Answer.

OK, this is not an answer, but rather a hint at a possible direction.

Consider two problems: $$ \begin{align} &J_1=\min_{u_t} \int_0^T C(t,x_t)\,dt, &&J_2=\min_{u_t} \int_T^\infty C(t,x_t)\,dt, \\ &\dots&&\dots\\ &x(0)=a,\ y(0)=b,\ x(T)\text{ free},\ y(T)=\upsilon, &&x(T)=a,\ y(T)=\upsilon,\ x(\infty),\ y(\infty)\text{ free}. \end{align} $$ These problems are parametrized by $\upsilon$. If you manage to solve them for every (meaningful) value of $\upsilon$ and obtain $J_1(\upsilon)$ and $J_2(\upsilon)$, it remains to find $\upsilon^*=\arg\min_\upsilon\big(J_1(\upsilon)+J_2(\upsilon)\big)$, and the rest follows.
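The outer minimization over $\upsilon$ can be sketched numerically. In the sketch below, `solve_stage1` and `solve_stage2` are hypothetical stand-ins (simple quadratics chosen only for illustration) for the actual stage solvers, each of which would in practice solve its fixed-boundary optimal control problem by, e.g., shooting or direct collocation; `best_interface_value` then does a crude grid search for $\upsilon^*$:

```python
def solve_stage1(v):
    # Hypothetical stand-in for J1(v): optimal cost of the first stage
    # when we require y(T) = v (here a toy quadratic, not a real solver).
    return (v - 1.0) ** 2 + 0.5

def solve_stage2(v):
    # Hypothetical stand-in for J2(v): optimal cost-to-go of the tail
    # problem started from x(T) = a, y(T) = v (again a toy quadratic).
    return 2.0 * (v + 0.5) ** 2

def best_interface_value(lo, hi, n=2001):
    """Grid search for v* = argmin_v J1(v) + J2(v) over [lo, hi]."""
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(grid, key=lambda v: solve_stage1(v) + solve_stage2(v))

v_star = best_interface_value(-3.0, 3.0)
total_cost = solve_stage1(v_star) + solve_stage2(v_star)
```

A grid search is deliberately naive; if $J_1 + J_2$ is smooth in $\upsilon$, a one-dimensional bounded minimizer would converge much faster.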

Well, in most cases this is not possible. However, it may happen that you can solve one of the two problems. Suppose you can solve the second one and get $J_2(\upsilon)$. Then you can reformulate the first one as follows: $$ \begin{align} &J_1=\min_{u_t} \int_0^T C(t,x_t)\,dt + J_2(y(T)),\\ &\dots\\ &x(0)=a,\ y(0)=b,\ x(T),\ y(T)\text{ free}. \end{align} $$ That is to say, the second optimization problem enters the first as a terminal cost.
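If $J_2(\cdot)$ happens to be differentiable, the terminal cost $J_2(y(T))$ enters the first-stage maximum principle through the standard transversality conditions for a terminal (Bolza-type) cost, with $\lambda_x, \lambda_y$ the first-stage costates: $$ \lambda_x(T) = 0, \qquad \lambda_y(T) = J_2'\big(y(T)\big), $$ since $x(T)$ is free (zero terminal sensitivity), while the sensitivity of the total cost to $y(T)$ is exactly the marginal cost-to-go $J_2'$.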

If all this does not work, you may google for "optimal control with state jumps". However, most results in this field are theoretical rather than practical, and I'm not sure you will find something really useful...