Can optimal control be used to obtain a desired result?


I am pretty new to the theory and application of optimal control, and I am curious about something that is not covered in the textbook I use: is it possible to choose $u(t)$ so as to force the system toward a desired result?

  1. For example, if I have a system $x'(t) = g(t,x,u)$ and I want to vary $u(t)$ to force $x(t)$ to move along a path $\bar{x}(t)$ over $t\in[t_0,t_1]$, how should I go about doing so?

  2. Furthermore, if the problem is discrete, with target values $\bar{x}(t_n)$ at time points $t_n$, how should I go about doing so?

Any references, especially with application to biology, would be appreciated!


For question 1, consider an objective functional like $$ \int_{t_0}^{t_1}(x(t)-\bar{x}(t))^2 \, dt, $$ so that you are minimizing the squared deviation from the preferred path, subject to $x'(t) = g(t,x,u)$ and, presumably, a given initial condition $x(t_0)=x_0$. The Hamiltonian for this minimization problem is then $$ H(x,u,\lambda,t) = (x(t)-\bar{x}(t))^2 + \lambda(t)\, g(t,x,u). $$ At this level of generality, Pontryagin's necessary conditions are not sufficient: you would have to pick a concrete law of motion for the state variable, or assumptions strong enough that you could check the second-order conditions for the optimal control problem (Arrow or Mangasarian concavity conditions on the Hamiltonian).
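A minimal numerical sketch of this tracking problem, using direct single shooting: discretize $u$ on a grid, simulate the state forward, and minimize the discretized squared-deviation cost. The concrete choices here are illustrative assumptions, not part of the question: dynamics $g(t,x,u)=u$ (a single integrator), target $\bar{x}(t)=\sin t$, and SciPy's L-BFGS-B as the optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative setup (assumed): x' = u, target xbar(t) = sin(t) on [0, 2*pi].
t0, t1, N = 0.0, 2 * np.pi, 50
t = np.linspace(t0, t1, N)
dt = t[1] - t[0]
xbar = np.sin(t)

def simulate(u, x0=0.0):
    """Euler-integrate x' = u from x0 over the time grid."""
    x = np.empty(N)
    x[0] = x0
    for k in range(N - 1):
        x[k + 1] = x[k] + dt * u[k]
    return x

def objective(u):
    """Discretized tracking cost: sum of squared deviations times dt."""
    x = simulate(u)
    return np.sum((x - xbar) ** 2) * dt

res = minimize(objective, np.zeros(N), method="L-BFGS-B")
x_opt = simulate(res.x)
print(np.max(np.abs(x_opt - xbar)))  # maximum tracking error
```

For this simple integrator the optimizer can drive the tracking error essentially to zero; for a nonlinear $g$ the same shooting structure applies, but the resulting optimization is generally nonconvex.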

  2. In discrete time, you would use dynamic programming instead of continuous-time optimal control. You write a Bellman equation, $$ V_t(x_t) = \min_{u_t} \left[ (x_t-\bar{x}_t)^2 + V_{t+1}(x_{t+1}) \right] $$ subject to $x_{t+1} = g(x_t, u_t, t)$. You start at the terminal time $t_1$ and work backward: at each stage, use the values $V_{t+1}(x_{t+1})$ you have already computed and the minimization of the $t$-period problem to determine $V_t(x_t)$, until you get back to $t_0$. There are dozens if not hundreds of introductory books on this topic.
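The backward recursion above can be sketched as tabular dynamic programming on a grid. Everything concrete here is an assumption for illustration: dynamics $x_{t+1} = x_t + u_t$ on an integer state grid, controls $u_t \in \{-1,0,1\}$, and a short target path $\bar{x}_t$ chosen so exact tracking is feasible.

```python
import numpy as np

# Assumed toy problem: x_{t+1} = x_t + u_t on an integer grid.
states = np.arange(-5, 6)                 # admissible x values
controls = np.array([-1, 0, 1])           # admissible u values
T = 6
xbar = np.array([0, 1, 2, 2, 1, 0, 0])    # target x_t for t = 0..T

V = np.full((T + 1, len(states)), np.inf)
policy = np.zeros((T, len(states)), dtype=int)
V[T] = (states - xbar[T]) ** 2            # terminal cost

# Backward induction: compute V_t from the already-known V_{t+1}.
for tt in range(T - 1, -1, -1):
    for i, x in enumerate(states):
        for j, u in enumerate(controls):
            xn = x + u
            if xn < states[0] or xn > states[-1]:
                continue                  # next state off the grid
            cost = (x - xbar[tt]) ** 2 + V[tt + 1, xn - states[0]]
            if cost < V[tt, i]:
                V[tt, i] = cost
                policy[tt, i] = j

# Roll the optimal policy forward from x_0 = 0.
x = 0
path = [x]
for tt in range(T):
    x = x + controls[policy[tt, x - states[0]]]
    path.append(int(x))
print(path)  # the cost-minimizing state trajectory
```

Because the target's step-to-step changes all lie in the control set, the minimal cost here is zero and the rolled-out path reproduces $\bar{x}_t$ exactly; with coarser controls the recursion returns the best achievable approximation instead.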

You might need to introduce a cost on $u(t)$ to ensure that the solution isn't trivial, like "set $x(t)=\bar{x}(t)$ exactly". For example, minimize $$ \int_{t_0}^{t_1}\left[(x(t)-\bar{x}(t))^2 + c\, u(t)^2\right] dt $$ for some weight $c>0$, which trades tracking accuracy off against control effort.
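A small sketch of the effect of such a control penalty, again under assumed illustrative choices (dynamics $x' = u$, target $\bar{x}(t) = \sin t$, weight written as `rho`): as the penalty weight grows, the optimal control uses less energy and tracks less aggressively.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy setup: x' = u tracking xbar(t) = sin(t) on [0, 2*pi].
t = np.linspace(0.0, 2 * np.pi, 40)
dt = t[1] - t[0]
xbar = np.sin(t)

def cost(u, rho):
    """Tracking cost plus rho-weighted control-effort penalty."""
    # Euler rollout of x' = u from x(0) = 0.
    x = np.concatenate(([0.0], np.cumsum(u[:-1]) * dt))
    return np.sum((x - xbar) ** 2) * dt + rho * np.sum(u ** 2) * dt

def solve(rho):
    return minimize(lambda u: cost(u, rho), np.zeros(len(t)),
                    method="L-BFGS-B").x

u_small = solve(0.01)   # light effort penalty: aggressive tracking
u_large = solve(1.0)    # heavy effort penalty: gentler control
print(np.sum(u_small ** 2), np.sum(u_large ** 2))
```

The comparison illustrates the trade-off: the regularization term is what rules out the trivial "match $\bar{x}$ exactly at any cost" solution once $u$ is expensive.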