I've been looking into control theory recently, but I've been struggling to find ways to solve a particular kind of question. It seems to be a discrete-time optimal control problem with constraints, and either I'm looking in the wrong places or there isn't much written on it. I've put together this toy example to illustrate. Say the dynamics are governed by the recursion
$$y(t+1)=y(t)\cdot e^{-x(t)}$$ $$y(0)=y_0$$
and you can control the value of $x$ at each timestep. For some horizon $T \geq 1$, is it possible to find $[x_0, \ldots, x_{T-1}]$ that minimizes $y(T)$ subject to $\sum_{i=0}^{T-1} x_i = c$ for some constant $c$? I put this example together quickly, so there may be a trivial solution here, but in general: are there methods (outside of, say, reinforcement learning) for solving these kinds of problems? If so, what are they?
In the case of your example system it does not matter how you choose the $x(t)$, as long as they satisfy the constraint. Namely, by recursively substituting the discrete dynamics one gets
\begin{align} y(T) &= y_0 \cdot e^{-x_0} \cdots e^{-x_{T-1}},\\ &= y_0 \cdot e^{-(x_0 + \cdots + x_{T-1})}, \\ &= y_0 \cdot e^{-c}. \end{align}
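This invariance is easy to check numerically. A minimal sketch (the function and variable names here are my own, not from any library):

```python
import math
import random

def simulate(y0, xs):
    """Roll out y(t+1) = y(t) * exp(-x(t)) and return y(T)."""
    y = y0
    for x in xs:
        y *= math.exp(-x)
    return y

y0, c, T = 2.0, 1.5, 5
random.seed(0)
for _ in range(3):
    # random nonnegative split of the budget c over T steps
    w = [random.random() for _ in range(T)]
    xs = [c * wi / sum(w) for wi in w]
    print(simulate(y0, xs))  # every split gives y0 * exp(-c) ~ 0.446260
```

Every random split of the budget prints the same value $y_0 e^{-c}$, up to floating-point roundoff.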
In the general case one can, as RobPratt also suggested, use dynamic programming, but this does require you to discretize the values each $x(t)$ can take. Such discretization means one usually obtains only an approximation of the optimal solution, not the exact one.
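To illustrate the dynamic programming route, here is a sketch for a variant where the split does matter: take $y(t+1) = y(t)\cdot e^{-x(t)^2}$, so $y(T) = y_0\, e^{-\sum_t x_t^2}$ and the allocation of the budget affects the objective. Because the dynamics are multiplicative and the only coupling between stages is the budget, the DP state can be just the discretized remaining budget (this formulation and all names are my own):

```python
import math

def dp_allocate(T, c, n=50):
    """Minimize prod_t exp(-x_t**2) over x_t >= 0 with sum x_t = c,
    by backward DP over the discretized remaining budget b = k * c / n."""
    step = c / n
    INF = float("inf")
    V = [INF] * (n + 1)
    V[0] = 1.0                      # at the horizon the whole budget must be spent
    best = [[0] * (n + 1) for _ in range(T)]
    for t in reversed(range(T)):
        W = [INF] * (n + 1)
        for k in range(n + 1):      # k grid points of budget left at stage t
            for j in range(k + 1):  # spend j * step at this stage
                x = j * step
                cand = math.exp(-x * x) * V[k - j]
                if cand < W[k]:
                    W[k], best[t][k] = cand, j
        V = W
    xs, k = [], n                   # forward pass: recover the controls
    for t in range(T):
        j = best[t][k]
        xs.append(j * step)
        k -= j
    return xs, V[n]
```

For this variant the optimum concentrates the whole budget in a single step (that maximizes $\sum_t x_t^2$ under a fixed sum), so the DP returns a value close to $e^{-c^2}$. Refining the grid `n` tightens the answer; the gap at coarse grids is exactly the discretization error mentioned above.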
One can also use more general optimization techniques, handling the constraints with Lagrange multipliers. The same approach can be used to derive the discrete-time equivalent of Pontryagin's maximum principle; see slide 4 of these lecture slides.
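To sketch how that works for your example (my notation, following the standard discrete-time Lagrangian setup): adjoin the dynamics with costates $p(t+1)$ and the budget constraint with a scalar multiplier $\mu$,

$$\mathcal{L} = y(T) + \sum_{t=0}^{T-1} p(t+1)\Big(y(t)\,e^{-x(t)} - y(t+1)\Big) + \mu\left(\sum_{t=0}^{T-1} x(t) - c\right).$$

Setting the partial derivatives with respect to $y(t)$ and $x(t)$ to zero gives the costate recursion and the stationarity condition

$$p(t) = p(t+1)\,e^{-x(t)}, \quad p(T) = 1, \qquad \mu = p(t+1)\,y(t)\,e^{-x(t)} = p(t+1)\,y(t+1).$$

Combining the two recursions shows $p(t+1)\,y(t+1) = y(T)$ for every $t$, so the stationarity conditions are satisfied by every feasible sequence with $\mu = y(T)$, which recovers the earlier observation that the objective is insensitive to how the budget is split.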