I want to solve the following problem.
\begin{align*} &{\displaystyle \max_{{x_1},\ldots,{x_T}}}\ \ \sum_{t=1}^T f({x_t})\\ &\text{s.t.} \ \ {y_t}=h({x_{t-1}},{y_{t-1}}),\ \ t=1,2,\ldots,T, \\ &\ \ \ \ \ \ \ \ \ {x_{t}} \in X({y_t}),\ \ t=1,2,\ldots,T, \end{align*} where $x_t \in \mathbb{R}^n$ is a continuous decision vector, $y_t \in \mathbb{R}^m$ is a continuous state vector, $T$ is a finite horizon, and $f$ is strongly concave. The problem is deterministic: it contains no uncertainty.
Is there any way to solve it via the Bellman equation? Since the state variables are continuous vectors, it seems to me that backward induction cannot be used.
--
Since the decision variables have total dimension $T \times n$, the problem becomes large-scale when $T$ is large.
Solving it directly with, for example, an interior-point method is then time-consuming.
Since the objective function $f$ is the same for all periods $t=1,2,\ldots,T$, we would like to exploit this structure.
I don't see why you can't apply backward induction.
For $t = T$, solve $x_T^*(y_T) = \arg\max_{x_T \in X(y_T)} f(x_T)$; then for $t = T-1$, solve $x_{T-1}^*(y_{T-1}) = \arg\max_{x_{T-1} \in X(y_{T-1})}\left[ f(x_{T-1}) + f(x_T^*(h(x_{T-1}, y_{T-1})))\right]$, and so forth.
In general, let \begin{align} &V(y_0) = \max_{x_1,\ldots,x_T}\sum_{t=1}^T f(x_t)\\ \text{s.t.} \quad & y_t = h(x_{t-1}, y_{t-1}), \\ &x_t \in X(y_t). \end{align} Because the horizon is finite, the value function is time-dependent, and the Bellman equation reads \begin{align} V_t(y) = \max_{x \in X(y)}\left[f(x) + V_{t+1}(h(x,y))\right], \end{align} with terminal condition $V_{T+1} \equiv 0$. Solve for the $V_t$ backward from $t = T$ to $t = 1$. Continuity of the state does not rule this out: it only means that each $V_t$ must either be obtained in closed form or approximated as a function (e.g. by interpolation on a state grid) rather than tabulated over a finite state set.
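When no closed form is available, the backward recursion can be carried out numerically by approximating each $V_{t+1}$ on a grid. Here is a minimal sketch for a hypothetical scalar instance; the particular $f$, $h$, and $X(y)$ below are illustrative assumptions, not part of the question:

```python
import numpy as np

# Hypothetical problem data (illustrative, not from the question):
# strongly concave stage objective, linear transition, feasible set X(y) = [0, y].
f = lambda x: -(x - 1.0) ** 2          # stage objective
h = lambda x, y: 0.9 * y + 0.1 * x     # state transition y' = h(x, y)
T = 5                                  # finite horizon

# Discretize the continuous state space; V_{t+1} is evaluated off-grid
# by linear interpolation.
y_grid = np.linspace(0.0, 10.0, 201)
n_x = 401                              # candidate decisions per state

V_next = np.zeros_like(y_grid)         # terminal condition V_{T+1} = 0
policies = []                          # approximate policies x_t^*(y), t = T, ..., 1
for t in range(T, 0, -1):              # backward induction
    V_t = np.empty_like(y_grid)
    x_t = np.empty_like(y_grid)
    for i, y in enumerate(y_grid):
        xs = np.linspace(0.0, y, n_x)  # feasible set X(y) = [0, y]
        vals = f(xs) + np.interp(h(xs, y), y_grid, V_next)
        j = np.argmax(vals)
        V_t[i], x_t[i] = vals[j], xs[j]
    policies.append(x_t)
    V_next = V_t

V_1 = V_next                           # approximate value function V_1 on y_grid
```

The continuous state is handled by representing each $V_t$ through its values on `y_grid` plus interpolation; in higher dimensions, `np.interp` would be replaced by a multivariate approximator (multilinear interpolation, splines, or a fitted parametric family), which is exactly the idea behind approximate/fitted dynamic programming.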