I am proficient in standard dynamic programming techniques. In the standard textbook treatment, the state variable and the control variable are separate entities. However, I have seen examples in economics in which a single variable, say consumption, is both a state variable and a control variable at the same time.
This seems very strange. Can the same variable be both a control variable and a state variable simultaneously? Is that allowed in a Bellman equation?
If a state variable $x_t$ coincides with the control variable $u_t$, then you can set the state directly through the control, since $x_t = u_t$ for all $t \in \mathbb{R}_+$.
However, the problem is then no longer a dynamic control problem, because there are no dynamics: there is no transition equation linking the state at one time to the state and control at another, so decisions at different times are decoupled. It becomes a static optimization problem. For example, if the objective function is $J = \int_{0}^{t_f} f(x_t, t)\,dt$, then for all $t_1, t_2$ the values $f(x_{t_1}, t_1)$ and $f(x_{t_2}, t_2)$ are independent. In that case you can optimize $f(x_t, t)$ separately for each $t$, and the resulting curve $x_t$ is your optimal control law.
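
Here is a minimal numerical sketch of that pointwise reduction, assuming a hypothetical instantaneous cost $f(x,t) = (x - \sin t)^2$ chosen purely for illustration. Because the integrand decouples across time, the optimal path is obtained by running a separate static optimization at each $t$ on a time grid:

```python
# Sketch: when x_t = u_t, the objective J = \int_0^{t_f} f(x_t, t) dt decouples
# across time, so the optimal path comes from optimizing f(x, t) pointwise.
# The cost f below (quadratic with a time-varying target) is purely illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x, t):
    # Hypothetical instantaneous cost: track the moving target sin(t).
    return (x - np.sin(t)) ** 2

t_grid = np.linspace(0.0, 10.0, 201)      # discretized horizon [0, t_f]
x_opt = np.array([
    minimize_scalar(f, args=(t,)).x       # independent static optimization at each t
    for t in t_grid
])

# x_opt traces the optimal path x_t = u_t; for this f it should follow sin(t).
```

Note that no Bellman recursion is needed here: the value function at any time carries no information about the future, which is exactly what "no dynamics" means.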