Optimal Control: Can I express a Macroeconomic model as a state space control system?


Both recursive macroeconomic models and control-theory problems use ideas from the calculus of variations to find unknown functions. In a macroeconomic model--as shown below--that unknown function is the solution of an optimization problem, and in state-space control of a linear system the unknown control law is likewise the solution of an optimization problem. Hence I was wondering whether there is a way to express a macroeconomic model in state-space form and solve it with control-theory numerical solvers?

Recursive macroeconomic models rely on the simple formulation below, where $u(c_{t+j})$ is the utility an individual derives from consumption level $c_{t+j}$ at time $t+j$. There is usually a budget constraint as well.

$$ \max_{c_{t+j}, i_{t+j}} \sum_{j=0}^\infty \beta^j u(c_{t+j})\\ $$ The maximization is subject to the following budget constraint, where $F(k)$ is the production function for the economy--that is, the economy's total output.

$$ \text{subject to:} \quad c_{t+j} + i_{t+j} = F(k_{t+j}) $$

Further, the capital stock evolves according to the additional constraint: $$ k_{t+j+1} = (1-\delta)k_{t+j} + i_{t+j} $$

The parameter $\delta$ is the rate of depreciation of equipment, $i$ is the amount of investment in the economy--to buy more capital, and $k$ is the amount of capital equipment in the economy to produce goods. Now there is more sophistication to this system, but this is the essential flavor of the problem, as per Greenwood and Marto (manuscript). The usual approach is to add more constraint equations that model production in the economy, and taxation, etc.

These problems may be solved analytically using the Euler equation (Economist's term for the Euler-Lagrange equation), but numerically they are solved with dynamic programming methods like value iteration, policy iteration, etc.
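To make the dynamic-programming route concrete, here is a minimal value-iteration sketch for the model above. The functional forms are my own illustrative assumptions (log utility $u(c)=\log c$ and $F(k)=k^\alpha$), not from the question; consumption is obtained by substituting the two constraints, $c = F(k) + (1-\delta)k - k'$.

```python
import numpy as np

# Illustrative parameters (assumed, not from the model in the question)
alpha, beta, delta = 0.3, 0.95, 0.1

k_grid = np.linspace(0.1, 10.0, 200)   # discretized capital grid
V = np.zeros_like(k_grid)              # initial guess for the value function

def resources(k):
    """Output plus undepreciated capital: F(k) + (1 - delta) * k."""
    return k**alpha + (1 - delta) * k

for _ in range(1000):
    # c[i, j] = consumption when today's capital is k_grid[i]
    # and tomorrow's capital is chosen as k_grid[j]
    c = resources(k_grid)[:, None] - k_grid[None, :]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    V_new = np.max(u + beta * V[None, :], axis=1)   # Bellman update
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# Optimal next-period capital k' for each current k
policy = k_grid[np.argmax(u + beta * V[None, :], axis=1)]
```

The same loop is what policy iteration and its variants accelerate; the grid-based approach scales poorly with the number of state variables, which is one motivation for looking at control-theory solvers.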

In contrast, the usual state space optimal control problems are formulated as below:

$$ \dot{x}(t) = Ax(t) + Bv(t) $$ $$ y(t) = Cx(t) + Dv(t) $$

Note that I used $v$ here instead of the more common $u$, to disambiguate between the utility function and control function. These control problems are also solved with dynamic programming methods, just as the economic problems above.
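For comparison, here is a sketch of the control-theory solver side: a discrete-time linear-quadratic regulator computed with SciPy's Riccati solver. The system matrices below are made up for illustration (a discretized double integrator), not derived from the economic model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Minimize sum_t (x'Qx + v'Rv) subject to x_{t+1} = A x_t + B v_t
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state cost
R = np.array([[1.0]])   # control cost

P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain, v = -K x

# The closed-loop dynamics x_{t+1} = (A - B K) x_t should be stable
eigs = np.linalg.eigvals(A - B @ K)
```

The Riccati equation plays the same role here that the Bellman equation plays in the economic model: for linear dynamics and quadratic objectives, dynamic programming collapses to solving one matrix equation.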

Hence, I was wondering if I can express the economics problem as a state-space control problem, with the choice variables $c$ and $i$ playing the role of the control function $v(\cdot)$.
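One observation worth making explicit: the capital accumulation constraint is already linear, so using the discrete-time analogue $x_{t+1} = Ax_t + Bv_t$ of the state-space form, a direct (if partial) identification is

$$ x_t = k_t, \qquad v_t = i_t, \qquad x_{t+1} = \underbrace{(1-\delta)}_{A}\, x_t + \underbrace{1}_{B}\, v_t. $$

The nonlinearity then lives entirely in the objective, since $c_{t+j} = F(k_{t+j}) - i_{t+j}$. A second-order (quadratic) approximation of $u(F(k) - i)$ around the steady state turns the whole problem into a standard linear-quadratic control problem, which is one common bridge between the two literatures.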

UPDATE: I updated the equations based on feedback in the comments, added a bit more detail on the model compared to the original simpler formulation, and corrected some of the subscripts.