Stochastic control problems in infinite horizon


Consider the following maximization problem, with wealth dynamics given by $$\max \mathbb{E}\left[\int_0^{\infty} \frac{1}{\gamma} e^{-\beta t} c_t^\gamma \mathrm{d} t\right]$$ $$\mathrm{d} X_t=X_t\left(r+\pi_t(\alpha-r)\right) \mathrm{d} t-c_t \mathrm{d} t+\pi_t X_t \sigma \mathrm{d} W_t, \quad X_0=x.$$ The Hamilton–Jacobi–Bellman (HJB) equation is given by

$$\sup _{c \geq 0, \pi}\left\{\frac{1}{\gamma} c^\gamma-\beta V(x)+x(r+\pi(\alpha-r)) V^{\prime}(x)-c V^{\prime}(x)+\frac{1}{2} \pi^2 x^2 \sigma^2 V^{\prime \prime}(x)\right\}=0.$$
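(For context, the pointwise supremum can be computed from the first-order conditions in $c$ and $\pi$, assuming the value function is strictly concave, i.e. $V''(x)<0$:

$$c^* = \left(V'(x)\right)^{\frac{1}{\gamma-1}}, \qquad \pi^* = -\frac{(\alpha-r)\, V'(x)}{\sigma^2 x\, V''(x)}.$$

Substituting these back into the HJB equation leaves a nonlinear equation in $V$ alone.)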

Is there a book or paper discussing this kind of problem? I am confused about whether the dynamic programming principle holds here, and why there is no time derivative term in the HJB equation. Thank you very much.

2 Answers
There are many books discussing stochastic control. The book by Karatzas and Shreve, for example, contains a short section on this, as does Oksendal's Stochastic Differential Equations. There are also many books devoted specifically to stochastic control.

For your specific questions: yes, the dynamic programming principle does hold for this problem. There is no time derivative because the horizon is infinite. In a finite horizon problem, the value function typically depends on the time remaining before the horizon; in an infinite horizon problem, the time remaining is always infinite, so the value function is stationary. This usually simplifies matters, because the value function satisfies an ODE (as in your post) rather than the PDE that arises in finite horizon problems.
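To make the ODE reduction concrete, one can try the standard CRRA ansatz $V(x) = A x^\gamma$, which collapses the HJB equation to an algebraic equation for the constant $A$. Here is a minimal numerical sanity check of that reduction, using illustrative parameter values that are not taken from the question:

```python
import math

# Illustrative parameters (assumed for this sketch, not from the question);
# we need 0 < gamma < 1 and a positive consumption-to-wealth ratio below.
r, alpha, sigma, beta, gamma = 0.02, 0.07, 0.2, 0.1, 0.5

# Candidate optimal portfolio fraction (the Merton ratio), constant in x
pi_star = (alpha - r) / (sigma**2 * (1 - gamma))

# Candidate consumption-to-wealth ratio obtained by substituting
# V(x) = A x^gamma into the HJB equation and solving the resulting
# algebraic equation
c_bar = (beta - gamma * r
         - gamma * (alpha - r)**2 / (2 * sigma**2 * (1 - gamma))) / (1 - gamma)

# Value-function coefficient: the FOC c* = (V')^(1/(gamma-1)) gives
# c_bar = (A*gamma)^(1/(gamma-1)), hence A = c_bar^(gamma-1) / gamma
A = c_bar**(gamma - 1) / gamma

def hjb(x, c, pi):
    """Evaluate the expression inside the sup of the HJB equation."""
    V = A * x**gamma
    Vp = A * gamma * x**(gamma - 1)
    Vpp = A * gamma * (gamma - 1) * x**(gamma - 2)
    return (c**gamma / gamma - beta * V
            + x * (r + pi * (alpha - r)) * Vp
            - c * Vp
            + 0.5 * pi**2 * x**2 * sigma**2 * Vpp)

x = 3.0
print(hjb(x, c_bar * x, pi_star))   # approximately 0 (floating-point error)
print(hjb(x, 0.5 * c_bar * x, pi_star))  # negative: suboptimal consumption
```

At the candidate optimizers the HJB expression vanishes, and perturbing $c$ or $\pi$ makes it strictly negative, consistent with the supremum being attained there.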


A detailed account of the infinite horizon problem can be found in Controlled Markov Processes and Viscosity Solutions, by Fleming and Soner.