Optimal control of a certain Poisson process

I am trying to understand this paper, in which the optimal expected value of a certain Poisson process is computed. Following the notation of op. cit., let $N_s$ ($0 \le s \le t$) be the associated counting process. The intensity of the process is given by $\lambda_s = \lambda (p_s)$, where $\lambda$ is a known decreasing function and $p_s$ is a real-valued stochastic process depending on the control $u$. We wish to find a control $u$ such that $$J_u (n, t) = \mathbf{E}_u \left[ \int_0^t p_s \, \mathrm{d} N_s \right]$$ is maximised, where $\mathbf{E}_u$ denotes the expectation operator under control $u$.

Let $J^* (n, t) = \sup \{ J_u (n, t) : u \in \mathscr{U} \}$, where $\mathscr{U}$ is the set of controls such that $N_t \le n$ a.s. In op. cit., there is a heuristic dynamic programming argument showing that $$\frac{\partial J^* (n, t)}{\partial t} = \lambda (p^* (n, t)) \left( p^* (n, t) - J^* (n, t) + J^* (n - 1, t) \right)$$ where $p^* (n, t)$ is the $p$ maximising $\lambda (p) \left( p - J^* (n, t) + J^* (n - 1, t) \right)$. This argument seems plausible enough to me, so I have no issue here.

Now suppose $\lambda (p) = a \exp (- \alpha p)$. The following solution is given in op. cit.: $$J^* (n, t) = \frac{1}{\alpha} \log \left( \sum_{k = 0}^n \frac{(\lambda (1 / \alpha) t)^k}{k !} \right)$$ $$p^* (n, t) = \frac{1}{\alpha} + J^* (n, t) - J^* (n - 1, t)$$ How do I verify that this is indeed a solution, or better yet, how might I have found this solution in the first place?
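As a sanity check, the pair $J^* (n, t) = \frac{1}{\alpha} \log \big( \sum_{k = 0}^n (\lambda (1 / \alpha) t)^k / k! \big)$, $p^* (n, t) = \frac{1}{\alpha} + J^* (n, t) - J^* (n - 1, t)$ can be tested numerically against the dynamic programming equation. Here is a rough sketch (the values of $a$ and $\alpha$ are arbitrary choices for the check, and the time derivative is approximated by central finite differences):

```python
import math

# Arbitrary parameter values for the check; a and alpha are the
# constants in lambda(p) = a * exp(-alpha * p).
a, alpha = 2.0, 1.5

def F(n, t):
    """Partial exponential sum: sum_{k <= n} (a t / e)^k / k!,
    using lambda(1/alpha) = a / e."""
    return sum((a * t / math.e) ** k / math.factorial(k) for k in range(n + 1))

def J(n, t):
    """Claimed optimal value J*(n, t) = (1/alpha) log F(n, t)."""
    return math.log(F(n, t)) / alpha

def lam(p):
    return a * math.exp(-alpha * p)

# Check dJ*/dt = lambda(p*) (p* - J*(n, t) + J*(n - 1, t)) at a few points.
for n in (1, 3, 7):
    for t in (0.5, 1.0, 4.0):
        p_star = 1 / alpha + J(n, t) - J(n - 1, t)
        rhs = lam(p_star) * (p_star - J(n, t) + J(n - 1, t))
        h = 1e-6
        lhs = (J(n, t + h) - J(n, t - h)) / (2 * h)  # central difference
        assert abs(lhs - rhs) < 1e-6, (n, t, lhs, rhs)
print("dynamic programming equation verified numerically")
```

Of course this is not a proof, but it makes the coefficient $1/\alpha$ in front of the logarithm easy to believe before attempting an analytic verification.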

The solution can be obtained by making some clever substitutions.

First, define $q^* (n, t) = p^* (n, t) - J^* (n, t) + J^* (n - 1, t)$. Then $q^* (n, t)$ is the $q$ that maximises $\lambda (q + J^* (n, t) - J^* (n - 1, t)) \, q$. But $\lambda (p) = a \exp (- \alpha p)$, so this objective equals $a \exp (-\alpha (J^* (n, t) - J^* (n - 1, t))) \cdot q \exp (-\alpha q)$; the first factor is a positive constant in $q$, so $q^* (n, t)$ is simply the $q$ that maximises $q \exp (-\alpha q)$, namely $1 / \alpha$. Therefore, $$p^* (n, t) = \frac{1}{\alpha} + J^* (n, t) - J^* (n - 1, t)$$ as claimed.
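For completeness, the first-order condition behind $q^* = 1 / \alpha$: $$\frac{\mathrm{d}}{\mathrm{d} q} \left[ a q e^{-\alpha q} \right] = a e^{-\alpha q} \left( 1 - \alpha q \right) = 0 \quad \Longrightarrow \quad q = \frac{1}{\alpha},$$ and the second derivative $a e^{-\alpha q} (\alpha^2 q - 2 \alpha)$ is negative at $q = 1 / \alpha$, so this critical point is indeed a maximum.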

Now substitute this into the differential equation. Since $p^* (n, t) - J^* (n, t) + J^* (n - 1, t) = 1 / \alpha$ and $\lambda (p^* (n, t)) = a e^{-1} \exp (-\alpha (J^* (n, t) - J^* (n - 1, t)))$, it becomes: $$\frac{\partial J^* (n, t)}{\partial t} = \frac{a}{\alpha e} \frac{\exp (\alpha J^* (n - 1, t))}{\exp (\alpha J^* (n, t))}$$

Define $F^* (n, t) = \exp (\alpha J^* (n, t))$. By the chain rule, $\frac{\partial J^* (n, t)}{\partial t} = \frac{1}{\alpha F^* (n, t)} \frac{\partial F^* (n, t)}{\partial t}$, so the equation can be rewritten as: $$\frac{1}{\alpha F^* (n, t)} \frac{\partial F^* (n, t)}{\partial t} = \frac{a}{\alpha e} \frac{F^* (n - 1, t)}{F^* (n, t)}$$ Multiplying both sides by $\alpha F^* (n, t)$ gives: $$\frac{\partial F^* (n, t)}{\partial t} = \frac{a}{e} F^* (n - 1, t)$$

The boundary conditions $J^* (0, t) = 0$ and $J^* (n, 0) = 0$ translate into $F^* (0, t) = 1$ and $F^* (n, 0) = 1$, so integrating the recursion inductively in $n$ yields: $$F^* (n, t) = \sum_{k = 0}^n \frac{(a t / e)^k}{k !}$$ Therefore, $$J^* (n, t) = \frac{1}{\alpha} \log \left( \sum_{k = 0}^n \frac{(a t / e)^k}{k !} \right)$$ which, since $\lambda (1 / \alpha) = a / e$, is exactly the form claimed.
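One can also confirm the closed form by integrating the recursion $\partial_t F^* (n, t) = \frac{a}{e} F^* (n - 1, t)$ directly, level by level. A minimal sketch (the value of $a$ and the time horizon are arbitrary choices; the integration uses the trapezoidal rule):

```python
import math

a = 2.0  # arbitrary value of the rate constant for the check

def F_closed(n, t):
    """Closed-form solution: sum_{k <= n} (a t / e)^k / k!."""
    return sum((a * t / math.e) ** k / math.factorial(k) for k in range(n + 1))

# Integrate dF(n)/dt = (a/e) F(n-1), with F(0, t) = 1 and F(n, 0) = 1,
# inductively in n, and compare against the closed form at t = T.
T, steps = 2.0, 20_000
dt = T / steps
prev = [1.0] * (steps + 1)  # F(0, t) = 1 for all t
for n in range(1, 5):
    cur = [1.0]             # F(n, 0) = 1
    for i in range(steps):
        # trapezoidal step of the integral of (a/e) F(n-1, s)
        cur.append(cur[-1] + dt * (a / math.e) * 0.5 * (prev[i] + prev[i + 1]))
    assert abs(cur[-1] - F_closed(n, T)) < 1e-6, (n, cur[-1], F_closed(n, T))
    prev = cur
print("closed form matches the integrated recursion")
```

This mirrors how the solution could have been found in the first place: solve the trivial $n = 0$ case, feed it into the equation for $n = 1$, and recognise the partial sums of the exponential series emerging.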