What's the benefit of using dynamic programming (backward induction) instead of applying a global optimizer?


I'm interested in multistage optimization problems.

Take, for example, the following paper, where we have an optimization problem of the form:

$$\max \sum_{i=1}^{n+1}r^L_ix_i^L$$

such that

$$x^l_i = r^{l-1}_i x_i^{l-1} - y_i^l + z^l_i,\quad i=1,\dots,n,\ l=1,\dots,L$$

$$x^l_{n+1} = r^{l-1}_{n+1} x_{n+1}^{l-1} + \sum_{i=1}^n (1-\mu^l_i)\, y_i^l - \sum_{i=1}^n (1+\nu_i^l)\, z^l_i,\quad l=1,\dots,L$$

$$y^l_i \ge 0,\quad x^l_i \ge 0,\quad z^l_i \ge 0,\quad i=1,\dots,n,\ l=1,\dots,L$$

where $x_i^l$ is the value (in dollars) of asset $i$ at time $l$ (index $n+1$ is the cash account), $r_i^l$ is the asset's gross return, and $y^l_i$ and $z^l_i$ are the amounts of asset $i$ sold and bought. The coefficients $\mu^l_i$ and $\nu_i^l$ (proportional transaction costs) also have an economic interpretation, but they are not important for the question. Assuming everything is deterministic, we can solve this problem with an interior-point or simplex method, since it is a "simple" LP. On the other hand, I think one could also solve it via a dynamic programming approach. What would be the advantages or disadvantages of that? Does the situation change if I apply a "utility function" $f$ (concave, say, as is standard for utilities), i.e.
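For reference, here is how a toy deterministic instance of the formulation above (one risky asset plus cash, a single period, all numbers made up) could be handed to an off-the-shelf LP solver. This is only a sketch of the structure, not code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: n = 1 risky asset plus cash, L = 1 period (made-up numbers).
# Decision variables, in order: x1 (asset), x2 (cash), y (sold), z (bought).
r0 = [1.00, 1.00]    # gross returns over period 0 (r^0_i)
rL = [1.10, 1.02]    # terminal valuation returns (r^L_i)
mu, nu = 0.01, 0.01  # proportional sale / purchase costs
x0 = [0.0, 100.0]    # initial holdings: no stock, $100 of cash

# Balance constraints as equalities:
#   x1 = r0[0]*x0[0] - y + z
#   x2 = r0[1]*x0[1] + (1-mu)*y - (1+nu)*z
A_eq = np.array([[1.0, 0.0,  1.0,      -1.0],
                 [0.0, 1.0, -(1 - mu),  1 + nu]])
b_eq = np.array([r0[0] * x0[0], r0[1] * x0[1]])

# linprog minimizes, so negate the objective max rL . x
c = np.array([-rL[0], -rL[1], 0.0, 0.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(-res.fun)  # ≈ 108.91: all cash moves into the asset despite the 1% cost
```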

$$\max \sum_{i=1}^{n+1}f(r^L_ix_i^L)$$
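With a concave $f$ the problem is no longer an LP, but it is still a convex program (maximizing a concave objective over a polyhedron), so a general nonlinear solver applies. A minimal sketch on a made-up toy instance (one risky asset plus cash, one period, unit first-period returns, $f=\sqrt{\cdot}$), not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance, all numbers made up: one risky asset plus cash, one period.
# Variables v = [x1 (asset), x2 (cash), y (sold), z (bought)].
rL = np.array([1.10, 1.02])   # terminal valuation returns
mu, nu = 0.01, 0.01           # proportional sale / purchase costs
start = np.array([0.0, 100.0])  # initial holdings: no stock, $100 of cash

def neg_utility(v):
    x1, x2, y, z = v
    # concave utility f = sqrt applied to each terminal position; negated
    # because scipy minimizes
    return -(np.sqrt(rL[0] * x1) + np.sqrt(rL[1] * x2))

constraints = [
    # x1 = start[0] - y + z   (taking r^0 = 1 for brevity)
    {"type": "eq", "fun": lambda v: v[0] - (start[0] - v[2] + v[3])},
    # x2 = start[1] + (1-mu)*y - (1+nu)*z
    {"type": "eq", "fun": lambda v: v[1] - (start[1] + (1 - mu) * v[2] - (1 + nu) * v[3])},
]

res = minimize(neg_utility, x0=[50.0, 50.0, 0.0, 49.0], method="SLSQP",
               bounds=[(0, None)] * 4, constraints=constraints)
print(-res.fun)  # unlike the linear case, the money is split across both assets
```

The qualitative change is visible already here: a linear objective pushes everything into the single best asset, while a strictly concave one diversifies.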

Many thanks for the insights.

1 Answer


Usually, dynamic programming methods are faster than general LP methods whenever they apply — and to your problem they do apply. A general-purpose LP solver treats all of the roughly $3nL$ variables jointly, while backward induction exploits the stage structure and processes one period at a time.
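To make the comparison concrete, here is a sketch of what backward induction could look like for the deterministic problem. It relies on an observation that is an assumption of this sketch, not something taken from the paper: with linear dynamics and a linear objective, the value function is linear in the holdings, so it suffices to track the value-to-go of one unit of each asset. The function `backward_induction` and its indexing conventions are hypothetical:

```python
def backward_induction(r, mu, nu, L, n):
    """Value-to-go per unit of each asset, computed backwards in time.

    r[l][i]  : gross return of asset i over period l (l = 0..L); i = n is cash.
    mu[l][i] : proportional sale cost at rebalance l (l = 1..L), risky assets only.
    nu[l][i] : proportional purchase cost at rebalance l (l = 1..L).
    Returns H with H[l][i] = terminal value of one dollar of asset i held
    just after rebalancing at time l.
    """
    H = [[0.0] * (n + 1) for _ in range(L + 1)]
    H[L] = [r[L][i] for i in range(n + 1)]  # terminal weights r^L_i
    for l in range(L - 1, -1, -1):
        # best terminal value of one dollar of cash at rebalance l+1:
        # keep it as cash, or buy the best risky asset (paying 1+nu per unit)
        G = max(H[l + 1][n],
                max(H[l + 1][j] / (1 + nu[l + 1][j]) for j in range(n)))
        for i in range(n):
            # hold the dollar, or sell it (receiving 1-mu) and redeploy the cash
            H[l][i] = r[l][i] * max(H[l + 1][i], (1 - mu[l + 1][i]) * G)
        H[l][n] = r[l][n] * G  # cash compounds, then is redeployed optimally
    return H

# Optimal objective = sum(H[0][i] * x0[i]) over the initial holdings x0:
# one backward sweep, O(L*n) work, versus a general LP over O(L*n) variables.
```

On the toy data of the question (one asset, one period), this sweep reproduces the LP optimum exactly, which is the practical upside: the same answer at a fraction of the cost, at least in the deterministic linear case.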