Numerical optimization in python: solution partly wiggly and large error


Let $T = \{0,1,\ldots, N-1\}$. Consider the following problem: \begin{align} &\min_{(x(t))_{t \in T}}-\sum_{t \in T}{\beta^t \ln(x(t))}\\ \text{s.t.} \quad &\sum_{t \in T}{x(t)} = 1 \end{align} where $\beta \in (0,1)$. It's straightforward to show that the optimal solution is given by \begin{align} x^*(t) = \beta^t\frac{1-\beta}{1-\beta^N}. \end{align}
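As a quick sanity check (my addition, not part of the original question), the closed form can be verified numerically: it satisfies the constraint, and the first-order condition $\beta^t / x^*(t) = \lambda$ holds with a constant Lagrange multiplier $\lambda$ across all $t$:

```python
import numpy as np

N, b = 200, 0.95
t = np.arange(N)
x_star = b**t * (1 - b) / (1 - b**N)

# The candidate solution satisfies the sum-to-one constraint
assert abs(x_star.sum() - 1) < 1e-9

# First-order condition: beta^t / x*(t) equals the same
# Lagrange multiplier for every t
ratios = b**t / x_star
assert np.allclose(ratios, ratios[0])
```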

I tried to solve the problem numerically, but the solution behaves strangely. Here is the Python code:

from scipy.optimize import minimize
from math import log

N = 200
T = range(N)
b = 0.95

# Objective: negative discounted log-utility
obj = lambda x: -sum(b**t * log(x[t]) for t in T)
# Keep each x(t) strictly positive so log() is defined
bnds = tuple((1e-10, None) for t in T)
# Equality constraint: the x(t) must sum to one
cons = {'type': 'eq', 'fun': lambda x: sum(x[t] for t in T) - 1}
# Uniform starting point
x0 = tuple(1/N for t in T)
res = minimize(obj, x0, method='SLSQP', bounds=bnds, constraints=cons)

x_sol = res.x
x_star = [b**t * (1-b) / (1-b**N) for t in T]

Below is a plot of the true solution against the numerical one. As one can see, the fit is not very good; in particular, around $t \in [60,100]$ the numerical solution gets all wiggly.

[Plot: true solution vs. numerical solution]

Let me also show the error below. I think the error is quite large.

[Plot: error of the numerical solution]

  • How would I smooth out the solution and increase the accuracy?

There are 2 solutions below.


Try to avoid using bounds and constraints with SciPy, as they can result in fairly bad fits.

In your example, you can avoid the positivity bound on $x$ by taking the absolute value inside the objective function, and you can avoid the sum constraint by normalizing the input. If you also vectorize the code with NumPy, you end up with something similar to the code below, which produces a much nicer fit.

from scipy.optimize import minimize
import numpy as np

N = 200
b = 0.95
w = b ** np.arange(N)  # discount weights beta^t

# Normalizing abs(x) inside the objective enforces positivity and the
# sum-to-one constraint implicitly, so the solver runs unconstrained.
obj = lambda x: -np.dot(w, np.log(abs(x) / sum(abs(x))))
x0 = np.random.rand(N) + 1
options = dict()  # it is worth playing around with the options
res = minimize(obj, x0, method='SLSQP', options=options)

x_sol = abs(res.x) / sum(abs(res.x))
x_star = b ** np.arange(N) * (1 - b) / (1 - b ** N)
xx = range(len(x_star))

import matplotlib.pyplot as plt
plt.plot(xx, x_star, xx, x_sol)

[Plot: true solution vs. improved numerical solution]


With your parameters, the optimizer returns the message:

res['message']
'Iteration limit exceeded'

Increasing the iterations didn't help much, but after some trial and error, I found that setting:

bnds = tuple((1e-10, 10) for t in T)
res = minimize(obj, x0, method='SLSQP', bounds=bnds, constraints=cons, options={'ftol':1e-7, 'maxiter': 1000})

seems to work nicely. I have no idea why the optimizer is so finicky.

plt.figure()
plt.plot(res.x)
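An alternative that sidesteps the finicky constraint handling entirely (my own sketch, not from the answers above) is a softmax reparameterization: writing $x = \operatorname{softmax}(z)$ makes positivity and the sum-to-one constraint hold by construction, so a smooth unconstrained solver such as BFGS can be used, here with an analytic gradient:

```python
import numpy as np
from scipy.optimize import minimize

N, b = 200, 0.95
w = b ** np.arange(N)  # discount weights beta^t

def log_softmax(z):
    z = z - z.max()  # shift for numerical stability
    return z - np.log(np.exp(z).sum())

# With x = softmax(z), the objective -sum_t w_t * log x(t) is smooth
# and unconstrained in z
def obj(z):
    return -np.dot(w, log_softmax(z))

def grad(z):
    # gradient of the objective: -w + sum(w) * softmax(z)
    return -w + w.sum() * np.exp(log_softmax(z))

res = minimize(obj, np.zeros(N), method='BFGS', jac=grad)
x_sol = np.exp(log_softmax(res.x))

x_star = b ** np.arange(N) * (1 - b) / (1 - b ** N)
print(np.max(np.abs(x_sol - x_star)))  # maximum absolute error
```

Because the problem is convex in $z$ and the gradient is exact, the recovered `x_sol` should match the closed-form `x_star` closely for a converged run.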