Let $T = \{0,1,\ldots, N-1\}$. Consider the following problem: \begin{align} &\min_{(x(t))_{t \in T}}-\sum_{t \in T}{\beta^t \ln(x(t))}\\ \text{s.t.} \quad &\sum_{t \in T}{x(t)} = 1 \end{align} where $\beta \in (0,1)$. It's straightforward to show that the optimal solution is given by \begin{align} x^*(t) = \beta^t\frac{1-\beta}{1-\beta^N}. \end{align}
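For completeness, the closed form follows from the first-order conditions of the Lagrangian:
\begin{align}
\mathcal{L} &= -\sum_{t \in T}{\beta^t \ln(x(t))} + \lambda\left(\sum_{t \in T}{x(t)} - 1\right),\\
\frac{\partial \mathcal{L}}{\partial x(t)} &= -\frac{\beta^t}{x(t)} + \lambda = 0 \;\Rightarrow\; x(t) = \frac{\beta^t}{\lambda},
\end{align}
and the constraint $\sum_{t \in T}{x(t)} = 1$ gives $\lambda = \sum_{t \in T}{\beta^t} = \frac{1-\beta^N}{1-\beta}$, which yields the expression above.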
I tried to solve the problem numerically, but the solution I get behaves strangely. Here is the Python code:
from scipy.optimize import minimize
from math import log
N = 200
T = range(N)
b = 0.95
# objective: negative discounted log utility
obj = lambda x: -sum(b**t * log(x[t]) for t in T)
# keep every x(t) strictly positive so the log is defined
bnds = tuple((1e-10, None) for t in T)
# equality constraint: the x(t) must sum to one
cons = {'type': 'eq', 'fun': lambda x: sum(x[t] for t in T) - 1}
x0 = tuple(1/N for t in T)  # uniform initial guess
res = minimize(obj, x0, method='SLSQP', bounds=bnds, constraints=cons)
x_sol = res.x
x_star = [b**t * (1-b) / (1-b**N) for t in T]  # closed-form solution
Below you find a picture of the true solution and the numerical one. As one can see, the fit is not very good; especially around $t \in [60,100]$ the numerical solution becomes quite wiggly.
Let me also show the error below; I think it is quite bad.
- How would I smooth out the solution and increase the accuracy?



Try to avoid using bounds and constraints with scipy, as they can result in fairly bad fits.
In your example you can avoid the bounds $x \ge 0$ by taking the absolute value inside your objective function. You can avoid the sum constraint by normalizing your input. If you additionally vectorize your code with numpy, you should end up with something similar to the code below, which produces a much nicer fit.
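A minimal sketch along those lines (the choice of BFGS is an assumption; any unconstrained method should do, since the reparametrization removes both the bounds and the equality constraint):

```python
import numpy as np
from scipy.optimize import minimize

N = 200
b = 0.95
t = np.arange(N)
w = b**t                      # discount factors beta^t

def obj(z):
    # positivity via absolute value, sum-to-one via normalization,
    # so the problem becomes unconstrained in z
    x = np.abs(z)
    x = x / x.sum()
    return -w @ np.log(x)

z0 = np.full(N, 1.0 / N)      # uniform starting point
res = minimize(obj, z0, method='BFGS')

# map the unconstrained solution back onto the simplex
x_sol = np.abs(res.x) / np.abs(res.x).sum()
x_star = b**t * (1 - b) / (1 - b**N)   # closed-form optimum
print(np.max(np.abs(x_sol - x_star)))
```

Note that the objective is invariant to the overall scale and the signs of $z$, so the recovered $x$ is unique even though $z$ is not.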