Constrained maximization of a function and unsuccessful attempts in Python 3


Firstly, I will present the problem.

Maximize a vector function subject to constraints:

We have a function of the $n$-dimensional vectors $\phi$ and $p$ and a scalar $\theta$. For a given $n$ (or, if that is not possible, for $n=2$), find real values that globally maximize this function.

$$ f(\phi, p, \theta) = 1-\sum_{i=1}^{n} w_i \left(-\lambda_i \log_2\lambda_i - (1-\lambda_i) \log_2(1-\lambda_i)\right) $$

$$ w_i = (1+\cos\phi_i \cos\theta)p_i $$

$$ \lambda_i = \frac{ 1 + \cos(\phi_i - \theta) }{2(1+\cos\phi_i \cos\theta)} $$

Constraints:

$$ \sum_{i=1}^{n} p_i =1 $$

$$ \forall i\ldotp p_i >0 $$

$$ \sum_{i=1}^{n} w_i \lambda_i =\frac{1}{2} $$

$$ \forall i\ldotp \lambda_i >0 $$

Of course, we must also respect the constraint

$$ \forall i\ldotp \cos\phi_i \cos\theta \neq -1 $$

to prevent the denominator from being 0.
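As a quick sanity check (with hypothetical sample values, not part of the original problem), note that $w_i \lambda_i$ simplifies to $p_i(1+\cos(\phi_i-\theta))/2$, so together with $\sum_i p_i = 1$ the third constraint is equivalent to $\sum_i p_i \cos(\phi_i - \theta) = 0$. A small script confirms the identity:

```python
import numpy as np

# Hypothetical sample point for n = 2, used only to exercise the definitions.
phi = np.array([0.3, -0.3])
p = np.array([0.5, 0.5])          # satisfies sum(p) = 1 and p_i > 0
theta = 0.1

w = (1 + np.cos(phi) * np.cos(theta)) * p
lam = (1 + np.cos(phi - theta)) / (2 * (1 + np.cos(phi) * np.cos(theta)))

# w_i * lam_i reduces to p_i * (1 + cos(phi_i - theta)) / 2, so with
# sum(p) = 1 the constraint sum(w*lam) = 1/2 holds iff sum(p*cos(phi-theta)) = 0.
assert np.allclose(w * lam, p * (1 + np.cos(phi - theta)) / 2)
print((w * lam).sum())   # ~0.975 here, so this particular point is infeasible
```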

My attempt

I tried to solve this problem with the scipy.optimize package. I had to pass an initial guess, and the value returned by the algorithm depended on that guess. So I took random initial guesses and kept the best value found over more and more of them.

import numpy as np
from scipy.optimize import minimize

def J(data):
    l = len(data)
    # data = [phi_1..phi_n, p_1..p_n, theta], so l = 2n + 1 must be odd
    if l % 2 == 0 or l < 3:
        raise ValueError('Wrong length of arguments!')

    phi = np.asarray(data[0:l//2])
    p = np.asarray(data[l//2:l-1])
    theta = data[l-1]

    w = (1 + np.cos(phi)*np.cos(theta))*p
    lam = (1+np.cos(phi-theta))/(2*(1+np.cos(phi)*np.cos(theta)))

    # return the negative so that minimizing J maximizes f
    return -(1 - np.sum(w * (-lam*np.log2(lam) - (1-lam)*np.log2(1-lam))))

def probability_constraint(x):
    return np.sum(x[len(x)//2:-1]) - 1

def lambda_w_constraint(data):
    l = len(data)
    phi = data[0:l//2]
    p = data[l//2:-1]
    theta = data[l-1]

    phi = np.array(phi)    
    p = np.array(p)

    w = (1 + np.cos(phi)*np.cos(theta))*p
    lam = (1+np.cos(phi-theta))/(2*(1+np.cos(phi)*np.cos(theta)))    
    return sum(w*lam)-1/2

def lambda_greater_than_zero(data):
    l = len(data)
    phi = data[0:l//2]
    theta = data[l-1]

    lam = (1+np.cos(phi-theta))/(2*(1+np.cos(phi)*np.cos(theta)))
    return lam

def probability_positive(x):
    return x

cons = ({'type': 'ineq', 'fun': lambda_greater_than_zero},
        {'type': 'eq',   'fun': probability_constraint},
        {'type': 'eq',   'fun': lambda_w_constraint},
        {'type': 'ineq', 'fun': probability_positive})

def maximize(numiter):
    maxf = -np.inf
    maxx = []

    for i in range(numiter):
        x0 = np.random.rand(5)*10        # 5 = 2n + 1 with n = 2
        res = minimize(J, x0, constraints=cons)
        f = -res.fun                     # J returns -f, so undo the sign
        if res.success and f > maxf:
            maxf = f
            maxx = res.x

    return maxf, maxx

maximize(10)

maximize(100)

I know that this method is very naive and incorrect.
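For reference, here is a sketch of a slightly more defensive multistart (my repackaging of the same model, not an authoritative solution): it samples the angles in $[-\pi, \pi]$, clips $\lambda$ away from $0$ and $1$ before taking logarithms, keeps only runs that SLSQP reports as converged, and compares runs on $f = -J$ so the loop really keeps the maximum:

```python
import numpy as np
from scipy.optimize import minimize

def unpack(x):
    l = len(x)
    return x[:l//2], x[l//2:-1], x[-1]           # phi, p, theta

def J(x):
    phi, p, theta = unpack(x)
    w = (1 + np.cos(phi)*np.cos(theta)) * p
    lam = (1 + np.cos(phi - theta)) / (2*(1 + np.cos(phi)*np.cos(theta)))
    lam = np.clip(lam, 1e-12, 1 - 1e-12)         # keep log2 finite at 0 and 1
    h = -lam*np.log2(lam) - (1 - lam)*np.log2(1 - lam)
    return -(1 - np.sum(w * h))                  # minimize -f

def lam_of(x):
    phi, p, theta = unpack(x)
    return (1 + np.cos(phi - theta)) / (2*(1 + np.cos(phi)*np.cos(theta)))

cons = ({'type': 'eq',   'fun': lambda x: np.sum(unpack(x)[1]) - 1},
        {'type': 'eq',   'fun': lambda x: np.sum(
            (1 + np.cos(unpack(x)[0])*np.cos(unpack(x)[2]))
            * unpack(x)[1] * lam_of(x)) - 0.5},
        {'type': 'ineq', 'fun': lam_of},
        {'type': 'ineq', 'fun': lambda x: unpack(x)[1]})

def multistart(n=2, runs=30, seed=0):
    rng = np.random.default_rng(seed)
    best_f, best_x = -np.inf, None
    for _ in range(runs):
        x0 = rng.uniform(-np.pi, np.pi, 2*n + 1)
        res = minimize(J, x0, method='SLSQP', constraints=cons)
        if res.success and -res.fun > best_f:    # track the max of f = -J
            best_f, best_x = -res.fun, res.x
    return best_f, best_x

f_star, x_star = multistart()
print(f_star)
```

This is still only multistart, so it gives no global guarantee; it just avoids silently accepting failed runs.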

So my question is: do you know how to solve this problem? Python 3 is preferred, but other languages (MATLAB?) also work for me.

Best answer:

I tried it in MATLAB with the modelling toolbox YALMIP (disclaimer: developed by me).

theta = sdpvar(1);
n = 5;
p = sdpvar(n,1);
f = sdpvar(n,1);
w = sdpvar(n,1);
l = sdpvar(n,1);

Model = [w == (1 + cos(f)*cos(theta)).*p,
         l == (1 + cos(f-theta))./(2 + 2*cos(f)*cos(theta)),
         sum(p) == 1, p >= 0, sum(w.*l) == .5, l >= 0]
Model = [Model, -pi <= theta <= pi, -pi <= f <= pi];
objective = 0;
for i = 1:n
   objective = objective+w(i)*(entropy(l(i)) + entropy(1-l(i)));     
end
optimize(Model, objective)
value(f)
value(theta)

Both solvers fmincon and ipopt returned reasonable solutions. I use the entropy operator $\mathrm{entropy}(x) = -x\log(x)$, as YALMIP can exploit some of its properties when computing the derivatives, and it avoids some issues around $0$.
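The same trick is available on the Python side: `scipy.special.entr` computes $-x\log x$ elementwise (natural log, with `entr(0) = 0`), which sidesteps the NaN that `np.log2(0)` would otherwise produce at the endpoints. A minimal illustration:

```python
import numpy as np
from scipy.special import entr

lam = np.array([0.0, 0.25, 0.5, 1.0])

# Binary entropy in nats via entr; entr(0) = 0, so the endpoints are safe.
h_nats = entr(lam) + entr(1 - lam)
h_bits = h_nats / np.log(2)   # convert to the log2 form used in the objective

print(h_bits)                 # maximum of 1 bit at lam = 0.5, 0 at the endpoints
```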

You can also test the global solver available in YALMIP:

optimize(Model, objective, sdpsettings('solver','bmibnb'))

At least it appears to find the global solution for $n=2$.