A convex optimization problem over one vector and one variable


I have the following problem, $$ \min_{\mathbf{w},v} \sum_{j=1}^{m}\log(1+\exp(-b_{j}(a_j^T\mathbf{w} + v))) + (\rho/2)\mathbf{w}^T(\mathbf{w} - \mathbf{k}) + (\rho/2)v(v+x) $$

All the data ($a_j$, $b_j$, $\mathbf{k}$, $x$, $\rho$) are known; the only unknowns are $\mathbf{w}$ and $v$.
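For concreteness, the objective can be written out as a plain function. This is a minimal Python sketch on made-up data; the names `A`, `b`, `k`, `x`, `rho` mirror the symbols above, and all values are synthetic:

```python
import math
import random

random.seed(0)

m, n = 3, 5  # m data points, weight vector w of length n (made-up sizes)
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]  # rows are a_j^T
b = [random.choice([-1.0, 1.0]) for _ in range(m)]              # labels b_j
k = [random.gauss(0, 1) for _ in range(n)]
x = random.gauss(0, 1)
rho = 1.0

def objective(w, v):
    # sum_j log(1 + exp(-b_j (a_j^T w + v)))
    logistic = sum(
        math.log(1.0 + math.exp(-b[j] * (sum(A[j][i] * w[i] for i in range(n)) + v)))
        for j in range(m)
    )
    # (rho/2) w^T (w - k) + (rho/2) v (v + x)
    quad = (rho / 2) * sum(w[i] * (w[i] - k[i]) for i in range(n)) \
         + (rho / 2) * v * (v + x)
    return logistic + quad

print(objective([0.0] * n, 0.0))  # → 2.0794… (= m*log(2); the quadratic terms vanish at the origin)
```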

My questions are:

  1. Is there an efficient algorithm to solve this?
  2. Have you seen this, or a very similar, optimization problem before?

Yes, this is a fairly standard model, a special case of exponential cone programming. You can solve it with any general nonlinear solver, or with a solver developed specifically for the exponential cone case.
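The objective is smooth and convex (the logistic terms are convex, and for ρ ≥ 0 the quadratic terms are too, since the linear parts do not affect curvature), so even plain gradient descent works. A minimal self-contained Python sketch on made-up data; the step size and iteration count are arbitrary choices, not tuned values:

```python
import math
import random

random.seed(1)

m, n = 30, 5  # synthetic problem sizes
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
b = [random.choice([-1.0, 1.0]) for _ in range(m)]
k = [random.gauss(0, 1) for _ in range(n)]
x = random.gauss(0, 1)
rho = 1.0

def f(w, v):
    # the objective from the question
    s = sum(math.log(1.0 + math.exp(-b[j] * (sum(A[j][i] * w[i] for i in range(n)) + v)))
            for j in range(m))
    return s + (rho / 2) * sum(w[i] * (w[i] - k[i]) for i in range(n)) \
             + (rho / 2) * v * (v + x)

def grad(w, v):
    # gradient of the quadratic part: rho*w - (rho/2)*k and rho*v + (rho/2)*x
    gw = [rho * w[i] - (rho / 2) * k[i] for i in range(n)]
    gv = rho * v + (rho / 2) * x
    for j in range(m):
        z = -b[j] * (sum(A[j][i] * w[i] for i in range(n)) + v)
        sig = 0.5 * (1.0 + math.tanh(z / 2))  # sigmoid(z) = d/dz log(1 + exp(z)), overflow-safe form
        for i in range(n):
            gw[i] += sig * (-b[j]) * A[j][i]
        gv += sig * (-b[j])
    return gw, gv

w, v = [0.0] * n, 0.0
step = 0.01
for _ in range(2000):
    gw, gv = grad(w, v)
    w = [w[i] - step * gw[i] for i in range(n)]
    v -= step * gv

print(f(w, v), "<", f([0.0] * n, 0.0))
```

In practice a dedicated solver (or the conic formulation below) will of course be far more efficient and reliable than fixed-step gradient descent; the sketch only demonstrates that the problem is an unconstrained smooth convex minimization.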

Here is some experimentation in the MATLAB toolbox YALMIP (developed by me), using both a straightforward general nonlinear approach and an exponential cone approach with the exponential cone solver ECOS.

n = 5; m = 3;       % n data points, weight vector w of length m
A = randn(n,m);     % rows of A play the role of the a_j^T
b = rand(n,1);      % the b_j (random values, just for the demo)
w = sdpvar(m,1);
v = sdpvar(1);

% Straight on, just solve as a general nonlinear program
% (the regularization is written as w'*w + v^2 here; the extra linear
% terms in the question do not change the structure of the problem)
objective = sum(log(1 + exp(-b.*(A*w + v)))) + w'*w + v^2;
optimize([],objective)

% Use explicit logsumexp operator, improves performance in general nonlinear 
% solver as derivative callbacks are optimized, and it allows us to use 
% solvers developed particularly for this problem class

options = sdpsettings('solver','ecos');
objective = sum(logsumexp([zeros(n,1) -b.*(A*w + v)]')) + w'*w + v^2;
optimize([],objective,options)

% Compare with standard nonlinear solver on logsumexp form
options = sdpsettings('solver','fmincon');
optimize([],objective,options)
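The reformulation in the second variant relies on the identity log(1 + exp(z)) = logsumexp([0, z]), which is exactly what the `logsumexp([zeros(n,1) -b.*(A*w + v)]')` line encodes, and which is what makes the exponential cone modeling possible. A quick YALMIP-independent Python check of the identity (the `logsumexp` helper here is written from scratch, not a library call):

```python
import math

def logsumexp(zs):
    # numerically stable log(sum_i exp(z_i)): shift by the maximum first
    zmax = max(zs)
    return zmax + math.log(sum(math.exp(z - zmax) for z in zs))

for z in (-30.0, -1.0, 0.0, 2.5, 30.0):
    lhs = math.log1p(math.exp(z))   # log(1 + exp(z)), the softplus terms above
    rhs = logsumexp([0.0, z])       # logsumexp over the pair [0, z]
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))

print("log(1 + exp(z)) == logsumexp([0, z]) on all sample points")
```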