MATLAB: minimize function using x value from previous iteration


I'm trying to develop an algorithm for a proximal point method defined as:

$$ \underset{x \in \rm I\!R^n}{\arg\min} f(x) + \lambda g(x) $$

where $f(x)$ is convex and coercive, and so is $\lambda g(x)$.

The algorithm goes like this:

$$ x^0 \in \rm I\!R^n $$ $$ x^{k+1} = \underset{x \in \rm I\!R^n}{\arg\min} f(x)+\lambda_k||x-x^k||^2 $$

where $$ 0 < \lambda_k < \lambda' $$

Here $\lambda_k$ is a sequence of real numbers.

I'm new to MATLAB and quite confused by its minimization functions. I was trying something like this:

function [x, fval] = proximal_algorithm(x0)
xprev = x0;  % shared with the nested functions below, so 'global' is not needed

% Note: 'active-set' is not a valid fminunc algorithm, and the option
% name is 'MaxFunEvals' (plural), so those settings are fixed here.
options = optimset('Display', 'final', 'MaxFunEvals', 10000, ...
    'MaxIter', 10000, 'TolFun', 1e-8, 'TolX', 1e-8, 'OutputFcn', @myoutput);
[x, fval] = fminunc(@objfun, x0, options);

function stop = myoutput(x, optimValues, state)
    stop = false;

    if isequal(state, 'iter')
        fprintf('x%d = %f xprev = %f\n', optimValues.iteration, x, xprev);
        xprev = x;  % this changes the objective while fminunc is still running
    end
end

function f = objfun(x)
    f = (x - 1)^2 + norm(x - xprev)^2;
end

end

I don't really understand how fminunc works; I went through the documentation but couldn't find anything similar to what I'm trying to do.

I even tried writing a for loop and calling fminunc or fminsearch on each iteration, but neither needs an outer loop, since they already iterate internally.

How can I call a minimization function only once?

Best answer:

1) For your specific objective function "objfun", the minimizer can be found algebraically in one step: $x^* = \frac{1}{2}(1 + x_{prev})$.
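To see where that comes from, set the derivative of the (one-dimensional) objective to zero:

$$ \frac{d}{dx}\Big[(x-1)^2 + (x - x_{prev})^2\Big] = 2(x-1) + 2(x - x_{prev}) = 0 \;\Rightarrow\; x^* = \tfrac{1}{2}(1 + x_{prev}). $$

(This assumes a unit weight on the proximal term, as in "objfun"; with a general $\lambda_k$ the same calculation gives $x^* = \frac{1 + \lambda_k x_{prev}}{1 + \lambda_k}$.)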

2) "fminunc" is a quasi-Newton method (by default) or a trust-region method for finding local minima. It works best when you supply the gradient and Hessian of your function. If you choose this route, I suggest you write:

function [f, g, h] = objfun(x)
    f = (x - 1)^2 + norm(x - xprev)^2;  % it seems x is 1D?
    g = 2*(x - 1) + 2*(x - xprev);      % gradient
    h = 2*eye(numel(x));                % Hessian: 2 * identity
end
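Note that fminunc ignores the extra return values unless you switch the corresponding options on. A sketch of how to enable them (the exact option names depend on your MATLAB version; check your local documentation):

```matlab
% Older optimset interface:
options = optimset('GradObj', 'on', 'Hessian', 'on');

% Newer optimoptions interface (the trust-region algorithm is the one
% that can use a user-supplied Hessian):
% options = optimoptions('fminunc', ...
%     'Algorithm', 'trust-region', ...
%     'SpecifyObjectiveGradient', true, ...
%     'HessianFcn', 'objective');

[x, fval] = fminunc(@objfun, x0, options);
```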

Otherwise, for low-dimensional problems, fminsearch is a solid derivative-free method based on the Nelder-Mead simplex algorithm.

3) To implement your algorithm as you have stated it, you must use a loop. Suppose you have a sequence $\lambda_1 < \lambda_2 < \dots < \lambda_n = \lambda.$ Your algorithm is:

Define $x^0.$

For $k=1,2,\dots,n:$

$\quad$ Define $F_k(x)= f(x) + \lambda_k g(x) $

$\quad$ Optimize $F_k(x)$ using

$\qquad \ \ \bullet$ "some" unconstrained minimization algorithm

$\qquad \ \ \bullet$ initial guess $x^{k-1}$
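The loop above can be sketched in MATLAB as follows. Each outer iteration fixes $x^{k-1}$ and calls fminunc once on that subproblem, so the objective never changes while the inner solver runs (the choice of f, the constant lambda, and the stopping test are illustrative assumptions, not part of the original question):

```matlab
f = @(x) sum((x - 1).^2);   % example convex, coercive f; replace with yours
lambda = 0.5;               % lambda_k, held constant here for simplicity
x = 0;                      % x^0
maxIter = 50;
tol = 1e-8;
options = optimset('Display', 'off', 'TolFun', 1e-10, 'TolX', 1e-10);

for k = 1:maxIter
    xprev = x;
    % The subproblem captures the *current* xprev; fminunc only ever
    % sees this fixed objective, which is the key difference from
    % updating xprev inside an OutputFcn.
    subproblem = @(x) f(x) + lambda * norm(x - xprev)^2;
    x = fminunc(subproblem, xprev, options);
    if norm(x - xprev) < tol
        break;   % proximal iterates have converged
    end
end
```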