Why is Hildreth's QP algorithm so simple?


I have Hildreth's quadratic programming (QP) algorithm and I find it very simple. So simple, in fact, that it becomes suspect. Why is this algorithm so simple, while other QP algorithms are complex and heavy, yet still return the same result as Hildreth's QP algorithm?

function [x, solution] = quadprog(Q, c, A, b)
      % Assume that the solution is true
      solution = true;

      % Set number of iterations
      number_of_iterations = 255;

      % Same as in C code
      FLT_EPSILON = 1.19209290e-07;

      % Unconstrained solution
      x = -linsolve(Q, c);

      % Constraints difference
      K = b - A*x;

      % Check constraint violation
      if(sum(K <= 0) == 0)
        return; % No violation
      end

      % Create P
      P = linsolve(Q, A');

      % Create H = A*Q*A'
      H = A*P;

      % Solve lambda from H*lambda = -K, where lambda >= 0
      [m, n] = size(K);
      lambda = zeros(m, n);
      for km = 1:number_of_iterations
        lambda_p = lambda;

        % Use Gauss Seidel
        for i = 1:m
          w = -1.0/H(i,i)*(K(i) + H(i,:)*lambda - H(i,i)*lambda(i));
          lambda(i) = max(0, w);
        end

        % Check if the minimum convergence has been reached
        w = (lambda - lambda_p)'*(lambda - lambda_p);
        if (w < FLT_EPSILON)
          break;
        end

        % Check if the maximum number of iterations has been reached
        if(km == number_of_iterations)
          solution = false;
          return;
        end
      end

      % Find the solution: x = -inv(Q)*c - inv(Q)*A'*lambda
      x = x - P*lambda;
    end
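For comparison, the Octave code above translates almost line for line into Python. This is a sketch using NumPy (the function name `hildreth_qp` is my own choice); it mirrors the structure of the original: unconstrained solve, violation check, then a projected Gauss-Seidel iteration on the dual variables.

```python
import numpy as np

def hildreth_qp(Q, c, A, b, max_iter=255, tol=1.19209290e-07):
    """Hildreth's method: minimize 0.5*x'*Q*x + c'*x subject to A*x <= b."""
    x = -np.linalg.solve(Q, c)        # unconstrained minimizer
    K = b - A @ x                     # constraint slack at the unconstrained optimum
    if np.all(K > 0):
        return x, True                # no constraint violated, done
    P = np.linalg.solve(Q, A.T)       # inv(Q)*A'
    H = A @ P                         # dual Hessian A*inv(Q)*A'
    lam = np.zeros(len(b))
    for _ in range(max_iter):
        lam_prev = lam.copy()
        for i in range(len(b)):       # projected Gauss-Seidel sweep on the dual
            w = -(K[i] + H[i] @ lam - H[i, i] * lam[i]) / H[i, i]
            lam[i] = max(0.0, w)
        d = lam - lam_prev
        if d @ d < tol:               # squared step below tolerance: converged
            return x - P @ lam, True
    return x - P @ lam, False         # iteration budget exhausted
```

On the example from the question (`Q = [1.2 -5.1; -5.1 26]`, etc.) this returns the same point, x ≈ (1.2842, 0.4315).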

If I try an example:

% Objective function
Q = [1.200 -5.1000; -5.1000 26.0000];
c = [-2; -6];

% Constraints
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];

% Hildreth's method
x = quadprog(Q, c, A, b)

% Internal QP-solver
[x, ~, info] = qp([], Q, c, [], [], [], [], [], A, b)

And the result was

  x =

   1.2842
   0.4315

  x =

   1.2842
   0.4315

  info =

  scalar structure containing the fields:

    solveiter = 2
    info = 0

Question:

Hildreth's QP algorithm gives the same result as other QP algorithms. So why should I use, for example, the KKT conditions and other advanced machinery, if Hildreth's QP algorithm does the job?

Answer:
If $Q$ is positive definite, one approach (which is even simpler than Hildreth's algorithm) is to transform the problem into a form that can be solved using Matlab's lsqnonneg() function.

Given the QP problem $$\min_x\;\tfrac 12x^TQx + c^Tx \quad{\rm s.t.}\quad Ax \le b$$ calculate the Cholesky factorization $Q=L^TL$ and the following variables $$f = b + AQ^{-1}c,\qquad e_1=\begin{bmatrix}1\\0\end{bmatrix},\qquad M = AL^{-1},\qquad P = \begin{bmatrix}-f^T \\ -M^T\end{bmatrix}$$ where the $0$ in $e_1$ is the zero vector of length $n$. Formulate the Non-Negative Least Squares problem $$\min_w\;\|Pw-e_1\|^2_2 \quad{\rm s.t.}\quad w \ge 0$$ The solution of which is $\;w={\rm lsqnonneg}(P,e_1)$

The solution of the original QP problem is $$x = -Q^{-1}\left(c+\frac{A^Tw}{1+w^Tf}\right)$$ However, if $Pw=e_1$ (i.e. the residual is zero), then the problem is infeasible.

$\sf NB\!:\:$ The above method was described in this paper by Alberto Bemporad.
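The transformation is easy to try in Python. Below is a sketch assuming SciPy's `scipy.optimize.nnls` is available as the NNLS solver; the function name `qp_via_nnls` and the infeasibility tolerance `1e-9` are my own choices, not part of the original method.

```python
import numpy as np
from scipy.optimize import nnls

def qp_via_nnls(Q, c, A, b):
    """Solve min 0.5*x'*Q*x + c'*x s.t. A*x <= b via an NNLS reformulation.
    Requires Q positive definite."""
    n = Q.shape[0]
    Lc = np.linalg.cholesky(Q)            # Q = Lc*Lc', so L = Lc' gives Q = L'L
    f = b + A @ np.linalg.solve(Q, c)     # f = b + A*inv(Q)*c
    M = np.linalg.solve(Lc, A.T).T        # M = A*inv(L), so M*M' = A*inv(Q)*A'
    e1 = np.zeros(n + 1)
    e1[0] = 1.0                           # e1 = [1; 0]
    P = np.vstack([-f[np.newaxis, :], -M.T])
    w, resid = nnls(P, e1)                # min ||P*w - e1||, w >= 0
    if resid < 1e-9:                      # P*w ~= e1  =>  QP is infeasible
        raise ValueError("QP is infeasible")
    return -np.linalg.solve(Q, c + A.T @ w / (1.0 + w @ f))
```

On the example from the question it recovers the same minimizer, x ≈ (1.2842, 0.4315).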

Update

Here is the pseudocode for the Sequential Coordinate-wise Algorithm.

Initialize $$z = -P^Te_1,\qquad w = 0$$ Iterate $$\begin{aligned} &{\rm for\;} i = 1:i_{\max} \\ &\qquad y = w \\ &\qquad{\rm for\;} k = 1:n \\ &\qquad\qquad f = P^TP\varepsilon_k \\ &\qquad\qquad \alpha = \min\left(w_k,\;\frac{z_k}{f_k}\right) \\ &\qquad\qquad w = w - \alpha\varepsilon_k \\ &\qquad\qquad z = z - \alpha f \\ &\qquad{\rm end} \\ &\qquad{\rm if\;} \|w-y\|_\infty\le 10^{-12}\;{\rm return\;}w \\ &{\rm end} \end{aligned}$$ where $\varepsilon_k$ denotes the $k^{th}$ Euclidean basis vector and $f_k, z_k$ are the $k^{th}$ components of their respective vectors. The presence of $i_{\max}$ prevents infinite looping.
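A direct NumPy transcription of this pseudocode might look as follows (the helper name `nnls_sca` is my own). Note that $z$ tracks the gradient $P^T(Pw - e_1)$ of the least-squares objective, and precomputing the Gram matrix $P^TP$ replaces the repeated product $P^TP\varepsilon_k$ with a column lookup.

```python
import numpy as np

def nnls_sca(P, target, max_iter=1000, tol=1e-12):
    """Sequential coordinate-wise algorithm for min ||P*w - target||^2, w >= 0."""
    G = P.T @ P                  # Gram matrix; G[:, k] is f = P'*P*e_k
    z = -P.T @ target            # gradient of the objective at w = 0
    w = np.zeros(P.shape[1])
    for _ in range(max_iter):
        y = w.copy()
        for k in range(len(w)):
            alpha = min(w[k], z[k] / G[k, k])   # clamped coordinate step
            w[k] -= alpha
            z -= alpha * G[:, k]                # keep z equal to the gradient
        if np.max(np.abs(w - y)) <= tol:        # infinity-norm stopping test
            break
    return w
```

Each inner step is an exact coordinate minimization projected onto $w_k \ge 0$: the unconstrained update would be $w_k - z_k/G_{kk}$, and the `min` clamps it at zero.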