L1 regularization success rates (MATLAB minL1lin function)


Following the paper "Making do with less: An introduction to Compressed Sensing", I am attempting to simulate the weighing of 100 coins. There are five sets of coins; each set contains counterfeit coins (with deviating weight), the first set having one counterfeit, the second set having two, and so on.

The code below attempts to reconstruct a vector x of the weights of 100 coins, and then plot the success rate of doing so. Elements of x represent the deviation from the correct weight; they are zero everywhere except at the indices of the counterfeits, where they equal 1.

First, generate a sensing matrix Phi whose elements are randomly generated integers, either 0 or 1. Let b = Phi*x. Then, given only b and Phi, attempt to reconstruct the weights. I have used the minL1lin function to compute the minimum L1-norm solution of the linear system Cx = d.

N=100; %number of coins
weights = zeros(N, 1); %vector of weight deviations (the x above)

lb = zeros(N, 1); %lower bound constraints
ub = ones(N, 1); %upper bound constraints

row_axis=5:40; %to be used as x-axis values in the plot
results=zeros(5,36); %to store results, each row representing a different number of counterfeit coins

for num_coins=1:5
    weights(54+num_coins) = 1; %one more counterfeit each pass, num_coins in total
    a=1;
    for rows=5:40
        y=[0 0]; %[correct reconstructions, total tried]
        for i=1:1000
            phi = randi([0, 1],rows,N); %sensing matrix
            b = phi * weights;
            [X,resnormX] = minL1lin(phi,b,[],[],[],[],lb,ub,[],optimset('Display','none')); %X is the reconstructed vector
            if all(X(:) == weights(:))
                y(1) = y(1)+1; %if X == weights
            end
            y(2) = y(2)+1; %increment for every trial, this will equal 1000
        end
        results(num_coins,a)=y(1);
        a = a+1;
    end
end
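One detail about the success check in the script above: linprog returns a floating-point solution, so an exact comparison X == weights can in principle fail even for a correct reconstruction. A tolerance-based check (a sketch; the 1e-6 threshold is my own assumption, not part of the original code) would be:

```
% Count a trial as successful when the reconstruction matches the true
% deviations up to a small numerical tolerance (assumed value 1e-6).
tol = 1e-6;
if norm(X - weights, Inf) < tol
    y(1) = y(1)+1; %correct reconstruction
end
```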


function [x,varargout]=minL1lin(C,d,varargin)

% The submission minL1lin finds the minimum L1-norm solution of the linear equations C*x=d, 
% optionally under linear constraints. It is similar to the Optimization Toolbox's lsqlin except that it minimizes with 
% respect to the L1 norm by reformulating the problem as a linear program. The input/output syntax,
%  
%    [x,resnorm,residual,exitflag,output,lambda] = 
%           minL1lin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
%  
%  is the same as for lsqlin() and all the usual defaults for A,b,..,x0 from
%  lsqlin apply here. However, the "options" are those of linprog() which is used 
%  internally. And, of course, the minimization is instead over norm(C*x-d,1)
%  rather than norm(C*x-d,2).
% 
% 
if length(varargin)<8
   varargin(end+1:8)={[]}; 
end
[A,b,Aeq,beq,lb,ub,x0,options]=deal(varargin{:});
[m,n]=size(C);

f=[zeros(1,n), ones(1,m)];     %objective: minimize the sum of the slack variables t

Ax=[C,-speye(m);-C,-speye(m)]; %encode |C*x-d| <= t as two sets of inequalities
bx=[d;-d];
[~,nx]=size(Ax);

LB(1:nx)=-inf;
LB(n+1:end)=0;                 %slack variables are nonnegative
UB(1:nx)=inf;
LB(1:length(lb))=lb;           %apply the user's bounds to x
UB(1:length(ub))=ub;
    
if ~isempty(A)
    A(end,nx)=0;               %pad user constraints with zero columns for the slacks
end

if ~isempty(Aeq)
    Aeq(end,nx)=0;
end

A=[A;Ax]; b=[b;bx];
    
   
    [xz,~, varargout{3:nargout-1}]=linprog(f,A,b,Aeq,beq,LB,UB,x0,options);
    
    if ~isempty(xz)
     x=xz(1:n);
    else
      x=xz;
      varargout(1:2)={[],[]}; 
      return
    end
    
    if nargout>=2 %compute resnorm and residual if requested
       residual=C*x-d;
       resnorm=norm(residual,1);
       varargout(1:2)={resnorm,residual}; 
    end
end
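For reference, here is a minimal call to minL1lin on a toy system (my own example; it assumes the function above is on the MATLAB path):

```
% Toy problem: one "counterfeit" among four coins, three 0/1 weighings.
C = [1 0 1 0;
     0 1 1 0;
     1 1 0 1];          %sensing matrix
xtrue = [0; 0; 1; 0];   %true deviation vector
d = C * xtrue;          %observed measurements
[x, resnorm] = minL1lin(C, d, [], [], [], [], zeros(4,1), ones(4,1), [], ...
                        optimset('Display','none'));
```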

The correct plot of the number of weighings (rows) vs. success rate should look like this: [expected plot]

Currently my result looks like this: [my current plot]

Would anyone kindly explain why my results become less accurate as the number of counterfeit coins increases?