Solving the weighted $L_2$ minimization problem in IRLS (derivation of the iteration)


In the article: Chartrand, Rick, and Wotao Yin, "Iteratively reweighted algorithms for compressive sensing," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008.

It is given that one iteration of the iteratively reweighted least squares (IRLS) method can be written as $\min \sum_{i=1}^N w_i u_i^2$ subject to $\Phi u = b$, which has the closed-form solution

$$u^{(n)} = Q_n \Phi^T (\Phi Q_n \Phi^T)^{-1} b,$$

where $Q_n = W^{-1}$ is the diagonal matrix with entries $1/w_i = (|u_i^{(n-1)}|^2 + \epsilon)^{1/2}$, the weights being $w_i = (|u_i^{(n-1)}|^2 + \epsilon)^{-1/2}$ so that $w_i u_i^2 \approx |u_i|$. The closed-form equation can be derived from the Euler–Lagrange equation. Can someone please help me with the derivation of the iteration? I got as far as the penalized form
$$\min_u u^T W u + \lambda \| \Phi u - b \|_2^2,$$
and setting the gradient to zero ($\nabla = 0$) gives
$$W u + \lambda (\Phi^T \Phi u - \Phi^T b) = 0.$$
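For concreteness, here is a minimal numerical sketch of the resulting IRLS loop in Python. The function name `irls`, the least-squares initialization, and the simple $\epsilon$-reduction schedule are my assumptions; the paper reduces $\epsilon$ adaptively and treats general $p$, with $p = 1$ matching the weights above.

```python
import numpy as np

def irls(Phi, b, p=1.0, eps=1.0, n_iter=50):
    """Sketch of the IRLS iteration u^(n) = Q_n Phi^T (Phi Q_n Phi^T)^{-1} b,
    where Q_n is diagonal with entries (|u_i^(n-1)|^2 + eps)^(1 - p/2).
    """
    u = np.linalg.pinv(Phi) @ b  # assumed initialization: least-squares solution
    for _ in range(n_iter):
        # Q_n = W^{-1}: reciprocals of the weights w_i = (u_i^2 + eps)^(p/2 - 1)
        q = (u**2 + eps) ** (1.0 - p / 2.0)
        PhiQ = Phi * q  # Phi @ diag(q), computed column-wise by broadcasting
        # u = Q_n Phi^T (Phi Q_n Phi^T)^{-1} b
        u = q * (Phi.T @ np.linalg.solve(PhiQ @ Phi.T, b))
        eps = max(eps / 10.0, 1e-12)  # crude stand-in for the paper's eps schedule
    return u
```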


I'll show how to minimize $(1/2) x^T W x$ subject to the constraint $Ax = b$, assuming the matrix $W$ is symmetric positive definite. There exists a vector $z$ of Lagrange multipliers such that solving this optimization problem is equivalent to minimizing
$$L(x,z) = \tfrac12 x^T W x + \langle z, Ax - b\rangle$$
with respect to $x \in \mathbb R^n$. Setting the gradient with respect to $x$ equal to $0$, we find that
$$Wx + A^T z = 0,$$
or in other words $x = -W^{-1} A^T z$. Plugging into $Ax = b$, we see that
$$-A W^{-1} A^T z = b \implies z = -(A W^{-1} A^T)^{-1} b.$$
It follows that
$$x = W^{-1} A^T (A W^{-1} A^T)^{-1} b.$$
Taking $A = \Phi$ and $W^{-1} = Q_n$ (so $W$ is the diagonal matrix of weights $w_i$) recovers the stated iteration $u^{(n)} = Q_n \Phi^T (\Phi Q_n \Phi^T)^{-1} b$.
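As a sanity check, here is a short numerical verification of this closed form against the stationarity condition $Wx + A^T z = 0$ and the constraint $Ax = b$. The random test data and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
w = rng.uniform(0.5, 2.0, n)   # positive weights -> W symmetric positive definite
W_inv = np.diag(1.0 / w)

# Closed form from the derivation above
x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, b)

# Feasibility: Ax = b
assert np.allclose(A @ x, b)
# Stationarity: Wx + A^T z = 0 with the multiplier found above
z = -np.linalg.solve(A @ W_inv @ A.T, b)
assert np.allclose(np.diag(w) @ x + A.T @ z, 0)
```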