Linear equality constrained least squares for matrices


I have read about linear equality constrained least squares of the form

$\min_x\ \|Ax - b\|^2_2 \\ \text{s.t.}\ Cx = d$

which can be solved as

$\begin{bmatrix} {A}^{T} A & {C}^{T} \\ C & 0 \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{\nu} \end{bmatrix} = \begin{bmatrix} {A}^{T} b \\ d \end{bmatrix}$

where $\nu$ is the Lagrange multiplier (vector) for the constraint $Cx = d$.
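For concreteness, this KKT-system solve can be sketched in NumPy (the dimensions and random data here are my own toy example, not from any particular source):

```python
import numpy as np

# Toy problem: minimize ||Ax - b||^2 subject to Cx = d
rng = np.random.default_rng(0)
m, n, p = 8, 5, 2          # rows of A, number of unknowns, constraint rows
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((p, n))
d = rng.standard_normal(p)

# Assemble the (n + p) x (n + p) KKT matrix and right-hand side
K = np.block([[A.T @ A, C.T],
              [C, np.zeros((p, p))]])
rhs = np.concatenate([A.T @ b, d])

sol = np.linalg.solve(K, rhs)
x_hat, nu_hat = sol[:n], sol[n:]

# The constraint holds exactly (up to rounding) at the solution
print(np.allclose(C @ x_hat, d))  # True
```

This assumes $A$ has full column rank and $C$ has full row rank, which makes the KKT matrix nonsingular.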

How does this work when you replace the vectors $x$, $b$, and $d$ with matrices? That is, the problem then becomes

$\min_X\ \|AX - B\|^2_F \\ \text{s.t.}\ CX = D$

where $\|\cdot\|_F$ denotes the Frobenius norm and I have to solve for the optimal $X$.

There are 2 best solutions below

Introduce the vector $z := \textrm{vec}(X)$, where the vectorization stacks the columns of $X$.

Next, write $CX=D$ and $\|AX-B\|_F^2$ in terms of $z$: using the identity $\textrm{vec}(AX) = (I \otimes A)\,\textrm{vec}(X)$, the objective becomes $\|(I \otimes A)z - \textrm{vec}(B)\|_2^2$ and the constraint becomes $(I \otimes C)z = \textrm{vec}(D)$. This is exactly the well-known linear equality constrained least squares problem in $z$; a bit of routine algebra confirms that this reformulation is equivalent to the original formulation in $X$.
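This vectorization can be sketched in NumPy; the Kronecker-product identity $\textrm{vec}(AX) = (I \otimes A)\,\textrm{vec}(X)$ (for column-major stacking) does the reformulation, and the sizes and data below are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, k = 8, 5, 2, 3    # A is m x n, C is p x n, X is n x k
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, k))

# vec stacks columns: vec(X) = X.flatten(order='F')
A_big = np.kron(np.eye(k), A)          # (I_k kron A) z = vec(AX)
C_big = np.kron(np.eye(k), C)          # (I_k kron C) z = vec(CX)
b_big = B.flatten(order='F')
d_big = D.flatten(order='F')

# Solve the vectorized KKT system for z = vec(X)
nk, pk = n * k, p * k
K = np.block([[A_big.T @ A_big, C_big.T],
              [C_big, np.zeros((pk, pk))]])
z = np.linalg.solve(K, np.concatenate([A_big.T @ b_big, d_big]))[:nk]
X_hat = z.reshape((n, k), order='F')

print(np.allclose(C @ X_hat, D))  # True
```

Note that $I \otimes A$ is block diagonal, so this large system is rather sparse; for big problems a sparse or column-wise solve is cheaper than forming the Kronecker products densely.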


Let $X=[x_1\ x_2\ \cdots\ x_n]$, where $x_i$ are the columns of $X$. Similarly, write $B=[b_1\ b_2\ \cdots\ b_n]$ and $D=[d_1\ d_2\ \cdots\ d_n]$, where $b_i$ and $d_i$ are the columns of $B$ and $D$.

Then, we have:

$\min_{CX=D}\|AX-B\|^2_F=\min_{CX=D}\|A[x_1\ x_2\ \cdots\ x_n]-[b_1\ b_2\ \cdots\ b_n]\|^2_F$

$=\min_{CX=D}\left(\|Ax_1-b_1\|^2_2+\|Ax_2-b_2\|^2_2+\cdots+\|Ax_n-b_n\|^2_2\right)$

since the squared Frobenius norm is the sum of the squared $2$-norms of the columns. Moreover, $CX=D$ holds if and only if $Cx_i=d_i$ for every $i$, so both the objective and the constraints decouple across columns:

$=\min_{Cx_1=d_1}\|Ax_1-b_1\|^2_2+\min_{Cx_2=d_2}\|Ax_2-b_2\|^2_2+\cdots+\min_{Cx_n=d_n}\|Ax_n-b_n\|^2_2$

This means the matrix problem $\min_{CX=D}\|AX-B\|^2_F$ splits into $n$ independent vector problems, each solvable with the KKT system from the question.
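A sketch of this column-by-column approach in NumPy: since $A$ and $C$ are shared by all columns, one KKT matrix serves every column, and the $n$ right-hand sides can be solved in a single call (dimensions and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_vars, p, n_cols = 8, 5, 2, 3
A = rng.standard_normal((m, n_vars))
B = rng.standard_normal((m, n_cols))
C = rng.standard_normal((p, n_vars))
D = rng.standard_normal((p, n_cols))

# One KKT matrix serves every column problem, since A and C are shared
K = np.block([[A.T @ A, C.T],
              [C, np.zeros((p, p))]])

# Stack all right-hand sides [A^T b_i; d_i] as columns and solve at once;
# column i of the result is the KKT solution for the i-th small problem
RHS = np.vstack([A.T @ B, D])
X_hat = np.linalg.solve(K, RHS)[:n_vars, :]

print(np.allclose(C @ X_hat, D))  # True
```

Solving with a matrix right-hand side is equivalent to solving the $n$ column problems separately, but factors $K$ only once.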