I have read about linear equality constrained least squares of the form
$\min_x\ \|Ax - b\|_2^2 \\ \text{s.t.}\ Cx = d$
which can be solved as
$\begin{bmatrix} {A}^{T} A & {C}^{T} \\ C & 0 \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{\nu} \end{bmatrix} = \begin{bmatrix} {A}^{T} b \\ d \end{bmatrix}$
where $\nu$ is the vector of Lagrange multipliers.
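As a sanity check of the vector case, here is a minimal NumPy sketch (with hypothetical random data) that assembles and solves the KKT system above, assuming $A$ has full column rank and $C$ has full row rank so the KKT matrix is nonsingular:

```python
import numpy as np

# Hypothetical small example: minimize ||Ax - b||_2^2 subject to Cx = d.
rng = np.random.default_rng(0)
m, n, p = 8, 4, 2          # rows of A, unknowns, number of constraints
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((p, n))
d = rng.standard_normal(p)

# Assemble the block KKT system
# [A^T A  C^T] [x ]   [A^T b]
# [C      0  ] [nu] = [d    ]
KKT = np.block([[A.T @ A, C.T],
                [C, np.zeros((p, p))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x_hat, nu_hat = sol[:n], sol[n:]

# The equality constraint holds to numerical precision.
print(np.allclose(C @ x_hat, d))  # True
```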
How does this work when the vectors $x$, $b$, and $d$ are replaced with matrices? That is, the problem becomes
$\min_X\ \|AX - B\|_F^2 \\ \text{s.t.}\ CX = D$
where $\|\cdot\|_F$ denotes the Frobenius norm and I have to solve for the optimal $X$.
Introduce the vector $z := \textrm{vec}(X)$, where the vectorization stacks the columns of $X$.
Next, you can write $CX = D$ and $\|AX - B\|_F^2$ in terms of $z$ using the identities $\mathrm{vec}(CX) = (I \otimes C)\,z$ and $\|AX - B\|_F^2 = \|(I \otimes A)\,z - \mathrm{vec}(B)\|_2^2$, where $\otimes$ denotes the Kronecker product. This gives you exactly the linear equality constrained least squares problem from above, now in the variable $z$. It takes only routine algebra to verify that the reformulation in $z$ is equivalent to your original formulation in $X$.
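The vectorization step can be sketched in NumPy as follows (again with hypothetical random data; `np.kron` builds the Kronecker products and `order='F'` makes `flatten`/`reshape` match the column-stacking convention of $\mathrm{vec}$):

```python
import numpy as np

# Sketch: minimize ||AX - B||_F^2 subject to CX = D via z = vec(X),
# using vec(AX) = (I kron A) z and vec(CX) = (I kron C) z.
rng = np.random.default_rng(1)
m, n, p, k = 8, 4, 2, 3     # X is n-by-k
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, k))

I = np.eye(k)
A_big = np.kron(I, A)           # (I kron A) acts on z = vec(X)
C_big = np.kron(I, C)
b_big = B.flatten(order='F')    # vec(B): stack columns
d_big = D.flatten(order='F')    # vec(D)

# Same KKT structure as the vector case, now in z.
nk, pk = n * k, p * k
KKT = np.block([[A_big.T @ A_big, C_big.T],
                [C_big, np.zeros((pk, pk))]])
rhs = np.concatenate([A_big.T @ b_big, d_big])
z = np.linalg.solve(KKT, rhs)[:nk]
X_hat = z.reshape((n, k), order='F')   # un-vectorize back to a matrix

# The matrix equality constraint holds to numerical precision.
print(np.allclose(C @ X_hat, D))  # True
```

Note that because $I \otimes A$ is block diagonal, this big system decouples column by column: each column of $X$ solves an independent vector problem with the same $A$ and $C$, which you can exploit instead of forming the Kronecker products explicitly when $k$ is large.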