Minimizing the operator norm induced by $\ell _{2}$ and $\ell _{\infty}$-norm for vectors to solve for diagonal matrix


Given an operator represented in matrix form as $B$ of size $n\times m$ with $m < n$, and a square matrix $A$ of size $m \times m$, how do we solve for an $n \times n$ diagonal matrix $D$ by minimizing the operator norms induced by the $\ell _{2}$- and $\ell _{\infty}$-norms for vectors?

$$\min_{D} \Vert A - B^HDB \Vert_{op,2}$$ $$\min_{D} \Vert A - B^HDB \Vert_{op, \infty}$$

My approach: does it make sense to start with the vec-operator identity $$\operatorname{vec}(B^HDB) = (B^T\otimes B^H)\operatorname{vec}(D),$$ which would transform my problem into a standard optimization problem $$\min_{x}\Vert b- Hx \Vert_N,$$ where \begin{align} H &= B^T\otimes B^H,\\ x &= \operatorname{vec}(D),\\ b &= \operatorname{vec}(A),\\ N &= \text{the chosen norm}? \end{align}
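The vec identity above is the standard $\operatorname{vec}(AXB) = (B^T\otimes A)\operatorname{vec}(X)$ with $A = B^H$ and $X = D$. A minimal numerical sketch checking it (random $B$ and $D$ with hypothetical sizes $n=5$, $m=3$; column-major vectorization, as the identity requires):

```python
import numpy as np

# Hypothetical sizes: B is n x m with m < n, D is n x n diagonal.
rng = np.random.default_rng(0)
n, m = 5, 3
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
D = np.diag(rng.standard_normal(n))

# Column-major (Fortran-order) vectorization, matching vec(AXB) = (B^T kron A) vec(X).
vec = lambda P: P.reshape(-1, order="F")

lhs = vec(B.conj().T @ D @ B)
rhs = np.kron(B.T, B.conj().T) @ vec(D)
print(np.allclose(lhs, rhs))  # True
```

Since $D$ is diagonal, $\operatorname{vec}(D)$ has only $n$ nonzero entries, so in practice $x$ can be reduced to the diagonal $d \in \mathbb{R}^n$ and $H$ to the corresponding $n$ columns of the Kronecker product.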

I am not sure this is the right approach, because for a matrix $P$ and its vectorized form $\operatorname{vec}(P)$, the matrix and vector norms do not coincide in general: $$ \Vert P \Vert_{op,2} \neq \Vert \operatorname{vec}(P) \Vert_{2} = \Vert P \Vert_{F},$$ and likewise $$ \Vert P \Vert_{op,\infty} \neq \Vert \operatorname{vec}(P) \Vert_{\infty}. $$
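A small counterexample (a hypothetical $2\times 2$ matrix chosen for illustration) confirms that both inequalities can be strict:

```python
import numpy as np

# P = [[1, 1], [0, 1]] separates all four norms of interest.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

op2    = np.linalg.norm(P, 2)        # induced 2-norm: largest singular value
fro    = np.linalg.norm(P, "fro")    # Frobenius norm = ||vec(P)||_2
opinf  = np.linalg.norm(P, np.inf)   # induced inf-norm: max absolute row sum
vecinf = np.abs(P).max()             # ||vec(P)||_inf: largest |entry|

print(op2, fro)      # ~1.618 vs ~1.732
print(opinf, vecinf) # 2.0 vs 1.0
```

So replacing the operator norm by the corresponding vector norm of $\operatorname{vec}(\cdot)$ genuinely changes the objective; the vectorized problem minimizes the Frobenius (or entrywise max) norm instead.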

Note: $\Vert \cdot \Vert_{F}$ denotes the Frobenius norm, $\Vert \cdot \Vert_{op,\infty}$ the operator norm induced by the $\ell _{\infty}$-norm, and $\Vert \cdot \Vert_{op,2}$ the operator norm induced by the $\ell _{2}$-norm for vectors.