Why is the 1-norm of a vector not the same as the 1-norm of a matrix?


I'm curious why the vector norm $\|\cdot\|_1$ can't serve as a norm, say $\|\cdot\|_1$, for a matrix. By definition, the $L_1$ norm of a vector is $\sum_i |x_i|$, so why can't we do the same for a matrix with $\sum_{i,j} |x_{i,j}|$? Is there a norm axiom this breaks? It's certainly positive definite and homogeneous, and it seems to satisfy the triangle inequality over $\mathbb{R}$; maybe that fails over $\mathbb{C}$?

Best answer

It's a matter of definitions.

In some sense, there is a $1$-norm like that for matrices. Given $A \in \mathbb{C}^{m \times n}$, we can loosely interpret it as a vector in $\mathbb{C}^{mn}$ and define

$$\|A\|_{1,\mathbb{C}^{mn}} = \sum_{i,j} |a_{i,j}|$$

This will be a norm, sure, but it's not exactly what we want from a matrix norm. This is just a more fanciful version of a vector norm.
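
For concreteness, here is a minimal NumPy sketch of this entrywise $1$-norm; the helper name `entrywise_one_norm` and the sample matrix are just illustrative choices, not anything standard:

```python
import numpy as np

def entrywise_one_norm(A):
    # Sum of the absolute values of all entries, i.e. the vector
    # 1-norm of A flattened into C^{mn} (here R^{mn}).
    return np.abs(A).sum()

A = np.array([[1.0, -2.0],
              [3.0, 4.0]])
print(entrywise_one_norm(A))         # 10.0
print(np.linalg.norm(A.ravel(), 1))  # same value via the vector 1-norm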


We define the matrix norm $\|\cdot\|_{\text{induced}}$ induced by a vector norm $\|\cdot\|$ to be

$$\|A\|_{\text{induced}} := \sup_{\|x\| = 1} \|Ax\|$$

(among several other equivalent definitions, and assuming appropriate dimensions for $A$ and $x$). This norm has a noteworthy geometric interpretation: it is the maximum "stretch factor" by which $A$ scales a unit vector $x$.
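
To make the "maximum stretch factor" reading concrete, here is a rough NumPy sketch that lower-bounds the supremum by random sampling and compares it against NumPy's exact induced $1$-norm, `np.linalg.norm(A, 1)` (the maximum absolute column sum). The helper `induced_norm_estimate` is an illustrative name, not a standard routine:

```python
import numpy as np

def induced_norm_estimate(A, p=1, trials=20000, seed=0):
    # Crude Monte Carlo lower bound for sup_{||x|| = 1} ||Ax||:
    # sample random directions, rescale each onto the unit sphere
    # of the chosen vector norm, and keep the largest stretch seen.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        x /= np.linalg.norm(x, p)
        best = max(best, np.linalg.norm(A @ x, p))
    return best

A = np.array([[1.0, -2.0],
              [3.0, 4.0]])
print(induced_norm_estimate(A))  # creeps up toward 6.0 from below
print(np.linalg.norm(A, 1))      # exact induced 1-norm: 6.0
```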


Of course, there are also matrix norms that are not induced by any vector norm. The Frobenius norm is a classical and useful example; it is essentially the $2$-norm on $\mathbb{C}^{mn}$ applied to matrices $A \in \mathbb{C}^{m \times n}$:

$$\|A\|_F := \sqrt{ \sum_{i,j} |a_{i,j}|^2 }$$
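
A quick NumPy check, with an arbitrary sample matrix, that the definition, the built-in `np.linalg.norm(A, 'fro')`, and the vector $2$-norm of the flattened matrix all agree:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0, 4.0]])

# Frobenius norm computed from the definition ...
print(np.sqrt((np.abs(A) ** 2).sum()))  # sqrt(30) ~ 5.477
# ... via NumPy's built-in ...
print(np.linalg.norm(A, 'fro'))         # same value
# ... and as the vector 2-norm of A flattened into C^{mn}.
print(np.linalg.norm(A.ravel(), 2))     # same value again
```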

Your proposed matrix norm is indeed a norm, and a $1$-norm in a certain sense, but it does not coincide with the induced $1$-norm, which turns out to be the maximum absolute column sum, $\|A\|_1 = \max_j \sum_i |a_{i,j}|$. (One is defined by a straightforward summation, the other as a maximum stretch factor; why should they be the same?)


In fact, your matrix norm is not induced by any vector norm!

To prove this, note that in the square case,

$$\|I_{n \times n}\|_{\text{induced}} = 1$$

regardless of the vector norm it is induced by, since $I_{n \times n}\,x = x$ and hence $\|I_{n \times n}\,x\| = \|x\| = 1$ for every unit vector $x$. However,

$$\|I_{n \times n}\|_{1,\mathbb{C}^{n^2}} = n$$

since the identity has exactly $n$ nonzero entries, each of absolute value $1$. An analogous argument works in the non-square case with an appropriate choice of matrix.
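
Finally, a small NumPy sketch verifying the counterexample for a sample size $n = 5$: the entrywise $1$-norm of the identity is $n$, while several standard induced norms of the identity are all exactly $1$:

```python
import numpy as np

n = 5
I = np.eye(n)

# The entrywise 1-norm of the identity grows with n ...
print(np.abs(I).sum())           # 5.0, i.e. n

# ... while the induced 1-, 2-, and infinity-norms are all exactly 1.
for p in (1, 2, np.inf):
    print(np.linalg.norm(I, p))  # 1.0 each time
```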