Norm of Hermiticity-preserving linear map attained on rank-1 state?


Given some linear map $T:(\mathbb C^{n\times n},\|\cdot\|_1)\to (\mathbb C^{m\times m},\|\cdot\|_1)$ --- where $\|\cdot\|_1={\rm tr}(|\,\cdot\,|)$ is the trace norm, i.e. the sum of the input's singular values --- the question is the following:

If $T$ preserves Hermiticity (i.e. $T(A)^*=T(A^*)$ for all $A\in\mathbb C^{n\times n}$) is the operator norm of $T$ attained on a Hermitian matrix, that is, is it true that $$ \|T\|_{1\to 1}=\sup_{\substack{A\in\mathbb C^{n\times n}\\A\text{ Hermitian, }\|A\|_1\leq 1}}\|T(A)\|_1\,? $$

Two observations: 1. $\geq$ is obvious, so we would only have to show $\leq$. 2. Because the trace norm is convex and its unit ball is the convex hull of the rank-1 matrices $uv^*$ (with $\|u\|=\|v\|=1$), this can be reformulated as follows:

If $T$ preserves Hermiticity, is it true that the operator norm of $T$ is attained on a rank-1 state, that is, $$ \sup_{v,w\in\mathbb C^n,\|v\|=\|w\|=1}\|T(vw^*)\|_1\leq\sup_{v\in\mathbb C^n,\|v\|=1}\|T(vv^*)\|_1\,? $$
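(The reduction behind observation 2 can be spelled out via the singular value decomposition: writing $A=\sum_j s_ju_jv_j^*$ with $\sum_j s_j=\|A\|_1\leq1$, the triangle inequality gives $$ \|T(A)\|_1=\Big\|\sum_j s_j\,T(u_jv_j^*)\Big\|_1\leq\sum_j s_j\,\|T(u_jv_j^*)\|_1\leq\sup_{\|u\|=\|v\|=1}\|T(uv^*)\|_1\,, $$ so $\|T\|_{1\to1}=\sup_{\|u\|=\|v\|=1}\|T(uv^*)\|_1$.)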

This is true if $T$ is positive (i.e. maps positive semi-definite elements to positive semi-definite elements) as a consequence of the Russo-Dye theorem, cf. Theorem 3.39 in "The Theory of Quantum Information" by Watrous, 2018. However, the proof given therein breaks down if $T$ is not positive but only Hermiticity-preserving as it relies on the fact that $\|\sum_j u_jP_j\|\leq\|u\|_\infty\|\sum_j P_j\|$ for all $P_j\geq 0$ (Lemma 3.3 in his book).

Interestingly, this property holds for the "stronger" completely bounded trace norm --- sometimes called diamond norm --- which is defined as $$ \|T\|_\diamond:=\max_{k\in\mathbb N}\|T\otimes{\rm id}_{\mathbb C^{k\times k}}\|_{1\to 1}\,. $$ The idea here (cf. Lemma 3.50 ff.) is that if $X$ attains the operator norm of $T$, then the norm of $T\otimes{\rm id}$ on $$ \frac12\begin{pmatrix}0&X\\X^*&0\end{pmatrix} $$ is $\|T\|_{1\to 1}$ because $T(X^*)=T(X)^*$. Using the spectral decomposition of this block matrix together with convexity of the norm yields "the" desired rank-1 state $uu^*$.
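For completeness, the spectral decomposition used in this argument can be made explicit. Given a singular value decomposition $X=\sum_j s_ju_jv_j^*$ with $\sum_j s_j=\|X\|_1$, one checks directly that $$ \frac12\begin{pmatrix}0&X\\X^*&0\end{pmatrix}=\sum_j\frac{s_j}2\big(x_j^{\phantom*}x_j^*-y_j^{\phantom*}y_j^*\big)\,,\qquad x_j:=\frac1{\sqrt2}\begin{pmatrix}u_j\\v_j\end{pmatrix},\quad y_j:=\frac1{\sqrt2}\begin{pmatrix}u_j\\-v_j\end{pmatrix}, $$ with $\{x_j\}\cup\{y_j\}$ orthonormal, so the block matrix is a convex-type combination of the Hermitian rank-1 states $x_jx_j^*$ and $y_jy_j^*$ with weights summing to $\|X\|_1$.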

So in some sense having access to "more dimensions" allows for a symmetrization of the problem. However, numerics suggest that this is not necessary: using that every Hermiticity-preserving map is the difference of two completely positive maps (cf. Theorem 2.25) I wrote some rough Mathematica code generating random cp maps and finding $\sup_{u,v}\|T(uv^*)\|_1$ and $\sup_u\|T(uu^*)\|_1$ via random search:

(*Initialization*)
d = 4;
n = 0;
c = 0;
cnew = 0;
nmax = 20000;
A = IdentityMatrix[d];
B = IdentityMatrix[d];
A0 = IdentityMatrix[d];
v = UnitVector[d, 1];
RND = ResourceFunction["RandomUnitVector"][d, 2];
dmax = RandomInteger[{1, d^2}];
Vs1 = Table[RandomReal[{-1, 1}, {d, d}], {dmax}];
Vs2 = Table[RandomReal[{-1, 1}, {d, d}], {dmax}];
f1[X_] := Total[ConjugateTranspose[#] . X . # & /@ Vs1];
f2[X_] := Total[ConjugateTranspose[#] . X . # & /@ Vs2];
f[X_] := f1[X] - f2[X];
(*While loop for sup of ||f(uv*)||*)
While[n < nmax, RND = ResourceFunction["RandomUnitVector"][d, 2];
 A = Transpose[{RND[[1]]}] . {RND[[2]]};
 cnew = Total[SingularValueList[f[A]]];
 If[cnew > c, B = A; c = Max[c, cnew];
  If[n > nmax/50, PrintTemporary[c]]; n = 0];
 n++]; Print["non-Herm=", c]; c = 0; n = 0;
(*While loop for sup of ||f(uu*)||*)
While[n < nmax, v = ResourceFunction["RandomUnitVector"][d];
 A = Transpose[{v}] . {v};
 cnew = Total[SingularValueList[f[A]]];
 If[cnew > c, B = A; c = Max[c, cnew];
  If[n > nmax/50, PrintTemporary[c]]; n = 0];
 n++]; Print["Herm=", c]

Every time I ran this code I found no violation of the conjecture: not once was the norm approximation (on $uv^*$) larger than the rank-1-state value (on $uu^*$).

What makes this problem feel so annoying is that it seems like it should be obvious, yet so far I have not been able to prove it, not even for simple classes of maps such as differences of unitary channels (e.g., $A\mapsto A-UAU^*$ for some unitary $U$) or Hadamard product maps (e.g., $A\mapsto X\circ A$ for $X$ Hermitian). Problems which prevent a straightforward proof are that

  • the usual convexity arguments do not work because the convex hull of the Hermitian and skew-Hermitian matrices in the unit ball is a strict subset of the unit ball (consider $e_1e_2^*$)
  • $T$ preserving Hermiticity does not hand down any nice properties to $T^*(V)$ for unitary $V$; at least I do not see why it should. Hence there is no straightforward way to apply a Russo-Dye-type argument
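To make the first of these classes concrete, here is a rough Python analogue (not the Mathematica code from above) of the random search, applied to the unitary-difference map $A\mapsto A-UAU^*$ for a random unitary $U$; the names `trace_norm`, `random_unit`, and `T` are my own:

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_norm(a):
    # Trace norm = sum of singular values
    return np.linalg.svd(a, compute_uv=False).sum()

def random_unit(d):
    # Random unit vector in C^d
    w = rng.normal(size=d) + 1j * rng.normal(size=d)
    return w / np.linalg.norm(w)

d = 3
# Haar-random unitary via QR of a complex Gaussian matrix
q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
U = q * (np.diag(r) / np.abs(np.diag(r)))

def T(a):
    # Hermiticity-preserving: difference of the identity and a unitary channel
    return a - U @ a @ U.conj().T

sup_general, sup_hermitian = 0.0, 0.0
for _ in range(5000):
    u, v = random_unit(d), random_unit(d)
    sup_general = max(sup_general, trace_norm(T(np.outer(u, v.conj()))))
    sup_hermitian = max(sup_hermitian, trace_norm(T(np.outer(v, v.conj()))))
print(sup_general, sup_hermitian)  # both bounded by 2 via the triangle inequality
```

In my runs this class, too, shows no gap between the two suprema, consistent with the Mathematica experiments.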
BEST ANSWER

Today I was made aware of the 2004 paper "Notes on Super-Operator Norms Induced by Schatten Norms" (arXiv version) by John Watrous. Therein, Watrous gives a counterexample to what I thought to be true; after slightly simplifying it, it reads as follows:

Define \begin{align*} \Phi:\mathbb C^{2\times 2}&\to\mathbb C^{2\times 2}\\ \begin{pmatrix}x_{11}&x_{12}\\x_{21}&x_{22}\end{pmatrix}&\mapsto \begin{pmatrix}x_{11}-x_{22}&0\\0&x_{12}+x_{21}\end{pmatrix}\,. \end{align*} This map is obviously Hermiticity-preserving and satisfies $$ \boxed{\|\Phi\|_{1\to 1}\geq2>\sqrt2=\sup_{v\in\mathbb C^2,\|v\|=1}\|\Phi(vv^*)\|_1} $$ as we shall see now. For the first inequality consider $$ X=\frac12\begin{pmatrix}1&i\\i&-1\end{pmatrix}=\frac1{\sqrt2}\begin{pmatrix}1\\i\end{pmatrix}\frac1{\sqrt2}\begin{pmatrix}1&i\end{pmatrix} $$ which due to the decomposition of $X$ satisfies $\|X\|_1=1$. Most notably, $X$ is not normal! Thus $$ \|\Phi\|_{1\to 1}\geq\|\Phi(X)\|_1=\Big\| \begin{pmatrix} 1&0\\0&i\end{pmatrix} \Big\|_1=2 $$ (actually this is an equality but that's beyond what we need here).

In the second step we want to see that $\|\Phi(vv^*)\|_1$ cannot exceed $\sqrt2$ for any unit vector $v\in\mathbb C^2$. For this represent $v$ as $(re^{i\alpha},\sqrt{1-r^2}e^{i\beta})^\top$ with $r\in[0,1]$, $\alpha,\beta\in\mathbb R$. Then \begin{align*} \sup_{v\in\mathbb C^2,\|v\|=1}\|\Phi(vv^*)\|_1&=\sup_{r\in[0,1]}\sup_{\alpha,\beta\in\mathbb R}\Big\|\Phi\begin{pmatrix} r^2& r\sqrt{1-r^2}e^{i(\alpha-\beta)} \\ r\sqrt{1-r^2}e^{-i(\alpha-\beta)} &1-r^2 \end{pmatrix}\Big\|_1\\ &=\sup_{r\in[0,1]}\sup_{\alpha,\beta\in\mathbb R}\big( |2r^2-1|+2r\sqrt{1-r^2}|\cos(\alpha-\beta)| \big)\\ &=\sup_{r\in[0,1]}\big( |2r^2-1|+2r\sqrt{1-r^2} \big)\,. \end{align*} It is straightforward to see (e.g., via basic calculus) that the maximum value the map $[0,1]\to\mathbb R$, $r\mapsto|2r^2-1|+2r\sqrt{1-r^2}$ outputs is given by $\sqrt2$ (at $r=\frac12\sqrt{2\pm\sqrt2}$) which shows $\sup_{v\in\mathbb C^2,\|v\|=1}\|\Phi(vv^*)\|_1=\sqrt2$ as desired.
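Both values are easy to confirm numerically; the following Python sketch (not part of the original post) evaluates $\|\Phi(X)\|_1$ on the non-normal $X$ from above and random-searches over rank-1 states $vv^*$:

```python
import numpy as np

def phi(x):
    # The counterexample map: diag(x11 - x22, x12 + x21)
    return np.array([[x[0, 0] - x[1, 1], 0],
                     [0, x[0, 1] + x[1, 0]]])

def trace_norm(a):
    # Trace norm = sum of singular values
    return np.linalg.svd(a, compute_uv=False).sum()

# The non-normal rank-1 matrix X = v v^T with v = (1, i)/sqrt(2)
v = np.array([1, 1j]) / np.sqrt(2)
X = np.outer(v, v)                  # np.outer does NOT conjugate: this is v v^T
print(trace_norm(phi(X)))           # 2.0

# Random search over rank-1 states w w^*
rng = np.random.default_rng(0)
best = 0.0
for _ in range(20000):
    w = rng.normal(size=2) + 1j * rng.normal(size=2)
    w /= np.linalg.norm(w)
    best = max(best, trace_norm(phi(np.outer(w, w.conj()))))
print(best)                         # approaches sqrt(2) ≈ 1.414, never exceeds it
```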

NB: Something curious I noticed while going through this counterexample was that it is of "complex nature" in the sense that $$ \require{enclose} \enclose{horizontalstrike}{\sup_{u,v\in\boxed{\scriptsize\mathbb R^2},\|u\|=\|v\|=1}\|\Phi(uv^*)\|_1=\sqrt2=\sup_{v\in\mathbb C^2,\|v\|=1}\|\Phi(vv^*)\|_1\,;} $$ in other words $\require{enclose} \enclose{horizontalstrike}{\|\Phi|_{\mathbb R^2}\|_{1\to 1}=\sqrt2<2=\|\Phi\|_{1\to 1}}$. However, I currently cannot tell whether this is by chance or whether this observation has any significance to it. It appears this is by chance --- random search yielded an example of a map $\Phi$ where $$ \sup_{v\in\mathbb C^2,\|v\|=1}\|\Phi(vv^*)\|_1=\sup_{v\in\mathbb R^2,\|v\|=1}\|\Phi(vv^*)\|_1\approx 1.72 $$ but $$ \sup_{u,v\in\mathbb C^2,\|u\|=\|v\|=1}\|\Phi(uv^*)\|_1=\sup_{u,v\in\mathbb R^2,\|u\|=\|v\|=1}\|\Phi(uv^*)\|_1\approx 1.84 $$

Either way, the reason I noticed this is that the Mathematica code I shared in my question only works with real vectors $u,v$, which is why the output operator norm was $\sqrt2$ (although we know it should be $2$). Here is the "fixed" code I now use to estimate the $1\to 1$ operator norm:

(*Initialization*)
f[X_] := {{X[[1, 1]] - X[[2, 2]], 0}, {0, X[[1, 2]] + X[[2, 1]]}};
d = 2;(*dimension of domain of f*)
n = 0;
c = 0;
nmax = 5000;
cnew = 0;
A = IdentityMatrix[d]; B = IdentityMatrix[d];
u = UnitVector[d, 1]; v = UnitVector[d, 1];
u0 = UnitVector[d, 1]; v0 = UnitVector[d, 1];
RND = Transpose[{u}] . {v};
(*loop*)
While[n < 5*nmax, RND = ResourceFunction["RandomUnitVector"][2*d, 2];
 For[alpha = 1, alpha <= d, alpha++,
  u[[alpha]] = RND[[1, alpha]] + I*RND[[1, d + alpha]];
  v[[alpha]] = RND[[2, alpha]] + I*RND[[2, d + alpha]]];
 A = ConjugateTranspose[{u}] . {v};
 cnew = Total[SingularValueList[f[A]]];
 If[cnew > c, B = A; u0 = u; v0 = v; c = Max[c, cnew]; PrintTemporary[c]; n = 0];
 n++]
u0 // MatrixForm
v0 // MatrixForm
B // MatrixForm
f[B] // MatrixForm
SingularValueList[f[B]]
Total[SingularValueList[f[B]]]