Understanding the Expression $\operatorname{tr}\Big(\rho(X\otimes I)\Big)=\sum_{a,b,a',b'} \rho_{ab,a'b'}X_{a,a'}\delta_{b,b'}$


How do I make sense of $$ \operatorname{tr}\Big(\rho(X\otimes I)\Big)=\sum_{a,b,a',b'} \rho_{ab,a'b'}X_{a,a'}\delta_{b,b'} $$ If this did not involve any tensor products, it would be a simple element-wise matrix multiplication.

If I worked through an example with small matrices, I could probably verify this. But is there any way to see through this trace expression when tensor products are involved?
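One way to verify this numerically (my own sketch, with arbitrary hypothetical dimensions) is to reshape $\rho$ into a four-index array so that `rho4[a, b, a2, b2]` $= \rho_{ab,a'b'}$, and compare an `einsum` against the plain trace. Note that for general matrices the trace produces $X_{a',a}$ rather than $X_{a,a'}$; since $\rho$ and $X$ are Hermitian here, the two sums agree.

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 3, 2  # hypothetical subsystem dimensions

# Random Hermitian rho and X for testing
M = rng.standard_normal((dA*dB, dA*dB)) + 1j*rng.standard_normal((dA*dB, dA*dB))
rho = M + M.conj().T
N = rng.standard_normal((dA, dA)) + 1j*rng.standard_normal((dA, dA))
X = N + N.conj().T

# Left side: the trace computed directly with a Kronecker product
lhs = np.trace(rho @ np.kron(X, np.eye(dB)))

# Right side: rho_{ab,a'b'} X_{a',a} delta_{b,b'}; C-order reshape matches np.kron
rho4 = rho.reshape(dA, dB, dA, dB)
rhs = np.einsum('abcb,ca->', rho4, X)  # sums rho4[a,b,a',b] * X[a',a]

assert np.allclose(lhs, rhs)
```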


Original Context from my Reference

Refer to Page 18

[Image: part-trace]

where $\rho^A$ and $\rho^{AB}$ are positive definite matrices and $X$ is Hermitian.


$$(\rho^A X)_{ab}=\sum_{a'} \rho^A_{aa'}X_{a'b}\implies \operatorname{tr}(\rho^A X)=\sum_{a}(\rho^A X)_{aa}=\sum_{aa'} \rho^A_{aa'}X_{a'a}$$

Why does the reference give $\operatorname{tr}(\rho^A X)=\sum_{aa'} \rho^A_{aa'}X_{aa'}$ instead of $\sum_{aa'} \rho^A_{aa'}X_{a'a}$?


There are two answers below.

Best Answer

One trick that I often find to be helpful is to begin by seeing what happens when all matrices involved in the computation are expressed in the form of a tensor product. In this case, this would mean replacing $\rho$ with a single tensor product $A \otimes B$. From there, we can extend our observation to the general case either by noting that the expression is linear or by expressing $\rho$ as a linear combination of matrices of the form $A \otimes B$. For instance, if we let $E_{ij}$ denote the matrix with a $1$ as its $i,j$ entry and zeros elsewhere, then we can always write $\rho = \sum_{i,j} E_{ij} \otimes \sigma_{ij}$, where $\sigma_{ij}$ is the $i,j$ "block-entry" of $\rho$.


For this particular problem, if we write $\rho = \sum_{i,j} E_{ij} \otimes \sigma_{ij}$, then we find that $$ \begin{align} \operatorname{tr}(\rho(X \otimes I)) & = \operatorname{tr}\left(\sum_{i,j} (E_{ij} \otimes \sigma_{ij})(X \otimes I)\right) \\ & = \sum_{i,j} \operatorname{tr}[(E_{ij} \otimes \sigma_{ij})(X \otimes I)] \\ & = \sum_{i,j} \operatorname{tr}[(E_{ij} X) \otimes \sigma_{ij}] \\ & = \sum_{i,j}\operatorname{tr}(E_{ij}X) \operatorname{tr}(\sigma_{ij}) = \sum_{i,j}x_{ji} \operatorname{tr}(\sigma_{ij}). \end{align} $$
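The block decomposition can be checked concretely by slicing $\rho$ into its $n \times n$ "block-entries" (a sketch of mine, with hypothetical dimensions `m` and `n`):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 2  # dimensions of the first and second tensor factors
rho = rng.standard_normal((m*n, m*n))
X = rng.standard_normal((m, m))

lhs = np.trace(rho @ np.kron(X, np.eye(n)))

# sigma_ij is the (i, j) block-entry of rho under the Kronecker convention
total = 0.0
for i in range(m):
    for j in range(m):
        sigma_ij = rho[i*n:(i+1)*n, j*n:(j+1)*n]
        total += X[j, i] * np.trace(sigma_ij)   # x_{ji} * tr(sigma_ij)

assert np.allclose(lhs, total)
```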


I suspect that $\rho$ is indexed such that $$ \rho = \sum_{a,b,a',b'} \rho_{ab,a'b'} E_{aa'}\otimes E_{bb'}. $$ Equivalently, if the tensor product of matrices is expressed as a single matrix using the Kronecker product, then $\rho_{ab,a'b'}$ is the $b,b'$-entry of the "block-entry" $\sigma_{a,a'}$ referenced in my first paragraph.

With that established, my simplification above can be rewritten as $$ \begin{align} \operatorname{tr}(\rho(X \otimes I)) & = \sum_{a,a'}X_{a',a} \operatorname{tr}(\sigma_{a,a'}) \\ & = \sum_{a,a'}X_{a',a} \sum_{b,b'} \sigma_{a,a'}[b,b'] \delta_{b,b'} \\ & = \sum_{a,a'}X_{a',a} \sum_{b,b'} \rho_{ab,a'b'} \delta_{b,b'} = \sum_{a,b,a',b'}X_{a',a} \rho_{ab,a'b'} \delta_{b,b'}. \end{align} $$ We could also have arrived at this formula directly from the expansion $$ \rho = \sum_{a,b,a',b'} \rho_{ab,a'b'} E_{aa'}\otimes E_{bb'}. $$ Indeed, we have $$ \begin{align} \operatorname{tr}(\rho(X \otimes I)) & = \operatorname{tr}\left(\sum_{a,b,a',b'} (\rho_{ab,a'b'} E_{aa'} \otimes E_{bb'})(X \otimes I)\right) \\ & = \sum_{a,b,a',b'} \rho_{ab,a'b'} \operatorname{tr}[(E_{aa'}\otimes E_{bb'})(X \otimes I)] \\ & = \sum_{a,b,a',b'} \rho_{ab,a'b'} \operatorname{tr}(E_{aa'}X)\operatorname{tr}(E_{bb'}) = \sum_{a,b,a',b'} \rho_{ab,a'b'} X_{a',a} \delta_{b,b'}. \end{align} $$

Second Answer

Given a Hilbert space $V$ over $K$, the trace can be understood as the unique linear map that converts pure tensors to inner products, i.e.

$$ (1)\qquad \operatorname{tr}\colon V⊗V^* ⟶ K \quad\text{satisfying}\quad \operatorname{tr}(u⊗v) \hat{{}={}} \operatorname{tr}(|u⟩⟨v|) = ⟨v|u⟩_V $$

Note that the natural inner product on a tensor product $U⊗V$ is induced by

$$(2)\qquad ⟨u⊗v ∣ u'⊗v'⟩_{U⊗V} = ⟨u∣u'⟩_U ⋅ ⟨v∣v'⟩_V$$
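Identity (2) is easy to confirm numerically, since `np.kron` on vectors realizes $u⊗v$ as a flattened outer product (a sketch of mine with random test vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
u, up = rng.standard_normal(3), rng.standard_normal(3)
v, vp = rng.standard_normal(4), rng.standard_normal(4)

# <u⊗v | u'⊗v'> = <u|u'> * <v|v'>
lhs = np.dot(np.kron(u, v), np.kron(up, vp))
rhs = np.dot(u, up) * np.dot(v, vp)

assert np.allclose(lhs, rhs)
```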

As a special case, the induced inner product on $ℝ^{m×n}\hat{{}={}}ℝ^m ⊗ ℝ^n$ is the Frobenius Inner Product.
In particular, we have that:

$$(3)\qquad \operatorname{tr}(|u⊗v⟩⟨u'⊗v'|) = ⟨u'⊗v'∣u⊗v⟩ = ⟨u∣u'⟩⋅⟨v∣v'⟩ = \operatorname{tr}(|u'⟩⟨u|)⋅\operatorname{tr}(|v'⟩⟨v|) $$

And by linearity, it follows that, if $A$ is $m×m$ and $B$ is $n×n$, then:

$$(4)\qquad \operatorname{tr}(A⊗B) = \operatorname{tr}(A)⋅\operatorname{tr}(B)$$
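Identity (4) can likewise be checked directly with the Kronecker product (my own one-liner sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))

# tr(A⊗B) = tr(A) * tr(B)
lhs = np.trace(np.kron(A, B))
rhs = np.trace(A) * np.trace(B)

assert np.allclose(lhs, rhs)
```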

Note that this is not a contradiction to (1), because the tensor products mean different things:

  • In (1) the tensor product is between $V$ and $V^*$
  • In (4) the tensor product is between $ℝ^{m×m}$ and $ℝ^{n×n}$. But here the underlying vector space should be $V=ℝ^{m×n}$, because $A⊗B$ encodes a linear map

$$ℝ^{m×n} → ℝ^{m×n},\quad X↦ AXB^{\top} \qquad\text{since}\qquad AXB^{\top} = (A⊗B)⋅X$$
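Under the identification $ℝ^{m×n} ≅ ℝ^m ⊗ ℝ^n$, "applying $A⊗B$ to $X$" means acting on the flattened matrix. A sketch of mine verifying this; note that NumPy's row-major (C-order) flattening is exactly the convention under which $(A⊗B)\operatorname{vec}(X) = \operatorname{vec}(AXB^{\top})$:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 4  # hypothetical dimensions
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X = rng.standard_normal((m, n))

# (A⊗B) acting on the row-major flattening of X equals A X B^T flattened
lhs = np.kron(A, B) @ X.reshape(-1)
rhs = (A @ X @ B.T).reshape(-1)

assert np.allclose(lhs, rhs)
```

(With column-major vectorization, the same map would instead be written with the factors arranged accordingly; the row-major form is the one that matches `np.kron`.)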

This paradox is resolved by recognizing that

$$A ⊗ B = \Big(∑_{aa'} A_{aa'} |a⟩⟨a'| \Big)⊗\Big(∑_{bb'} B_{bb'} |b⟩⟨b'| \Big) \hat{{}={}} ∑_{aa'bb'}A_{aa'}B_{bb'}|a⊗b⟩⟨a'⊗b'|$$

This is basically the isomorphism $(U⊗U^*)⊗(V⊗V^*) ≅ (U⊗V)⊗(U⊗V)^*$.


Now back to the question: Write $ρ=∑_i A_i⊗B_i$ and use the mixed product property $(A⊗B)⋅(C⊗D) = AC⊗BD$, hence:

$$\begin{aligned} (5)\qquad \operatorname{tr}(ρ(X⊗I)) &= \operatorname{tr}\Big(∑_i (A_i⊗B_i)(X⊗I)\Big) \\&= \operatorname{tr}\Big(∑_i(A_iX⊗B_i)\Big) \\&= ∑_i \operatorname{tr}(A_iX)⋅\operatorname{tr}(B_i) \\&= ∑_i ⟨A_i^{\top}∣X⟩_F⋅\operatorname{tr}(B_i) \end{aligned}$$

For example, as Ben suggested, one can decompose $ρ=∑_{jk} E_{jk}⊗C_{jk}$ and get $∑_{jk} \operatorname{tr}(E_{jk}X)⋅\operatorname{tr}(C_{jk}) = ∑_{jk} X_{kj}\,\operatorname{tr}(C_{jk})$, although depending on the specific $ρ$ other decompositions may be more efficient.
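Formula (5) can be tested for an arbitrary sum-of-tensor-products decomposition (a sketch of mine; the number of terms `r` and all matrices are hypothetical test data):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, r = 3, 2, 4  # factor dimensions and number of decomposition terms
As = rng.standard_normal((r, m, m))
Bs = rng.standard_normal((r, n, n))
X = rng.standard_normal((m, m))

# Build rho = sum_i A_i ⊗ B_i
rho = sum(np.kron(As[i], Bs[i]) for i in range(r))

# tr(rho (X⊗I)) = sum_i tr(A_i X) * tr(B_i), by the mixed product property
lhs = np.trace(rho @ np.kron(X, np.eye(n)))
rhs = sum(np.trace(As[i] @ X) * np.trace(Bs[i]) for i in range(r))

assert np.allclose(lhs, rhs)
```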