$\mathrm{tr}(AB)=\mathrm{tr}(BA)$ proof

I only want an intuitive proof or idea that captures the essence of this equality. I have already proved it using the summation, but that doesn't help me actually see why they are equal. I hope you can help me. Thanks in advance!

712 views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 6 best solutions below.
On
One way to see this is to use the isomorphism $Hom(V, V) \cong V^* \otimes V$, given by
$$V^* \otimes V \ni \sum f_i \otimes e_i \mapsto A(x) = \sum f_i(x) e_i$$
Using this identification, the trace can be defined like this:
$$tr : V^* \otimes V \rightarrow \mathbb R$$ $$tr(\sum f_i \otimes e_i)=\sum f_i(e_i)$$
Trace is actually a special case of tensor contraction. For a general tensor of type $V^* \otimes \dots \otimes V^*\otimes V \otimes \dots \otimes V$, we can define contraction with respect to one covariant index $i$ and one contravariant index $j$ as
$$\alpha^i_j(f_1\otimes \dots \otimes f_n \otimes e_1 \otimes \dots\otimes e_m) = f_i(e_j) \cdot (f_1 \otimes \dots \otimes f_{i-1} \otimes f_{i+1} \otimes \dots \otimes f_n \otimes e_1 \otimes \dots \otimes e_{j-1} \otimes e_{j+1} \otimes \dots \otimes e_m)$$
And even with respect to a sequence of $k$ covariant and $k$ contravariant indices, for example
$$\alpha^{1,2}_{2,3}(f_1 \otimes f_2 \otimes e_1 \otimes e_2 \otimes e_3) = f_1(e_2)f_2(e_3)e_1$$
Trace is $\alpha^1_1$, and matrix multiplication is actually the contraction of tensor product:
$$A \leftrightarrow \sum f_i \otimes e_i$$ $$B \leftrightarrow \sum g_j \otimes d_j$$ $$A \cdot B \leftrightarrow \sum g_j(e_i) f_i\otimes d_j = \\ = \alpha^2_1(\sum f_i \otimes g_j \otimes e_i \otimes d_j) = \alpha^2_1(A\otimes B)$$
Now, take the trace of $AB$ and see that it is $$\sum f_i(d_j) g_j(e_i) = \alpha^{1,2}_{2,1}(A \otimes B)=\alpha^{2,1}_{1,2}(B \otimes A)$$
Finally, convince yourself that $\alpha^{1,2}_{2,1}=\alpha^{2,1}_{1,2}$.
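These contractions are easy to experiment with numerically. Below is a minimal sketch (NumPy assumed) using `np.einsum`, whose subscript strings are exactly tensor contractions: repeated indices are summed, so `"ij,ji->"` is the full contraction $\sum_{i,j} A_{ij}B_{ji}$ discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Matrix multiplication as a contraction of the tensor product:
# (AB)_{ik} = sum_j A_{ij} B_{jk}
AB = np.einsum("ij,jk->ik", A, B)
assert np.allclose(AB, A @ B)

# The trace of AB contracts the remaining pair of indices as well:
# tr(AB) = sum_{i,j} A_{ij} B_{ji}
lhs = np.einsum("ij,ji->", A, B)
rhs = np.einsum("ij,ji->", B, A)  # the same full contraction, read from B's side
assert np.isclose(lhs, rhs)
assert np.isclose(lhs, np.trace(A @ B))
```

The two `einsum` calls sum literally the same set of products, which is the symmetry $\alpha^{1,2}_{2,1}=\alpha^{2,1}_{1,2}$ in concrete form.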
On
Observe that if $A$ and $B$ are $n\times n$ matrices, $A=(a_{ij})$, and $B=(b_{ij})$, then $$(AB)_{ii} = \sum_{p=1}^n a_{ip}b_{pi},$$ so $$ \operatorname{Tr}(AB) = \sum_{j=1}^n\sum_{p=1}^n a_{jp}b_{pj}. $$ Calculating $(BA)_{ii} = \sum_{p=1}^n b_{ip}a_{pi}$ in the same way gives $$ \operatorname{Tr}(BA) = \sum_{j=1}^n\sum_{p=1}^n b_{jp}a_{pj} = \sum_{p=1}^n\sum_{j=1}^n a_{pj}b_{jp}, $$ which is the same sum with the summation indices relabeled; hence both traces are equal.
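The index computation above can be checked directly; here is a minimal sketch (NumPy assumed) that forms both double sums term by term and compares them:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# tr(AB) = sum_j sum_p a_{jp} b_{pj}
tr_AB = sum(A[j, p] * B[p, j] for j in range(n) for p in range(n))
# tr(BA) = sum_j sum_p b_{jp} a_{pj} -- the same terms in a different order
tr_BA = sum(B[j, p] * A[p, j] for j in range(n) for p in range(n))

assert np.isclose(tr_AB, tr_BA)
assert np.isclose(tr_AB, np.trace(A @ B))
```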
On
There are already several answers given, but here is my approach. The question asks for intuition behind the cyclic property of the trace of the product of square matrices. For example, $$\mathrm{tr}(ABC)=\mathrm{tr}(BCA)=\mathrm{tr}(CAB).$$ The key idea is to regard a square matrix as the adjacency matrix of a weighted directed graph. That is, $a_{i,j}$ is the weight of the directed edge $i\to j.$ The trace of such a matrix is the sum of the weights of the length-one loops of the graph (the self-loops $i\to i$).
The entries of the product of such matrices have an interpretation using directed walks, where the weight of a directed walk is the product of the weights of its edges. Combining the two interpretations, the trace of the product of such matrices is the sum of the weights of all of the closed walks. A closed walk has no preferred starting point, and the product of edge weights around it does not depend on where you start reading it; hence every cyclic rearrangement of the matrices produces the same sum.
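A quick numerical check of the cyclic property for three factors, regarding each matrix as a weighted adjacency matrix (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

# tr(ABC) sums, over all closed walks i -> j -> k -> i, the product
# A[i, j] * B[j, k] * C[k, i]; cyclically rotating the factors only
# changes where each walk is read from, so the sum is unchanged.
t1 = np.trace(A @ B @ C)
t2 = np.trace(B @ C @ A)
t3 = np.trace(C @ A @ B)
assert np.isclose(t1, t2) and np.isclose(t2, t3)

# Note: non-cyclic permutations such as tr(ACB) need not equal t1.
```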
On
The purpose of this answer is to provide intuition/motivation by 'discovering' the trace.
The $n \times n$ matrices form a vector space, so there are certainly many linear maps from this space to its scalar field. We want to focus on the linear maps that also 'respect' the multiplication of matrices in some fashion.
We are hoping that such a search can lead us to a unique concept/definition - a mapping that we will call the trace.
So here we are not looking for the trace of a product to be the product of traces. We want to weaken this. Now we know that multiplication of matrices is not commutative, but perhaps there are mappings that satisfy
$\tag 1 \mathrm{tr}(AB)=\mathrm{tr}(BA)$
Here is the fun part. If $A$ is any matrix
$\quad A = {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}}$
then (1) must hold for the orthogonal projection
$\quad B = {\displaystyle {\begin{bmatrix}1&0\\0&0\end{bmatrix}}}$
If you multiply out both $AB$ and $BA$, you see that the trace here is a function of only $a$. Similarly with
$\quad B = {\displaystyle {\begin{bmatrix}0&0\\0&1\end{bmatrix}}}$
the trace is a function of $d$.
If you continue this you will find the following is true,
We can characterize the trace completely: let $f$ be a linear functional on the space of square matrices satisfying $f(AB) = f(BA)$. Then $f$ and $\mathrm{tr}$ are proportional.
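One way to see why such an $f$ must be proportional to the trace is to test it on the elementary matrices $E_{ij}$ (a $1$ in position $(i,j)$, zeros elsewhere): $E_{ij}E_{ji}=E_{ii}$ and $E_{ji}E_{ij}=E_{jj}$ force all $f(E_{ii})$ to share one value $c$, while $E_{ii}E_{ij}=E_{ij}$ and $E_{ij}E_{ii}=0$ (for $i\ne j$) force $f(E_{ij})=0$, so by linearity $f=c\cdot\mathrm{tr}$. A sketch of these identities, plus a non-example (NumPy assumed):

```python
import numpy as np

def E(i, j, n=3):
    """Elementary matrix with a 1 in position (i, j)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# E_ij E_ji = E_ii and E_ji E_ij = E_jj, so f(E_ii) = f(E_jj) for all i, j:
assert np.array_equal(E(0, 1) @ E(1, 0), E(0, 0))
assert np.array_equal(E(1, 0) @ E(0, 1), E(1, 1))

# For i != j: E_ii E_ij = E_ij while E_ij E_ii = 0, so f(E_ij) = f(0) = 0:
assert np.array_equal(E(0, 0) @ E(0, 1), E(0, 1))
assert np.array_equal(E(0, 1) @ E(0, 0), np.zeros((3, 3)))

# Non-example: g(A) = sum of all entries is linear, but g(AB) != g(BA)
# for generic A, B -- so linearity alone does not pin down the trace.
rng = np.random.default_rng(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert not np.isclose((A @ B).sum(), (B @ A).sum())
```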
On
You could start by seeing the trace as $$\tag1\text{Tr}(A)=\sum_{k=1}^n\lambda_k,$$ the sum of the eigenvalues counting multiplicities. From this point of view, since both the eigenvalues and their multiplicities are immune to conjugation, we obtain $$\tag2 \text{Tr}(BAB^{-1})=\text{Tr}(A) $$ for any invertible $B$. If we fix $B$ and apply $(2)$ to the matrix $AB$, we have $$\tag3 \text{Tr}(BA)=\text{Tr}(AB). $$ for any $A$, and any invertible $B$. As $\text{Tr}$ is continuous (see below) and invertible matrices are dense, $(3)$ holds for all $A,B$.
To see that $\text{Tr}$ is (linear and) continuous: going back to $(2)$ and looking at the Schur decomposition $A=UTU^{*}$, we get from $(1)$ that $$\tag4 \text{Tr}(A)=\text{Tr}(T)=\sum_{k=1}^n\langle Te_k,e_k\rangle=\sum_{k=1}^n\langle U^*AUe_k,e_k\rangle =\sum_{k=1}^n\langle AUe_k,Ue_k\rangle, $$ where $\{e_k\}$ is the canonical basis. This shows that $\text{Tr}$ is linear; thus continuous.
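Steps $(1)$–$(3)$ of this argument can each be verified numerically; a minimal sketch (NumPy assumed, with a random $B$ that is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))  # generically invertible

# (1): the trace is the sum of the eigenvalues, with multiplicity.
# For a real matrix, complex eigenvalues come in conjugate pairs,
# so the sum is real up to roundoff.
eig_sum = np.linalg.eigvals(A).sum()
assert np.isclose(eig_sum.real, np.trace(A))
assert np.isclose(eig_sum.imag, 0.0)

# (2): eigenvalues are invariant under conjugation, hence so is the trace.
assert np.isclose(np.trace(B @ A @ np.linalg.inv(B)), np.trace(A))

# (3): applying (2) to the matrix AB gives tr(BA) = tr(AB).
assert np.isclose(np.trace(B @ A), np.trace(A @ B))
```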
Regard $A$ and $B$ as $n^2$-dimensional vectors. Convince yourself that the trace of $AB$ is the dot product of $A$ with $B^T$ (entry by entry): $$\mathrm{tr}(AB)=\sum_{i,j}a_{ij}b_{ji}.$$ Reading the same sum with the roles of the factors swapped gives exactly $\mathrm{tr}(BA)$, hence $$\mathrm{tr}(AB)=\langle A,B^T\rangle=\langle B,A^T\rangle=\mathrm{tr}(BA).$$
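The dot-product view in code, flattening the matrices into $n^2$-dimensional vectors (a minimal sketch, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# tr(AB) = sum_{i,j} A[i, j] * B[j, i]: the dot product of A and B^T
# viewed as n^2-dimensional vectors.
dot = A.ravel() @ B.T.ravel()
assert np.isclose(dot, np.trace(A @ B))

# The same sum read with the factors swapped is tr(BA):
assert np.isclose(B.ravel() @ A.T.ravel(), np.trace(B @ A))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```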