Any "tricks" for computing matrix commutators?


I often need to compute the commutator of various 2x2 and 3x3 matrices. It is particularly tedious, and even after much practice it is not getting significantly faster. Are you aware of any "tricks" or shortcuts for computing matrix commutators (or for recognising by inspection that two matrices commute)?

EDIT: thanks to a comment, I should add that most of the matrices I deal with are Hermitian ($H_{ij} = H_{ji}^*$).

Certainly two diagonal matrices will always commute, but I am not aware of many other properties.


There are 4 best solutions below


For the $2 \times 2$ case, what you need for the matrices $(a_{ij})$ and $(b_{ij})$ to commute is $$\begin{aligned} a_{12} b_{21} &= a_{21} b_{12}\\ a_{11} b_{12} + a_{12} b_{22} &= a_{12} b_{11} + a_{22} b_{12}\\ a_{11} b_{21} + a_{21} b_{22} &= a_{21} b_{11} + a_{22} b_{21}\end{aligned}$$

I don't think it can be made any simpler than that.
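A quick numerical sanity check of these three conditions, using NumPy (the variable names `c1`, `c2`, `c3` are mine): each condition, written as "left side minus right side", is exactly an entry of the commutator $AB-BA$, which is why the three of them together are equivalent to $[A,B]=0$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# The three conditions above, each as "left side minus right side"
c1 = A[0, 1] * B[1, 0] - A[1, 0] * B[0, 1]
c2 = (A[0, 0] * B[0, 1] + A[0, 1] * B[1, 1]) - (A[0, 1] * B[0, 0] + A[1, 1] * B[0, 1])
c3 = (A[0, 0] * B[1, 0] + A[1, 0] * B[1, 1]) - (A[1, 0] * B[0, 0] + A[1, 1] * B[1, 0])

C = A @ B - B @ A
# c1, c2 are the (1,1) and (1,2) entries of [A,B]; c3 is minus the (2,1) entry
assert np.isclose(C[0, 0], c1)
assert np.isclose(C[0, 1], c2)
assert np.isclose(C[1, 0], -c3)
# the (2,2) entry is -c1, since the commutator is traceless
assert np.isclose(C[1, 1], -c1)
```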


You need a basis and the structure coefficients of all commutators in that basis. In two dimensions one uses the Pauli basis $(1, \vec \sigma)$ with the commutator algebra $$ \left[ a +\vec b \cdot \vec \sigma , \ \ c +\vec d \cdot \vec \sigma \right] = \left[\vec b \cdot \vec \sigma , \ \ \vec d \cdot \vec \sigma \right] = 2i \ (\vec b \times \vec d) \cdot \vec \sigma$$ The matrices commute if and only if the coefficient vectors $\vec b$ and $\vec d$ are linearly dependent.

In order to determine the coefficients, use

$$2a = \text{tr}( a +\vec b \cdot \vec \sigma ),\ \ 2 b_k = \text{tr}( \sigma_k \ ( a +\vec b \cdot \vec \sigma ) )$$
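Here is a numerical sketch of the whole scheme in NumPy: extract $(a, \vec b)$ via the trace formulas, then check that the commutator of two Hermitian matrices equals $2i\,(\vec b\times\vec d)\cdot\vec\sigma$. The helper name `pauli_coeffs` is mine.

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
I2 = np.eye(2)

def pauli_coeffs(M):
    """Return (a, b) with M = a*I + b . sigma, via the trace formulas."""
    a = np.trace(M) / 2
    b = np.array([np.trace(s @ M) / 2 for s in sigma])
    return a, b

rng = np.random.default_rng(1)
# two random Hermitian 2x2 matrices
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A = (X + X.conj().T) / 2
Y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = (Y + Y.conj().T) / 2

aA, bA = pauli_coeffs(A)
aB, bB = pauli_coeffs(B)

# [A, B] = 2i (bA x bB) . sigma  -- the scalar parts drop out
lhs = A @ B - B @ A
rhs = 2j * np.einsum('k,kij->ij', np.cross(bA, bB), sigma)
assert np.allclose(lhs, rhs)
```

For Hermitian matrices the coefficients $a$ and $b_k$ come out real, so the commutator reduces to one real cross product.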

For products, one can use antisymmetry, bilinearity, and the expansion formula valid in any commutator algebra

$$ \left[ a b, c \right] = a \left[ b, c \right]+ \left[ a, c \right] b $$

For 3x3 matrices, one uses the basis consisting of $1$ and the eight traceless Hermitian Gell-Mann matrices: two diagonal matrices of trace $0$ (one of them a Pauli $\sigma_3$ block on the diagonal), and three pairs of off-diagonal Pauli-like matrices distributed over the $(1,2)$, $(1,3)$ and $(2,3)$ index pairs. The structure table is of course more complicated.

Gell-Mann basis of $\mathfrak{su}(3)$

Since all basis elements are traceless, one can use traces of products with the basis elements to determine the coefficients. Expansion formulas and replacement of the basis commutators by table lookup can be automated via pattern matching; program packages of this kind, hand-knitted by elementary particle physicists, once laid the foundations of computer algebra systems.
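A sketch of the trace-based coefficient extraction for the 3x3 case, using the standard Gell-Mann normalization $\text{tr}(\lambda_j\lambda_k)=2\delta_{jk}$ (the array layout below is my own):

```python
import numpy as np

# The eight Gell-Mann matrices, normalized so tr(l_j l_k) = 2 delta_jk
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1                 # lambda_1: Pauli sigma_1 in the (1,2) block
l[1][0, 1] = -1j; l[1][1, 0] = 1j           # lambda_2: sigma_2 in the (1,2) block
l[2][0, 0] = 1; l[2][1, 1] = -1             # lambda_3: sigma_3 in the (1,2) block
l[3][0, 2] = l[3][2, 0] = 1                 # lambda_4: sigma_1 in the (1,3) block
l[4][0, 2] = -1j; l[4][2, 0] = 1j           # lambda_5: sigma_2 in the (1,3) block
l[5][1, 2] = l[5][2, 1] = 1                 # lambda_6: sigma_1 in the (2,3) block
l[6][1, 2] = -1j; l[6][2, 1] = 1j           # lambda_7: sigma_2 in the (2,3) block
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)     # lambda_8: the other diagonal matrix

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
M = (X + X.conj().T) / 2    # random Hermitian 3x3 matrix

# coefficients from traces, exactly as in the 2x2 case
c0 = np.trace(M) / 3
c = np.array([np.trace(lk @ M) / 2 for lk in l])
assert np.allclose(M, c0 * np.eye(3) + np.einsum('k,kij->ij', c, l))
```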


This is not a complete answer, but it does give a way of thinking about when a matrix commutes with a given matrix $A$ in many cases.

If $A$ is a $2\times 2$ matrix, then certainly $I_2$ (the $2\times 2$ identity matrix) commutes with $A$, and $A$ commutes with itself. So, any linear combination $c_1I_2+c_2A$ will commute with $A$. For 'most' $2 \times 2$ matrices $A$, this will be all. (Note: The fact that a matrix satisfies its characteristic polynomial shows that in the $2\times 2$ case, you don't get anything new by using higher powers of $A$, which also commute with $A$.)

There will be some cases, for instance when $A = cI_2$, where more stuff commutes with $A$.

If $A$ is a $3\times 3$ matrix, we can use the same ideas to say that any linear combination of $I_3$, $A$, and $A^2$ will commute with $A$.
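A minimal NumPy check of this observation; the particular coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
I3 = np.eye(3)

# any polynomial in A commutes with A, e.g. 2*I - 0.5*A + 3*A^2
B = 2.0 * I3 - 0.5 * A + 3.0 * (A @ A)
assert np.allclose(A @ B, B @ A)
```

By Cayley-Hamilton, $A^3$ is itself a combination of $I_3$, $A$, $A^2$, so higher powers add nothing new in the $3\times 3$ case either.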


Assuming that commutator means $[A,B]=AB-BA.$

I write the matrices as sums of the matrix units $E_{ij},$ i.e. matrices that are zero except for a single entry $1$ at position $(i,j).$ This reduces everything to commutators $$ [aE_{ij},bE_{kl}]. $$ Now I look for matching indices from left to right ($\delta_{jk}$) and from right to left ($-\delta_{li}$) and get $$ [aE_{ij},bE_{kl}]=ab\,(\delta_{jk}E_{il}-\delta_{li}E_{kj}). $$ Of course, I only consider actual matches and do not really write down the deltas; I just look for matching indices left to right and then right to left.

This is an algorithmic method that can be carried out without thinking or memorizing anything, and it makes it easy to check whether an error occurred.
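The index-matching rule can be verified exhaustively in a few lines of NumPy (the helper names `E` and `comm_units` are mine):

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit: 1 at position (i, j), zeros elsewhere (0-based indices)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

def comm_units(i, j, k, l, n=3):
    """[E_ij, E_kl] = delta_jk E_il - delta_li E_kj, built from the rule."""
    out = np.zeros((n, n))
    if j == k:          # match left to right
        out += E(i, l, n)
    if l == i:          # match right to left
        out -= E(k, j, n)
    return out

# check the rule against the direct commutator for all 3x3 index combinations
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                direct = E(i, j) @ E(k, l) - E(k, l) @ E(i, j)
                assert np.allclose(direct, comm_units(i, j, k, l))
```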