$4\times4$ "Cofactor Transpose Matrix" calculation gone wrong: Appendix A.1, *Shankar, R., Principles of Quantum Mechanics*


In Appendix A.1 of Shankar, R., *Principles of Quantum Mechanics*, the cofactor transpose of a $3\times3$ matrix $M$ is given as (I will refer to this as the first procedure)

$$\overline{M}=\begin{bmatrix}(a_R)_1 & (b_R)_1 & (c_R)_1\\(a_R)_2 & (b_R)_2 & (c_R)_2\\(a_R)_3 & (b_R)_3 & (c_R)_3\end{bmatrix}\tag{A.1.4}$$

where subscripts denote the respective components of the reciprocal vectors $\vec{A_R}=\vec{B}\times\vec{C}$, $\vec{B_R}=\vec{C}\times\vec{A}$, $\vec{C_R}=\vec{A}\times\vec{B}$.

Now, $$M\cdot\overline{M}=\begin{bmatrix}A\cdot A_R& 0 & 0\\0 & B\cdot B_R & 0\\0 & 0 & C\cdot C_R\end{bmatrix}=\det{M}\begin{bmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{bmatrix}\tag{A.1.4,A.1.5}$$

So that $$M^{-1}=\frac{\overline M}{\det M} \tag{A.1.7}$$ holds no matter what the order of $M$ is.
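(As an aside, not from the book: here is a quick numerical sanity check of (A.1.7), assuming Python with numpy is available. It builds the cofactor transpose of a fixed nonsingular $3\times3$ matrix directly from signed minors and confirms that $\overline M/\det M$ is the inverse.)

```python
import numpy as np

# Fixed nonsingular test matrix (det = 8), chosen arbitrarily
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def cofactor_transpose(M):
    """Adjugate: entry (j, i) is (-1)^(i+j) times the minor of entry (i, j)."""
    n = M.shape[0]
    adj = np.empty_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

Minv = cofactor_transpose(M) / np.linalg.det(M)
print(np.allclose(Minv @ M, np.eye(3)))  # True
```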

What I want to do is verify the rule for finding the cofactor transpose matrix (which I will refer to as the second procedure) for $4\times4$ matrices.

I first verified it for $3\times3$ matrices using the matrix $\overline{M}$ above. For reference, this is how I reconciled the two procedures in the $3\times3$ case:

It is given that

$$M=\begin{bmatrix}\overset{+}{a_1}& \overset{-}{a_2} & \overset{+}{a_3}\\\overset{-}{b_1} & \overset{+}{b_2} & \overset{-}{b_3}\\\overset{+}{c_1} & \overset{-}{c_2} & \overset{+}{c_3}\end{bmatrix}\tag{A.1.1}$$ and the sign factors $c_{ij}=(-1)^{i+j}$ used in calculating cofactors are indicated at the top of each element in the matrix above.

The cofactor transpose matrix found from the first procedure is:

$$\overline{M}=\begin{bmatrix}b_2c_3 - b_3c_2 & c_2a_3 - c_3a_2& a_2b_3 - a_3b_2\\b_3c_1 - b_1c_3 & c_3a_1 - c_1a_3& a_3b_1 - a_1b_3\\b_1c_2 - b_2c_1 & c_1a_2 - c_2a_1& a_1b_2 - a_2b_1\end{bmatrix}$$

and the elements calculated from the two procedures match. But this does not happen in the $\pmb{4\times4}$ case. Why is that?
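For concreteness, this is how I checked the $3\times3$ agreement numerically (a throwaway sketch, assuming numpy; the matrix entries are arbitrary):

```python
import numpy as np

# Rows of M are the vectors A, B, C of (A.1.1); entries are arbitrary
M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
A, B, C = M

# First procedure: columns of M-bar are the reciprocal vectors (A.1.4)
Mbar = np.column_stack([np.cross(B, C), np.cross(C, A), np.cross(A, B)])

# Second procedure: transpose of the matrix of signed cofactors
def cof(i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

cof_T = np.array([[cof(i, j) for j in range(3)] for i in range(3)]).T

print(np.allclose(Mbar, cof_T))                              # True
print(np.allclose(M @ Mbar, np.linalg.det(M) * np.eye(3)))   # True
```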

The exercise that led me to my doubt:

For the ${4\times4}$ matrix case, I wrote

$$M=\begin{bmatrix}\overset{+}{(a_1)_1} & \overset{-}{(a_1)_2} & \overset{+}{(a_1)_3} & \overset{-}{(a_1)_4}\\\overset{-}{(a_2)_1} & \overset{+}{(a_2)_2} & \overset{-}{(a_2)_3} & \overset{+}{(a_2)_4}\\\overset{+}{(a_3)_1} & \overset{-}{(a_3)_2} & \overset{+}{(a_3)_3} & \overset{-}{(a_3)_4}\\\overset{-}{(a_4)_1} & \overset{+}{(a_4)_2} & \overset{-}{(a_4)_3} & \overset{+}{(a_4)_4}\end{bmatrix}$$

where the subscripts outside the parentheses are used to denote components and those inside to denote the vector.

It is given that $$\vec{A_{1R}}=\vec{A_2}\times\vec{A_3}\times\vec{A_4}$$ $$\vec{A_{2R}}=\vec{A_3}\times\vec{A_4}\times\vec{A_1}$$ $$\vec{A_{3R}}=\vec{A_4}\times\vec{A_1}\times\vec{A_2}$$ $$\vec{A_{4R}}=\vec{A_1}\times\vec{A_2}\times\vec{A_3}\tag{A.1.9}$$ (cyclic permutations), where the "cross product" of three vectors denotes the generalized cross product defined by a determinant with unit vectors in the first row.

Just as before, these reciprocal vectors produce an identity matrix from the matrix product of $\pmb M$ and $\pmb {\overline M}$, inasmuch as each is perpendicular to all the vectors appearing in its expression. Explicitly, $$\vec{A_{4R}}=\begin{vmatrix}i & j & k & l\\(a_1)_1 & (a_1)_2 & (a_1)_3 & (a_1)_4\\(a_2)_1 & (a_2)_2 & (a_2)_3 & (a_2)_4\\(a_3)_1 & (a_3)_2 & (a_3)_3 & (a_3)_4\end{vmatrix}\tag{A.1.8}$$ (the other three above it can be written similarly), and these are what we use to write the cofactor transpose matrix $\overline{M}$, just as in the $3\times 3$ case.

Finally again,

$$\overline M=\begin{bmatrix}(a_{1R})_1 & (a_{1R})_2 & (a_{1R})_3 & (a_{1R})_4\\(a_{2R})_1 & (a_{2R})_2 & (a_{2R})_3 & (a_{2R})_4\\(a_{3R})_1 & (a_{3R})_2 & (a_{3R})_3 & (a_{3R})_4\\(a_{4R})_1 & (a_{4R})_2 & (a_{4R})_3 & (a_{4R})_4\end{bmatrix}^T$$

and

$$M\cdot\overline{M}=\begin{bmatrix}A_1\cdot A_{1R}& 0 & 0 & 0\\0 & A_2\cdot A_{2R} & 0 & 0\\0 & 0 & A_3\cdot A_{3R} & 0\\0 & 0 & 0 & A_4\cdot A_{4R}\end{bmatrix}=\det{M}\begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{bmatrix}$$

As you can see, juxtaposing the cofactors of all the elements of the fourth row obtained from the two procedures shows that they do not match in sign. The same happens for the second row.

For example, it seems that

$$(a_{4R})_1= \text{Cofactor thought from } \overline M \text{ of } (a_4)_1=\begin{vmatrix}(a_1)_2 & (a_1)_3 & (a_1)_4\\(a_2)_2 & (a_2)_3 & (a_2)_4\\(a_3)_2 & (a_3)_3 & (a_3)_4\end{vmatrix}$$

from the first procedure (since we aim at creating the $I_{4\times4}$ matrix according to it), which is precisely the negative of the value calculated from the second procedure.
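Here is the numerical check that exposed the sign discrepancy (again just a sketch assuming numpy; the generalized "cross product" of three $4$-vectors is implemented directly from the determinant formula (A.1.8), with the unit vectors in the first row, and each resulting reciprocal vector is compared against the corresponding row of cofactors):

```python
import numpy as np

# Arbitrary 4x4 test matrix; rows are the vectors A_1 ... A_4
M = np.array([[1.0, 2.0, 0.0, 3.0],
              [4.0, 1.0, 1.0, 0.0],
              [2.0, 5.0, 1.0, 1.0],
              [0.0, 1.0, 3.0, 2.0]])

def triple_cross(rows):
    """Generalized cross product of three 4-vectors via (A.1.8):
    component j is the signed minor of the j-th unit vector in the first row."""
    out = np.empty(4)
    for j in range(4):
        minor = np.delete(rows, j, axis=1)   # drop column j of the 3x4 block
        out[j] = (-1) ** j * np.linalg.det(minor)
    return out

# A_{1R} = A2 x A3 x A4, A_{2R} = A3 x A4 x A1, ... (cyclic, as in A.1.9)
orders = [(1, 2, 3), (2, 3, 0), (3, 0, 1), (0, 1, 2)]
recip = np.array([triple_cross(M[list(o)]) for o in orders])

# Second procedure: signed cofactors of M
def cof(i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

cofs = np.array([[cof(i, j) for j in range(4)] for i in range(4)])

for i in range(4):
    print(f"row {i + 1}: equal: {np.allclose(recip[i], cofs[i])}, "
          f"negated: {np.allclose(recip[i], -cofs[i])}")
```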

Am I applying the first procedure incorrectly (have I "over-generalised" it)? Should the cofactor transpose matrix $\overline M$ contain extra minus signs to accommodate this quirk?

Thanking you in anticipation!