Trying to find the meaning of transpose matrices, and Hermitian


I've recently been studying QM (and still am), where I came across Hermitian operators and tried to find a mathematical and physical meaning for them. I know the condition has something to do with the transpose of a matrix $\mathbf{A}$, as in: $$\mathbf{A} = \mathbf{A}^\dagger = \overline{\mathbf{A}^T}$$ Here I realized that I don't really know what the transpose truly is, or what it means geometrically and physically.
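For concreteness, the defining condition $\mathbf{A} = \overline{\mathbf{A}^T}$ can be checked numerically; a small sketch using NumPy, with an illustrative example matrix:

```python
import numpy as np

# An example Hermitian matrix: real diagonal entries,
# conjugate-symmetric off-diagonal entries.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# Hermitian means A equals its own conjugate transpose.
print(np.allclose(A, A.conj().T))  # True
```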

Could anyone kindly answer?


In the real setting, the transpose of an operator ${\bf A}$ is the operator ${\bf A}^T$ that delivers the same scalar value in the bilinear form induced by ${\bf A}$, as in: ${\bf a} \cdot {\bf A} {\bf b} = {\bf A}^T {\bf a} \cdot {\bf b}$, for any two vectors ${\bf a}$ and ${\bf b}$ in $\mathbb{R}^3$.
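As a quick numerical sanity check of this defining property, here is a sketch using NumPy with an arbitrary random matrix and random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
b = rng.standard_normal(3)

# The transpose moves A from one side of the dot product to the other.
lhs = a @ (A @ b)    # a . (A b)
rhs = (A.T @ a) @ b  # (A^T a) . b
print(np.isclose(lhs, rhs))  # True
```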

There is plenty of geometric meaning in that which you might want to explore, and then find some examples from quantum mechanics for the complex case. It may also help to understand dual vector spaces for the real interpretation.


Lagrange introduced the concept when studying linear ODEs. For example, suppose you want to study $Lf$, where $$ Lf = af''+bf'+cf. $$ Then you can use integration by parts to obtain the Lagrange identity $$ \int_{x_0}^{x_1} (Lf)\,g \,dx = \int_{x_0}^{x_1} f\,(L^{\dagger}g)\,dx + \mbox{evaluation terms}, $$ where $$ L^{\dagger}g=(ag)''-(bg)'+cg. $$ (The integration limits are written $x_0$, $x_1$ here to avoid clashing with the coefficients $a$, $b$.) This formalism was exploited when studying "self-adjoint" operators such as $$ Lf = (af')'+bf. $$ There are a few changes when you start dealing with complex coefficients.
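One way to see the identity is to verify symbolically that $(Lf)g - f(L^{\dagger}g)$ is a total derivative; a sketch using SymPy, where the candidate antiderivative $P = af'g - f(ag)' + bfg$ is the standard one that falls out of integration by parts:

```python
import sympy as sp

x = sp.symbols('x')
a, b, c, f, g = (sp.Function(n)(x) for n in ('a', 'b', 'c', 'f', 'g'))

Lf = a*f.diff(x, 2) + b*f.diff(x) + c*f       # L f  = af'' + bf' + cf
Ldg = (a*g).diff(x, 2) - (b*g).diff(x) + c*g  # L†g = (ag)'' - (bg)' + cg

# Candidate antiderivative: (Lf)g - f(L†g) should equal dP/dx.
P = a*f.diff(x)*g - f*(a*g).diff(x) + b*f*g

print(sp.simplify((Lf*g - f*Ldg) - P.diff(x)))  # 0
```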

All of this equipped Sturm and Liouville to study second order self-adjoint differential equations with an eigenvalue parameter. The Lagrange identity $$ (Lf)g-f(L^{\dagger}g)= \frac{d}{dx}(\cdots) $$ was critical to everything. And it could be used for reduction of order in studying linear ODEs. The same formalism is still used in Quantum Mechanics, where self-adjoint operators naturally occur, and these can be 'diagonalized' with an eigenfunction basis.

The adjoint $A^*$ of an $n\times n$ complex matrix $A$ is similarly defined by $$ \langle Af,g\rangle = \langle f,A^*g\rangle. $$ For a real matrix, the adjoint involves no conjugation and is just the transpose $A^T$; for a complex matrix, the adjoint $A^*$ is the conjugate transpose.
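A quick numerical check of this relation for a random complex matrix; a sketch using NumPy, where `np.vdot` conjugates its first argument, matching the convention $\langle u,v\rangle = \overline{u}^T v$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
f = rng.standard_normal(3) + 1j * rng.standard_normal(3)
g = rng.standard_normal(3) + 1j * rng.standard_normal(3)

A_star = A.conj().T  # adjoint = conjugate transpose

# np.vdot(u, v) computes <u, v> with u conjugated.
lhs = np.vdot(A @ f, g)       # <Af, g>
rhs = np.vdot(f, A_star @ g)  # <f, A*g>
print(np.isclose(lhs, rhs))  # True
```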

Self-adjoint operators $A^{*}=A$ were found to be very useful when Sturm and Liouville initiated their study of self-adjoint operators, which arise out of the separation-of-variables techniques applied by Fourier to the Heat Equation, and later to other PDEs. Lagrange was the genius behind all of this study, at least so far as I am concerned. The idea of trying to find a basis of eigenfunctions of $A$ arose naturally in the context of separation of variables, leading to eigenfunction expansions and, in the continuous case, Fourier integral expansions.