Can someone explain why we define adjoint?


Recently I've been reviewing linear algebra. The definition of the adjoint of a linear map on an inner product space does not seem very natural. It looks like people use it to define normal and self-adjoint operators and to prove the spectral theorem. But from the conclusion of the spectral theorem (diagonalizability with respect to an orthonormal basis) I can see nothing to do with the adjoint. Why do we define the adjoint, and what makes it important?

1 Answer
Adjoints are useful in many settings. I will give just one very important example: solving linear equations.

Consider a general bounded operator $L$ from a Hilbert space $H_1$ to another one $H_2$, and let $\langle\cdot,\cdot\rangle_{H_1}$ and $\langle\cdot,\cdot\rangle_{H_2}$ be the associated inner products. Then one has

$$\langle Lu,v\rangle_{H_2}=\langle u,L^*v\rangle_{H_1}$$

for all $u\in H_1$ and $v\in H_2$, and where $L^*$ is the adjoint of $L$.
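In the finite-dimensional real case with the standard inner products (an assumption made purely for illustration), the adjoint is just the matrix transpose, and the defining identity can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration: H1 = R^4, H2 = R^3, L a real matrix.
# With the standard inner products, the adjoint L* is the transpose L^T.
L = rng.standard_normal((3, 4))
u = rng.standard_normal(4)
v = rng.standard_normal(3)

lhs = np.dot(L @ u, v)    # <Lu, v> in H2
rhs = np.dot(u, L.T @ v)  # <u, L*v> in H1
print(np.allclose(lhs, rhs))  # True
```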

Now, assume that we would like to solve $Lx=y$. If $LL^*$ is invertible, a solution is given by $x=L^*(LL^*)^{-1}y$: indeed, $Lx=(LL^*)(LL^*)^{-1}y=y$.

In the finite-dimensional case, that is, $H_1=\mathbb{R}^n$ and $H_2=\mathbb{R}^m$, the operator $L$ can be represented by a matrix $M$, and the equation becomes $y=Mx$. A solution is given by $x=M^*(MM^*)^{-1}y$, where we have assumed that $MM^*$ is invertible (equivalently, that $M$ has full row rank). In fact, the expression $M^*(MM^*)^{-1}$ is nothing else but the Moore-Penrose pseudoinverse of the matrix $M$.
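As a quick sketch of this last point (using a small random matrix purely as an illustration), one can verify with NumPy that the formula $M^*(MM^*)^{-1}y$ both solves $Mx=y$ and agrees with the pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# A wide matrix (m < n) with full row rank, so M M^T is invertible.
M = rng.standard_normal((3, 5))
y = rng.standard_normal(3)

# Real case: the adjoint M* is the transpose M^T.
# Solve (M M^T) z = y, then set x = M^T z, i.e. x = M^T (M M^T)^{-1} y.
x = M.T @ np.linalg.solve(M @ M.T, y)

# x indeed solves M x = y ...
print(np.allclose(M @ x, y))  # True

# ... and coincides with the Moore-Penrose pseudoinverse applied to y.
print(np.allclose(x, np.linalg.pinv(M) @ y))  # True
```

(For a wide full-row-rank matrix the system is underdetermined; this particular solution is the one the pseudoinverse selects.)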