Infinite dimensional trace vs finite dimensional trace


For a Hilbert space $H$ and a linear operator $T:H\to H$, I've seen the trace of $T$ defined as $\sum_k \langle Te_k, e_k\rangle$ (where $\{e_k\}$ is any orthonormal basis), provided this sum is finite.

How does this generalize the finite-dimensional case? I see how it generalizes for self-adjoint operators (or normal operators on complex Hilbert spaces), since those are unitarily diagonalizable. But what about a general linear operator on a finite-dimensional space?
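As a quick numerical sanity check of the point being asked about (this is my own illustration, not part of the original question): in finite dimensions, $\sum_k \langle Te_k, e_k\rangle$ gives the same value for every orthonormal basis, even for a non-normal $T$, because it equals $\operatorname{tr}(Q^\top T Q) = \operatorname{tr}(T)$ when the basis vectors are the columns of an orthogonal matrix $Q$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n))  # a general (typically non-normal) operator

# Standard basis: the sum <T e_k, e_k> is just the sum of diagonal entries.
std_basis_sum = sum(T[k, k] for k in range(n))

# A random orthonormal basis, obtained as the Q factor of a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
other_basis_sum = sum(Q[:, k] @ T @ Q[:, k] for k in range(n))

# Both sums agree with the usual matrix trace.
print(std_basis_sum, other_basis_sum, np.trace(T))
```

The agreement reflects the cyclic property of the trace: $\operatorname{tr}(Q^\top T Q) = \operatorname{tr}(T Q Q^\top) = \operatorname{tr}(T)$.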