If $[A,A^*]=0$ (where $A^*$ is the conjugate transpose of $A$), that is, if $A$ is a normal matrix,
how does it follow that $A$ is diagonalizable?
Or is this just the definition of a normal matrix?
Normality is needed in order for $A$ to generate a commutative C*-algebra. Standard C*-algebra theory then asserts that the unital C*-algebra generated by $A$ is isomorphic to the algebra of continuous functions on the spectrum of $A$, that is, $$C^*(A,I)\cong C(\sigma(A)),$$ where $I$ is the unit.

You can then think of this as a representation of said C*-algebra by multiplication operators $M_f$, with $f\in C(\sigma(A))$, on the Hilbert space $L^2(\sigma(A))$ with respect to a basic measure coming from a cyclic vector, i.e. $$(M_f\psi)(x) = f(x)\psi(x)$$ for any $f\in C(\sigma(A))$ and $\psi\in L^2(\sigma(A))$. Hence, under this representation, every element of $C^*(A,I)$, including $A$ itself, acts as a diagonal operator.
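In finite dimensions the same picture can be checked numerically: a normal matrix acts as multiplication by its spectrum in an orthonormal eigenbasis. A minimal numpy sketch (the particular matrix is my own illustration, not from the argument above):

```python
import numpy as np

# A normal (but not Hermitian) matrix: a rotation by 90 degrees.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]], dtype=complex)

# Normality check: A commutes with its conjugate transpose.
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# Eigenvectors of a normal matrix with distinct eigenvalues are
# orthogonal, so the eigenvector matrix returned by eig is unitary here.
w, V = np.linalg.eig(A)
assert np.allclose(V.conj().T @ V, np.eye(2))

# In the eigenbasis, A acts as multiplication by sigma(A) = {+i, -i}.
D = V.conj().T @ A @ V
assert np.allclose(D, np.diag(w))
```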
No. "Normal" by definition means that the matrix commutes with its Hermitian conjugate (or adjoint, in the case of an infinite-dimensional operator).

In finite dimensions one has the theorem that a matrix is normal if and only if it is unitarily diagonalizable.
To prove unitarily diagonalizable implies normal:
If linear map $A:\mathbb{C}^N\to\mathbb{C}^N$ has a matrix that is unitarily diagonalizable, then by definition we have $\mathbf{A}=\mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^\dagger$ with $\mathbf{U}\,\mathbf{U}^\dagger=\mathrm{id}$, where $\mathbf{A}$ is the matrix of $A$ and $\boldsymbol{\Lambda}$ is diagonal. A straightforward calculation then shows that an entity of this kind commutes with its Hermitian conjugate and is therefore normal.
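Spelling out that calculation: with $\mathbf{A}=\mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^\dagger$,
$$\mathbf{A}\,\mathbf{A}^\dagger=\mathbf{U}\,\boldsymbol{\Lambda}\,\mathbf{U}^\dagger\,\mathbf{U}\,\boldsymbol{\Lambda}^\dagger\,\mathbf{U}^\dagger=\mathbf{U}\,\boldsymbol{\Lambda}\,\boldsymbol{\Lambda}^\dagger\,\mathbf{U}^\dagger=\mathbf{U}\,\boldsymbol{\Lambda}^\dagger\,\boldsymbol{\Lambda}\,\mathbf{U}^\dagger=\mathbf{A}^\dagger\,\mathbf{A},$$
where the middle step uses the fact that diagonal matrices commute.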
To prove normal implies unitarily diagonalizable:
To prove the other way, you need to (or at least this is the proof I recall) make use of one of the zillions of versions of the famous Schur's lemma that graze on the Great Mathematical Plains. The version I have in mind is that every complex square matrix is unitarily similar to an upper triangular matrix. An elementary proof of this version of the lemma runs as follows: a matrix $\mathbf{A}$ has at least one eigenvector $X$ - let's make it a unit vector $\hat{X}$ - with eigenvalue $\lambda$ and, by the Gram-Schmidt procedure, build a unitary matrix $\mathbf{P}_X=[\hat{X}\;\hat{Y}_2\;\cdots\;\hat{Y}_N]$ with orthonormal columns $\hat{X},\hat{Y}_2,\dots,\hat{Y}_N$. Now work out $\mathbf{P}_X^{-1}\,\mathbf{A}\,\mathbf{P}_X=\mathbf{P}_X^\dagger\,\mathbf{A}\,\mathbf{P}_X$ and you find it is a matrix of the form:
$$\left(\begin{array}{c|c}\lambda & \cdots\\\hline \mathbf{0}&\mathbf{A}_1\end{array}\right)$$
where $\mathbf{A}_1$ is an $(N-1)\times(N-1)$ matrix. So now repeat the procedure with the matrix $\mathbf{A}_1$ and iterate, each iteration leaving a lower-right square block of dimension one less than at the previous iteration. When we reach the lower-right corner, we have an upper triangular matrix, unitarily similar to the original via the product of the unitary transformations from each step.
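The iteration just described can be sketched directly in code. Here is a hedged numpy implementation (the function name `schur_triangularize`, and the use of QR on $[\hat{X}\,|\,I]$ to perform the Gram-Schmidt completion, are my own choices):

```python
import numpy as np

def schur_triangularize(A):
    """Unitarily triangularize A by the recursion described above:
    pick a unit eigenvector, complete it to a unitary P_X (Gram-Schmidt,
    done here via QR), then recurse on the lower-right (N-1)x(N-1) block.
    Returns (P, T) with T upper triangular and A = P T P^dagger."""
    n = A.shape[0]
    A = A.astype(complex)
    if n == 1:
        return np.eye(1, dtype=complex), A
    # Every complex square matrix has at least one eigenpair.
    _, V = np.linalg.eig(A)
    x = V[:, 0] / np.linalg.norm(V[:, 0])
    # QR of [x | I] keeps x (up to phase) as the first column and
    # orthonormalizes the rest: this is the matrix P_X of the proof.
    Q, _ = np.linalg.qr(np.column_stack([x, np.eye(n)]))
    B = Q.conj().T @ A @ Q          # first column is (lambda, 0, ..., 0)
    P_sub, _ = schur_triangularize(B[1:, 1:])
    P2 = np.eye(n, dtype=complex)
    P2[1:, 1:] = P_sub
    P = Q @ P2                      # accumulate the unitaries of each step
    return P, P.conj().T @ A @ P

# Usage: triangularize a random complex matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P, T = schur_triangularize(A)
assert np.allclose(P @ P.conj().T, np.eye(4))   # P unitary
assert np.allclose(np.tril(T, -1), 0)           # T upper triangular
assert np.allclose(P @ T @ P.conj().T, A)       # A = P T P^dagger
```

In production one would of course call a library routine (e.g. `scipy.linalg.schur`); the point here is only to mirror the column-by-column construction of the proof.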
So now, by dint of the lemma, write your general $N\times N$ matrix as:
$$\mathbf{A}=\mathbf{P}\,\tilde{\mathbf{A}}\,\mathbf{P}^\dagger$$
with $\tilde{\mathbf{A}}$ upper triangular and $\mathbf{P}$ unitary. Given the starting assumption that $\mathbf{A}$ commutes with $\mathbf{A}^\dagger$, you should now be able to show by a fairly straightforward calculation that $\tilde{\mathbf{A}}$ must in fact be diagonal, so that $\mathbf{A}$ is diagonalized by the unitary matrix $\mathbf{P}$.
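One way to carry out that last calculation: normality is preserved by unitary similarity, so $\tilde{\mathbf{A}}$ commutes with $\tilde{\mathbf{A}}^\dagger$. Comparing the $(1,1)$ entries of the two products,
$$(\tilde{\mathbf{A}}\,\tilde{\mathbf{A}}^\dagger)_{11}=\sum_{k=1}^N|\tilde{A}_{1k}|^2,\qquad(\tilde{\mathbf{A}}^\dagger\,\tilde{\mathbf{A}})_{11}=|\tilde{A}_{11}|^2,$$
since the first column of the upper triangular $\tilde{\mathbf{A}}$ contains only $\tilde{A}_{11}$. Equality forces $\tilde{A}_{1k}=0$ for $k>1$, i.e. the first row is already diagonal; repeating the argument down the diagonal shows $\tilde{\mathbf{A}}$ is diagonal.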