Recently I came across normal matrices and one of their properties, which states that their eigenvectors are the same as those of their adjoint and are orthogonal. I've gone through some proofs and I understand them, but when I tried to prove the same thing using the inner product, the argument seemed to show that the above is true for any arbitrary matrix. Can someone help me point out where I'm going wrong?
Let's say A is a matrix and B is its adjoint. If x is an eigenvector of A with eigenvalue k, then,
< x | A | x > = < x | kx > = k< x | x >
also
< x | A | x > = < Bx | x >
hence
< Bx | x > = k< x | x > = < kx | x >
So x is also an eigenvector of the adjoint of A.
When trying to determine why a proof is wrong, it can be very helpful to examine a counterexample step by step. Sometimes it is hard to find a counterexample, but in this case choosing almost any random matrix will suffice.
For example, consider $$A = \begin{pmatrix} -1 & 2 \\ 1 & 0 \end{pmatrix}, $$ which I chose by picking a random matrix with integer eigenvalues. Then a short computation shows that its (right) eigenvectors are $(1, -\tfrac{1}{2})$ and $(1, 1)$, while the eigenvectors of the conjugate transpose are $(-1, 1)$ and $(1, 2)$.
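If it helps, this computation can be checked numerically. Here is a short sketch (assuming NumPy is available) that computes the eigenpairs of $A$ and of its transpose and confirms that, although the eigenvalues agree, the eigenvectors do not:

```python
import numpy as np

# The counterexample matrix; its entries are real, so the conjugate
# transpose is simply the transpose.
A = np.array([[-1.0, 2.0],
              [ 1.0, 0.0]])

evals, evecs = np.linalg.eig(A)      # eigenpairs of A
evalsT, evecsT = np.linalg.eig(A.T)  # eigenpairs of A^T

# Both matrices have the same eigenvalues (1 and -2) ...
assert np.allclose(np.sort(evals), np.sort(evalsT))

# ... but for each shared eigenvalue, the eigenvectors differ.
for k in range(2):
    j = int(np.argmin(np.abs(evalsT - evals[k])))  # match up eigenvalues
    v = evecs[:, k] / evecs[0, k]    # rescale so the first entry is 1
    w = evecsT[:, j] / evecsT[0, j]  # (safe here: first entries are nonzero)
    print(evals[k], v, w, np.allclose(v, w))
```

The rescaling step matters: `np.linalg.eig` returns unit-norm eigenvectors, so two eigenvectors should be compared up to a scalar multiple, not entrywise.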
This is a counterexample to the claim that the eigenvectors of any arbitrary (square) matrix are the same as those of the conjugate transpose of that matrix. I suggest going through OP's argument with this counterexample to see where the proof goes astray.
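As a concrete version of that suggestion, here is a sketch (again assuming NumPy) that traces the question's chain of equalities on this counterexample: the scalar identity $\langle Bx, x \rangle = k \langle x, x \rangle$ does hold, and yet $Bx \neq kx$.

```python
import numpy as np

# Trace the question's argument on the counterexample. Everything is
# real, so the adjoint B is just the transpose of A, and the inner
# product is the ordinary dot product.
A = np.array([[-1.0, 2.0],
              [ 1.0, 0.0]])
B = A.T
x = np.array([1.0, 1.0])   # eigenvector of A with eigenvalue k = 1
k = 1.0

assert np.allclose(A @ x, k * x)          # A x = k x, as assumed

# The scalar equality <Bx, x> = k <x, x> from the question holds ...
assert np.isclose((B @ x) @ x, k * (x @ x))

# ... but B x = (0, 2) is not a multiple of x = (1, 1),
# so x is NOT an eigenvector of B.
print(B @ x)
```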
I'll also note that the OP's argument implicitly claims something slightly stronger than the question title, namely that $A^*$ has the same eigenvectors with the same associated eigenvalues. The counterexample $$ B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$ has the nice property that $B$ and $B^*$ have both the same eigenvectors and the same eigenvalues, but the eigenvalue associated with each eigenvector swaps.
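This swap can also be checked numerically. The sketch below (assuming NumPy) verifies that the eigenvector of $B$ for each eigenvalue $\lambda \in \{i, -i\}$ matches the eigenvector of $B^*$ for the other eigenvalue $-\lambda$:

```python
import numpy as np

# Second counterexample: B and its conjugate transpose share the same
# eigenvectors, but the eigenvalue attached to each eigenvector swaps
# (i <-> -i).
B = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])

evals, evecs = np.linalg.eig(B)           # eigenpairs of B
evalsH, evecsH = np.linalg.eig(B.conj().T)  # eigenpairs of B^*

for k in range(2):
    lam = evals[k]
    v = evecs[:, k] / evecs[0, k]  # rescale; first entries are nonzero here
    # The matching eigenvector of B^* belongs to the *other* eigenvalue, -lam.
    j = int(np.argmin(np.abs(evalsH - (-lam))))
    w = evecsH[:, j] / evecsH[0, j]
    assert np.allclose(v, w)
    print(lam, "->", evalsH[j])
```

(For this particular $B$ the swap $\lambda \mapsto -\lambda$ coincides with conjugation $\lambda \mapsto \bar\lambda$, since the eigenvalues are purely imaginary.)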
At the bottom, I note a couple of less important asides.
To typeset nice angle brackets, use \langle x, y \rangle, which looks like $\langle x, y \rangle$.

In my answer, I avoid the term "adjoint" because it is overloaded. What is sometimes called the "(classical) adjoint", the "adjugate", or the "adjunct" matrix (written $\mathrm{adj}(A)$) is the transpose of the cofactor matrix of $A$. The matrix form of the adjoint operator is also called the "adjoint" and means the conjugate transpose matrix; in this answer, where everything is real-valued, that is just the transpose. Rather than choosing one convention and sticking with it, I avoid the word "adjoint" altogether unless I spell out its meaning explicitly.