I came across this in a textbook, and it is not the usual definition of generalized eigenvectors I've seen.
The generalised eigenvectors of a pair of matrices $A$ and $B$ are the vectors $x$ satisfying $Ax=\lambda Bx$, where $\lambda$ is the corresponding generalised eigenvalue.
The theorem goes:
If $A$ and $B$ are symmetric and $B$ is a positive-definite matrix, then the generalised eigenvalues are real, and eigenvectors $v_i$ and $v_j$ with distinct eigenvalues are B-orthogonal: $v_i^T B v_j = 0$.
Now my question is why does B have to be positive definite for this to work?
When trying to prove this, I assumed $\mu, \lambda$ are distinct eigenvalues corresponding to eigenvectors $x$, $y$. Then $\lambda\langle Bx,y\rangle = \langle Ax,y\rangle = \langle x,Ay\rangle$ (using the symmetry of $A$) $= \mu\langle x,By\rangle = \mu\langle Bx,y\rangle$ (using the symmetry of $B$). Then $(\lambda - \mu)\langle Bx,y\rangle = 0$, where $\lambda - \mu \neq 0$.
I feel like this proves it without using that $B$ is positive definite. This textbook only deals with real vector spaces.
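As a sanity check of the theorem itself, the B-orthogonality can be verified numerically. This is a sketch using `scipy.linalg.eigh`, which solves the symmetric-definite generalised problem $Av = \lambda Bv$; the matrices $A$ and $B$ below are my own made-up example, not from the textbook:

```python
import numpy as np
from scipy.linalg import eigh

# Example symmetric A and positive-definite B (chosen for illustration)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 0.5], [0.5, 1.0]])

# eigh(A, B) solves A v = lambda B v; for symmetric A and
# positive-definite B the eigenvalues are real and the returned
# eigenvectors are B-orthonormal, i.e. V^T B V = I
w, V = eigh(A, B)

print(np.isrealobj(w))                      # True: eigenvalues are real
print(np.allclose(V.T @ B @ V, np.eye(2)))  # True: B-orthogonal vectors
```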
If $B$ is not positive definite, then the generalized eigenvalues may not be real, for example: if $A = \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ and $B = \left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$, then since $B$ is invertible, the generalized eigenvalues are the eigenvalues of $B^{-1}A$, i.e. $\lambda = \pm i$.
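This counterexample is easy to check numerically, since the generalised eigenvalues are just the ordinary eigenvalues of $B^{-1}A$:

```python
import numpy as np

# The counterexample above: symmetric A, symmetric but indefinite B
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, -1.0]])

# B is invertible, so the generalised eigenvalues of (A, B)
# are the eigenvalues of B^{-1} A = [[0, 1], [-1, 0]]
w = np.linalg.eigvals(np.linalg.inv(B) @ A)

print(np.sort_complex(w))  # +i and -i: the eigenvalues are not real
```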
Other things also go wrong -- when $B$ is positive definite, you'll get a basis of generalized eigenvectors, but this may not be true otherwise, e.g. if $A = \left(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right)$ and $B = \left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$, the only generalized eigenvector is $v = \left(\begin{smallmatrix}1\\-1\end{smallmatrix}\right)$.
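The defect in this second example can also be seen directly: $B^{-1}A$ is nilpotent, so both eigenvalues are $0$ but the eigenspace is only one-dimensional. A quick check:

```python
import numpy as np

# Second counterexample: the pencil (A, B) has no eigenbasis
A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, -1.0]])

M = np.linalg.inv(B) @ A  # [[1, 1], [-1, -1]], nilpotent: M @ M = 0

# rank(M) = 1, so ker(M) is one-dimensional, spanned by (1, -1);
# that single vector is the only generalized eigenvector
print(np.linalg.matrix_rank(M))                    # 1
print(np.allclose(M @ np.array([1.0, -1.0]), 0.0)) # True
```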