Determine the eigenvectors by dividing by the vector norm


I have a matrix

[[1, 2]
 [3, 4]]

I had to create a formula in Python to determine the eigenvalues and eigenvectors of the matrix. Determining the eigenvalues was straightforward, but getting the vectors was tricky.


    import numpy as np

    matrix = np.array([[1, 2], [3, 4]])

    for value in np.linalg.eigvals(matrix):
        # A - lambda * I
        diff = np.subtract(matrix, value * np.identity(2))
        # a vector orthogonal to the first row of diff
        vector = np.array([-diff[0, 1], diff[0, 0]])
        # scale to unit length
        eigen = vector / np.linalg.norm(vector)

This implementation agrees with NumPy's numpy.linalg.eig function; however, I cannot find any explanation of why it works.
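To illustrate the comparison, here is a minimal check (assuming the 2x2 matrix above) that the normalized vector matches the corresponding column of numpy.linalg.eig, up to sign:

```python
import numpy as np

matrix = np.array([[1.0, 2.0], [3.0, 4.0]])
values, vectors = np.linalg.eig(matrix)

for i, value in enumerate(values):
    diff = matrix - value * np.identity(2)
    vector = np.array([-diff[0, 1], diff[0, 0]])
    eigen = vector / np.linalg.norm(vector)
    ref = vectors[:, i]
    # eigenvectors are only defined up to sign, so compare |cosine|
    assert np.isclose(abs(np.dot(eigen, ref)), 1.0)
```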

Why do you build the vector from -diff[0, 1] and diff[0, 0]? Why not the reverse? And why is the first term negative?

Following that, why does dividing by the norm give the eigenvector?

I have searched for solutions and formulas online, but I haven't been able to find anything related to this, so I would be grateful for any explanation.

Best Answer

I think your solution is valid for a 2x2 matrix, but only for 2x2.

For a 2D vector $v=[a_1,a_2]$, you can get a vector orthogonal to it by simply swapping the elements and negating one of them: $v^\perp=[-a_2,a_1]$.
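A quick numerical check of this perpendicularity trick, using an arbitrary example vector:

```python
import numpy as np

v = np.array([3.0, 4.0])
v_perp = np.array([-v[1], v[0]])  # swap entries and negate the first
# the dot product of orthogonal vectors is zero
assert np.isclose(np.dot(v, v_perp), 0.0)
```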

Your diff matrix is the $A-\lambda I$ matrix, which maps the eigenvector $v$ to $0$: $$(A-\lambda I)v=0.$$ Thus you just need to find a vector $v$ orthogonal to all the row vectors of $A-\lambda I$.
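This can be verified numerically: for each eigenvalue of the question's matrix, the vector built from the first row of diff is annihilated by $A-\lambda I$.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
for lam in np.linalg.eigvals(A):
    diff = A - lam * np.identity(2)
    v = np.array([-diff[0, 1], diff[0, 0]])  # orthogonal to the first row
    # (A - lambda*I) v should be the zero vector
    assert np.allclose(diff @ v, 0.0)
```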

For a 2x2 matrix, $\operatorname{rank}(A-\lambda I)\leq 1$, so you only need to consider one row, and that's it.
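A sketch of this rank observation, checked with numpy.linalg.matrix_rank on the question's matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
for lam in np.linalg.eigvals(A):
    diff = A - lam * np.identity(2)
    # A - lambda*I is singular, so its rank is at most 1:
    # both rows are proportional, and one row determines the nullspace
    assert np.linalg.matrix_rank(diff) <= 1
```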


BTW, if your matrix has a 2D eigenspace for a repeated eigenvalue, then your method won't work, since $A-\lambda I=0$ and the zero vector cannot be normalized.
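For example, taking $A=3I$ (a matrix chosen here purely to illustrate the failure case), diff is the zero matrix and the vector to normalize has zero norm:

```python
import numpy as np

# repeated eigenvalue 3 with a 2D eigenspace: A = 3*I
A = 3.0 * np.identity(2)
lam = 3.0
diff = A - lam * np.identity(2)            # the zero matrix
vector = np.array([-diff[0, 1], diff[0, 0]])
norm = np.linalg.norm(vector)               # 0.0 -- dividing by this is undefined
assert norm == 0.0
```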