Finding the second eigenvector using the Power Method

I am trying to find the second eigenvector using the Power Method, and I think I am going wrong somewhere with the normalization of values in the process. Here is the power-iteration code I am using:

import numpy as np

def eigenvalue(A, v):
    # Rayleigh quotient v^T A v (v is assumed to have unit length)
    Av = A.dot(v)
    return v.dot(Av)

def power_iteration(A):
    n, d = A.shape
    # start from a normalized constant vector
    v = np.ones(d) / np.sqrt(d)
    ev = eigenvalue(A, v)

    while True:
        # multiply by A and renormalize
        Av = A.dot(v)
        v_new = Av / np.linalg.norm(Av)
        ev_new = eigenvalue(A, v_new)
        # stop once the eigenvalue estimate stabilizes
        if np.abs(ev - ev_new) < 0.0001:
            break

        v = v_new
        ev = ev_new

    return ev_new, v_new

I use it as follows:

import numpy as np
A = np.array([[1, 4, 5], [6, 7, 3], [8, 9, 3]])

## obtain eigenvalues and eigenvectors using np.linalg.eig
from numpy import linalg as LA
w, v = LA.eig(A)
print(w)
print(v)

>>>[15.02797672 -3.8911694  -0.13680732]

>>>[[-0.41522987 -0.78917954  0.6074167 ]
 [-0.5740596   0.28489253 -0.68193416]
 [-0.7057193   0.54408814  0.40744418]]


# run the power method

lambdas, v = power_iteration(A)
print(lambdas)
print(v)

15.027995871437335
[0.41523392, 0.57405887, 0.70571751]

As you can see, the first eigenvalue and eigenvector match those obtained from np.linalg.eig. To compute the second eigenvector, I deflate the matrix:

$B = A - \lambda_1 v_1 v_1^T$, where $\lambda_1 \approx 15.028$ and $v_1 = [0.41523392, 0.57405887, 0.70571751]^T$.
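As an aside, a direct NumPy transcription of this rank-one update might look like the sketch below (reusing the lambdas and v computed above). Note that forming $v_1 v_1^T$ needs np.outer, because for a 1-D NumPy array v * np.transpose(v) is an elementwise product; np.transpose does nothing to a 1-D array.

# sketch of B = A - lambda_1 * v1 v1^T, using lambdas and v from above;
# v returned by power_iteration is already unit length, so no extra
# normalization is required
B = A - lambdas * np.outer(v, v)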

The following code finds the next eigenvalue and eigenvector:

## find the next eigen vector
A_new = A - (lambdas*v*np.transpose(v))/(np.linalg.norm(v, ord=1)**2)
lambdas2,v2 = power_iteration(A_new)
print(lambdas2)
print(v2)

>>>-14.432168780436967
>>>[0.58703865 0.59846765 0.5451808 ]

As you can see, the second eigenvector is not correct.

I would be grateful for any suggestions. I still suspect I am making a mistake with $B = A - \lambda_1 v_1 v_1^T$. I am following the method described here: powerMethod

BEST ANSWER

Your $A$ is not normal (I checked in Matlab), which means it is not orthogonally diagonalizable, which in turn means that subtracting off $\lambda_1 \frac{v_1 v_1^T}{\| v_1 \|^2}$ will change the other eigenvalues/eigenvectors. This is straightforward to see:
$$\left( A - \lambda_1 \frac{v_1 v_1^T}{\| v_1 \|^2} \right) v_2 = \lambda_2 v_2 - \lambda_1 \frac{v_1^T v_2}{\| v_1 \|^2} v_1,$$
which equals $\lambda_2 v_2$ only if $v_1$ and $v_2$ are orthogonal or $\lambda_1 = 0$.
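A quick numerical check of both points, with the $A$ from the question:

import numpy as np

A = np.array([[1, 4, 5], [6, 7, 3], [8, 9, 3]])

# A is normal iff A A^T = A^T A; here the difference is far from zero
print(np.linalg.norm(A @ A.T - A.T @ A))

# so the eigenvectors need not be orthogonal: the first two eigenvectors
# returned by np.linalg.eig have a clearly nonzero dot product (about -0.22)
w, V = np.linalg.eig(A)
print(V[:, 0] @ V[:, 1])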

By the way, note that the article you linked to assumes right at the top that $A$ is symmetric. Symmetric matrices are orthogonally diagonalizable, so the deflation method works in that case.
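To see the contrast, here is a sketch that reuses power_iteration from the question on a small symmetric test matrix I made up (eigenvalues $6$ and $1$). Because the eigenvectors are orthogonal here, the same deflation recovers the second eigenpair correctly:

import numpy as np

# assumes power_iteration from the question is already defined

# hypothetical symmetric test matrix with eigenvalues 6 and 1
S = np.array([[5.0, 2.0],
              [2.0, 2.0]])

lam1, u1 = power_iteration(S)           # dominant eigenpair, lam1 ≈ 6
S_defl = S - lam1 * np.outer(u1, u1)    # deflate; u1 is already unit length
lam2, u2 = power_iteration(S_defl)      # next eigenpair of S, lam2 ≈ 1

print(lam1, u1)
print(lam2, u2)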