Using eigenvectors for solving maximization problem


I am given the following problem:

$$A=\begin{bmatrix} 4 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 3 & 2 \end{bmatrix}$$

Find $$\max_x\frac{|(Ax,x)|}{(x,x)}$$, where $(.,.)$ is a dot product of vectors and the maximization is performed over all $x=\begin{bmatrix}x_1 & x_2 & x_3\end{bmatrix}^T \in \mathbb{R}^3$, such that $\displaystyle\sum_{i=1}^{3}x_i=0$

I have found the eigenvectors for $A$ and they happen to match the sum criterion:

$$E(\lambda_1)=\text{span}(\begin{bmatrix}-2 & 1 & 1\end{bmatrix}^T)$$ for $\lambda_1=3$ and $$E(\lambda_2)=\text{span}(\begin{bmatrix}0 & -1 & 1\end{bmatrix}^T)$$ for $\lambda_2=-1$.

(For $\lambda_3=6$ there are no eigenvectors).

Can the above eigenvectors and eigenvalues be used for solving this maximization problem?

There are 2 answers below.

---
You wrote "for $\lambda_3=6$ there are no eigenvectors". This is not true! We have $A\begin{bmatrix}1 & 1 & 1\end{bmatrix}^T = \begin{bmatrix}6 & 6 & 6\end{bmatrix}^T$, so $\begin{bmatrix}1 & 1 & 1\end{bmatrix}^T$ is an eigenvector for $\lambda_3=6$.

Since $A$ is symmetric, over all of $\mathbb{R}^3$ we have $\max_x\frac{|(Ax,x)|}{(x,x)}= \max \{|\lambda| : \lambda \in \sigma(A)\} = 6$, where $\sigma(A)$ denotes the set of eigenvalues of $A$, and this maximum is attained at $x=\begin{bmatrix}1 & 1 & 1\end{bmatrix}^T$.

However, that vector violates your constraint: $\sum_{i=1}^{3}x_i=0$ says exactly that $x$ is orthogonal to $\begin{bmatrix}1 & 1 & 1\end{bmatrix}^T$. The constraint plane is spanned by the eigenvectors for $\lambda_1=3$ and $\lambda_2=-1$, so on it

$$\max_x\frac{|(Ax,x)|}{(x,x)}= \max\{|3|,|-1|\} = 3.$$
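As a numerical sanity check (a sketch I'm adding, using numpy): the constraint plane $\sum_i x_i = 0$ is the orthogonal complement of $(1,1,1)^T$, so restricting $A$ to an orthonormal basis of that plane and diagonalizing the restricted $2\times 2$ matrix gives the constrained extrema of the quotient.

```python
import numpy as np

A = np.array([[4, 1, 1],
              [1, 2, 3],
              [1, 3, 2]], dtype=float)

# Orthonormal basis of the constraint plane x1 + x2 + x3 = 0
u = np.array([-2.0, 1.0, 1.0]) / np.sqrt(6.0)
w = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)
B = np.column_stack([u, w])      # 3x2; columns span the plane

restricted = B.T @ A @ B         # A restricted to the plane (2x2)
eigvals = np.linalg.eigvalsh(restricted)
print(np.max(np.abs(eigvals)))   # ≈ 3.0
```

Since $u$ and $w$ happen to be the eigenvectors for $\lambda=3$ and $\lambda=-1$, the restricted matrix is (up to rounding) $\operatorname{diag}(3,-1)$.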

---

> Can the above eigenvectors and eigenvalues be used for solving this maximization problem?

For reference, the form you're looking at is the Rayleigh quotient.

There is a theorem about it, Theorem 55:

$$ \max_x \frac{\langle x, Ax\rangle }{\| x\|^{2}} = \lambda_{n}, \text{ the max eigenvalue of } A, \text{ attained when } x = v_{n}$$

$$ \min_x \frac{\langle x, Ax\rangle }{\| x\|^{2}} = \lambda_{1}, \text{ the min eigenvalue of } A, \text{ attained when } x = v_{1}$$

An important consequence for numerical calculations: the maximum eigenvalue of $A$ can be found by solving a maximization problem, and the minimum eigenvalue by a minimization problem.
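To illustrate that consequence (a sketch I'm adding, not part of the original answer): power iteration finds the dominant eigenvalue by repeatedly applying $A$ and renormalizing, which implicitly maximizes the Rayleigh quotient.

```python
import numpy as np

A = np.array([[4, 1, 1],
              [1, 2, 3],
              [1, 3, 2]], dtype=float)

# Power iteration: repeatedly applying A amplifies the component of x
# along the dominant eigenvector, so the Rayleigh quotient converges
# to the eigenvalue of largest magnitude (here 6).
x = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)

print(x @ A @ x)   # ≈ 6
```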

I'm not going to do this by hand, but I have Python.

    import numpy as np

    your_matrix = np.array([[4, 1, 1],
                            [1, 2, 3],
                            [1, 3, 2]], dtype=float)
    w, v = np.linalg.eig(your_matrix)


    def your_rayleigh(A, v, k):
        # Rayleigh quotient <x, Ax> / (x, x) for the k-th eigenvector
        x = v[:, k]
        return np.dot(x, A @ x) / np.dot(x, x)


    my_eigenvalues = np.array([your_rayleigh(your_matrix, v, k)
                               for k in range(3)])
    print(my_eigenvalues)

which prints

    [ 3.  6. -1.]

As you can see, the quotient attains each eigenvalue exactly at the corresponding eigenvector, and its max and min over all vectors are the extreme eigenvalues. One convenient thing about the eigendecomposition on the computer is that each returned eigenvector has $\|x\| =1$ (and, since $A$ is symmetric, they are mutually orthogonal), so the denominator $(x,x)$ is just $1$.
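To confirm that normalization claim (a quick check I'm adding): `np.linalg.eig` returns eigenvectors as unit-length columns of `v`, and for this symmetric matrix they come out orthogonal.

```python
import numpy as np

A = np.array([[4, 1, 1],
              [1, 2, 3],
              [1, 3, 2]], dtype=float)
w, v = np.linalg.eig(A)

# Each eigenvector column has unit length, and for symmetric A the
# columns are mutually orthogonal, so v.T @ v is the identity.
print(np.linalg.norm(v, axis=0))        # ≈ [1. 1. 1.]
print(np.allclose(v.T @ v, np.eye(3)))  # True
```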