How does the Lanczos iteration find small eigenvalues?

Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)

I have seen it claimed in a couple of places that the classical Lanczos iteration (without shifts, applying the inverse, or anything fancy) yields good approximations to both the smallest and the largest eigenvalues of a large, sparse, self-adjoint linear operator, but this is very surprising to me. As a Krylov subspace method, I would expect it to be accurate on the large eigenvalues and inaccurate on the small ones, for the same reasons as the power method. Does anyone know how Lanczos accomplishes this? I have seen it work in practice, but I can't figure out why.
Unsurprisingly, Golub and Van Loan (Matrix Computations, 3rd ed.) have an answer. In Chapter 9 they present the following justification:
Suppose we have already generated the orthonormal vectors $\{q_1,\dots,q_{k-1}\}$, and ask which vector $q_k$ we should add in order to reduce $$ \min_{x\in\text{span}\{q_1,\dots,q_k\}\setminus\{0\}} r(x), $$ where $r$ is the Rayleigh quotient $$ r(x) = \frac{x^TAx}{x^Tx}. $$ Let $u$ be the vector in $\text{span}\{q_1,\dots,q_{k-1}\}$ that minimizes $r$, and consider the gradient of $r$ at $u$, which is given by $$ \nabla r(u) = \frac{2}{u^Tu}\left(Au - r(u)\,u\right). $$ If $q_k$ is chosen so that this gradient lies in $\text{span}\{q_1,\dots,q_k\}$, then the minimum of $r$ over the enlarged subspace is guaranteed not to increase, and the decrease is "locally optimal" in the sense that the new subspace contains the direction of steepest descent from $u$. Now observe that if $\text{span}\{q_1,\dots,q_{k-1}\}$ is the Krylov subspace $\mathcal{K}_{k-1}(A,q_1)$, then $u\in\mathcal{K}_{k-1}(A,q_1)$, so both $u$ and $Au$, and hence $\nabla r(u)$, lie in the next Krylov subspace $\mathcal{K}_k(A,q_1)$. The same argument applied to maximizing $r$ covers the largest eigenvalue, so this heuristic motivates the use of Krylov subspaces to find the smallest eigenvalues rather than just the largest ones.
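To see both steps concretely, here is a small NumPy sanity check (a sketch of my own on an arbitrary random symmetric matrix, not code from the book): it verifies the gradient formula against finite differences, and checks that for a $u$ drawn from $\mathcal{K}_{k-1}(A,q_1)$ the gradient indeed lies in $\mathcal{K}_k(A,q_1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                        # symmetrize

def r(x):
    return (x @ A @ x) / (x @ x)         # Rayleigh quotient

def grad_r(x):
    return (2.0 / (x @ x)) * (A @ x - r(x) * x)

# (1) gradient formula vs. central finite differences
x = rng.standard_normal(n)
eps = 1e-6
fd = np.array([(r(x + eps * e) - r(x - eps * e)) / (2 * eps)
               for e in np.eye(n)])
print("gradient error:", np.linalg.norm(fd - grad_r(x)))   # should be tiny

# (2) u in K_{k-1} = span{q1, A q1, ..., A^{k-2} q1}  =>  grad_r(u) in K_k
q1 = rng.standard_normal(n)
K = np.column_stack([np.linalg.matrix_power(A, j) @ q1 for j in range(k)])
u = K[:, :k - 1] @ rng.standard_normal(k - 1)   # arbitrary u in K_{k-1}
g = grad_r(u)
Q, _ = np.linalg.qr(K)                  # orthonormal basis of K_k
print("component of gradient outside K_k:",
      np.linalg.norm(g - Q @ (Q.T @ g)))        # should be ~0
```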
The preceding argument is informal, in the sense that it is not clear that this "local optimality" condition is good enough on its own. But much more rigorous arguments are presented subsequently in Golub and Van Loan (the Kaniel-Paige convergence theory), and ultimately a precise convergence analysis, with bounds at both ends of the spectrum, is possible.
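For what it's worth, the phenomenon itself is easy to reproduce. Below is a minimal sketch of the classical iteration (illustrative code of my own, not taken from the book), run on an arbitrary diagonal test matrix with a known spectrum; full reorthogonalization is included only to keep floating-point "ghost" Ritz values from muddying the comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
eigs = np.sort(rng.uniform(-10.0, 10.0, n))   # known spectrum
A = np.diag(eigs)                             # WLOG a diagonal test matrix

def lanczos(A, q1, m):
    """m steps of plain Lanczos; returns the tridiagonal coefficients.
    (Lucky breakdown, beta[j] == 0, is ignored in this sketch.)"""
    n = len(q1)
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    return alpha, beta[:-1]

m = 40
alpha, beta = lanczos(A, rng.standard_normal(n), m)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)              # Ritz values = eigenvalues of T
print(f"lambda_min: true {eigs[0]: .6f}, Ritz {ritz[0]: .6f}")
print(f"lambda_max: true {eigs[-1]: .6f}, Ritz {ritz[-1]: .6f}")
```

In runs like this one, the extreme Ritz values at both ends of the spectrum typically match the true extreme eigenvalues to several digits after only a few dozen steps, even though the iteration never applies a shift or an inverse.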
There may be other, more complete or more intuitive arguments that people know of, and I would be very interested to see them if that is the case.