Eigenvectors of the 1D Laplacian in Python: why do I get different answers?


Consider this code

import numpy as np
from numpy import pi
from matplotlib import pyplot as plt
import scipy.linalg as linalg

## Domain
L = 5
M = 50
dx = L/(M-1)
xx = np.linspace(0, L, M)


## Diff matrix
row0 = np.zeros(M)
row0[[-1, 0, 1]] = [1, -2, 1]
row0 /= dx**2
D = linalg.circulant(row0)

# Dirichlet BCs -- implicitly set boundaries to 0
D = D[1:-1, 1:-1]

# Use negative laplacian
D *= -1


## Eigenvalue decomposition
ews, evs = linalg.eigh(D)

# ews, evs = linalg.eig(D)
# ews = ews.real


## Plot
fig, (ax1, ax2) = plt.subplots(nrows=2)
modulation = np.cos(pi/dx * xx[1:-1])  # [-1, +1, -1, +1, ...]
for ev in evs.T[:3]:
    ax1.plot(xx[1:-1], ev)
    ax2.plot(xx[1:-1], ev*modulation)
ax1.set_title("Unmodulated")
ax2.set_title("Modulated by [-1, +1, -1, +1, ...]")
fig.tight_layout()
plt.show(block=False)

with the following output

[figure: two stacked panels, "Unmodulated" and "Modulated by [-1, +1, -1, +1, ...]", each showing the first three eigenvectors]

Now, if I don't do D *= -1 then the contents of the two panels are swapped (i.e. the modulation becomes necessary to recover the true eigenvectors). But theoretically, the eigenvectors should not have changed. So why do I get different answers?

Note1: both the modulated and unmodulated eigenvectors pass validation, i.e. D @ ev agrees with λ*ev for the corresponding eigenvalue λ. But the eigenvectors are supposed to be unique (up to scaling, since the eigenvalues here are distinct), so what gives?
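For concreteness, here is a minimal sketch of that validation, rebuilding the same matrix as in the code above and checking the residual of every eigenpair:

```python
import numpy as np
import scipy.linalg as linalg

# Rebuild the negative 1D Laplacian with Dirichlet BCs, as in the question
M, L = 50, 5
dx = L / (M - 1)
row0 = np.zeros(M)
row0[[-1, 0, 1]] = [1, -2, 1]
D = -linalg.circulant(row0 / dx**2)[1:-1, 1:-1]

ews, evs = linalg.eigh(D)
# The residual check passes for every eigenpair, whatever their ordering:
# D @ v must equal lam * v for the matching eigenvalue lam, not v itself.
for lam, v in zip(ews, evs.T):
    assert np.allclose(D @ v, lam * v)
```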

Note2: The issue seems related to aliasing, since the modulation is by a high-frequency signal. But I can't fathom what that has to do with eigenvalue problems.

Note3: If I uncomment the use of the (general) eig function, then it does not matter whether D *= -1 was done or not (the eigenvectors always require the modulation). Isn't this wide discrepancy in the output disconcerting for the "stability" of these routines? Sure, as iterative methods they depend on the initial guesses, but the problem seems well-posed and simple to me, yet the output varies greatly.

Note4: this is not the same issue as here, since I take care to transpose the eigenvector matrix.

Note5: by modulation I mean that the original signal/vector/function gets (pointwise) multiplied by the other, as seen in the code (ev*modulation)


BEST ANSWER

As pointed out in the comments, I had simply forgotten that I needed to sort the eigenvalues and vectors, whose order otherwise depends on the sign of the matrix operator and the choice of numerical routine.
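A minimal sketch of that fix, assuming one wants the smoothest mode first regardless of the sign of D: sort by eigenvalue magnitude before plotting.

```python
import numpy as np
import scipy.linalg as linalg

# Same matrix as in the question, without the sign flip
M, L = 50, 5
dx = L / (M - 1)
row0 = np.zeros(M)
row0[[-1, 0, 1]] = [1, -2, 1]
D = linalg.circulant(row0 / dx**2)[1:-1, 1:-1]  # +Laplacian, negative definite

ews, evs = linalg.eig(D)  # general routine: no ordering guarantee
ews = ews.real
# Sort by |eigenvalue| so the mode ordering is the same for +D and -D:
# smallest magnitude = smoothest (lowest-frequency) eigenvector first.
idx = np.argsort(np.abs(ews))
ews, evs = ews[idx], evs[:, idx]
```

Note that eigh already returns eigenvalues in ascending order, which is why the negated (positive-definite) matrix happened to give the low modes first without any sorting.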