Using finite difference for a simple second order BVP


I'm trying to solve this BVP using the finite difference method: $$u_{xx}(x) + \sin(x) = 0$$ with $u(0) = u(2\pi) = 0$. When I solve it in Python, the curve has the sinusoidal shape I expect, but its magnitude is much too small: I expect a maximum and minimum of $1$ and $-1$, but they come out around $\pm 0.04$.

Here's my code

import numpy as np
import math
import matplotlib.pyplot as plt

N = 100 #number of nodes
L = 2.0*math.pi #length of x-axis
x = np.linspace(0.0,L,N)
h = L/N #step length
u = np.zeros(N) #solution vector


u[0] = 0.0
u[N-1] = 0.0

#solve
for j in range(10):
    for i in range(1,N-1):
        u[i] = (u[i-1]+u[i+1]+h**2*math.sin(x[i]))/2.0

plt.plot(x,u)
plt.show()

and here is the curve I get (a sinusoid with the right shape but tiny amplitude).

So what causes the incorrect magnitude? I should get a regular $\sin(x)$ curve.


BEST ANSWER

You haven't written down the discrete equation that you are trying to solve, which makes it hard to check. I think the discrete equation you want to use is probably

$$\frac{u_{n+1}-2u_n+u_{n-1}}{h^2}=-\sin(x_n)$$

for $n=1,2,\dots,N-1$ with $u_0=u_N=0$. This can be done with just a linear system solver easily enough, but if you want to use an iterative method, you can do, say, the Jacobi method. The iteration for the Jacobi method is

$$u_n^{(k+1)}=\frac{h^2 \sin(x_n) + u_{n-1}^{(k)} + u_{n+1}^{(k)}}{2}$$

which is not quite the same as what you implemented. What you did is actually Gauss-Seidel:

$$u_n^{(k+1)}=\frac{h^2 \sin(x_n) + u_{n-1}^{(k+1)} + u_{n+1}^{(k)}}{2}$$

which is usually a little bit faster than Jacobi.
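To see the difference concretely, here is a minimal sketch of one sweep of each method on the same grid as the question (the function names are mine, and I use the spacing $h = L/(N-1)$ that actually matches `np.linspace`):

```python
import numpy as np

N = 100
L = 2.0 * np.pi
x = np.linspace(0.0, L, N)
h = L / (N - 1)  # grid spacing consistent with linspace

def jacobi_sweep(u):
    """One Jacobi sweep: every update uses only values from the previous sweep."""
    new = u.copy()
    new[1:-1] = (u[:-2] + u[2:] + h**2 * np.sin(x[1:-1])) / 2.0
    return new

def gauss_seidel_sweep(u):
    """One Gauss-Seidel sweep: updates in place, so u[i-1] is already the new value."""
    u = u.copy()
    for i in range(1, N - 1):
        u[i] = (u[i-1] + u[i+1] + h**2 * np.sin(x[i])) / 2.0
    return u
```

Running many sweeps of each from a zero initial guess shows Gauss-Seidel pulling ahead of Jacobi, as expected.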

But either iteration converges extremely slowly here. In the worst case, the eigenvalue of the Jacobi iteration matrix with the largest magnitude exceeds $0.995$ in absolute value, so Jacobi needs about $\frac{\log(\epsilon)}{\log(0.995)} \approx 200 \log(1/\epsilon)$ iterations to multiply the error by a factor of $\epsilon$. Gauss-Seidel is somewhat faster (for this tridiagonal system, its spectral radius is the square of Jacobi's, i.e., roughly twice the asymptotic rate). Moreover, the error in your all-zeros initial guess is essentially this worst-case scenario for Jacobi.
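The iteration-count estimate is a one-liner to check; this is a sketch under the answer's convention of grid points $x_n = nh$ for $n = 0, \dots, N$, for which the largest Jacobi eigenvalue of the 1-D Laplacian is $\cos(\pi/N)$:

```python
import numpy as np

N = 100
# worst-case eigenvalue of the Jacobi iteration matrix for the 1-D Laplacian
# on the grid x_n = n*h, n = 0, ..., N (interior unknowns n = 1, ..., N-1)
rho = np.cos(np.pi / N)

# iterations needed to shrink the error by a factor eps: rho**k = eps
eps = 1e-3
iters = np.log(eps) / np.log(rho)
```

With $N = 100$ this gives $\rho \approx 0.9995$ and on the order of $10^4$ iterations just to reduce the error by a factor of $10^{-3}$.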

ANSWER

10 iterations is far too few for the Jacobi method (or, in general, any stationary iterative method) to converge to any reasonable accuracy.

In fact, a quick experiment, printing the result every 200 iterations, gives
$$ \begin{array}{c||c|c} \text{iteration}&\text{max error }(+)&\text{max error }(-)\\\hline 200 & 0.461371 & -0.455683 \\\hline 400 & 0.223509 & -0.208337 \\\hline 600 & 0.116791 & -0.098056 \\\hline 800 & 0.068118 & -0.049773 \\\hline 1000 & 0.045499 & -0.029056 \\\hline 1200 & 0.034605 & -0.020533 \\\hline 1400 & 0.029090 & -0.017293 \\\hline 1600 & 0.026095 & -0.016330 \\\hline 1800 & 0.024342 & -0.016295 \\\hline 2000 & 0.023215 & -0.016601 \\\hline &\dots&\\\hline 10000 & 0.019570 & -0.019567 \\\hline &\dots&\\\hline 20000 & 0.019568 & -0.019568 \end{array} $$
which shows how slowly the iteration converges.
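An experiment like this is easy to reproduce with the question's exact update rule; the following is my own re-run sketch (the `errors` dictionary is just bookkeeping, not part of the original code), stopped at 2000 sweeps:

```python
import numpy as np

N = 100
L = 2.0 * np.pi
x = np.linspace(0.0, L, N)
h = L / N            # same step as in the question
u = np.zeros(N)

errors = {}
for k in range(1, 2001):
    # the question's sweep (Gauss-Seidel: u[i-1] is already updated)
    for i in range(1, N - 1):
        u[i] = (u[i-1] + u[i+1] + h**2 * np.sin(x[i])) / 2.0
    if k % 200 == 0:
        errors[k] = np.max(np.abs(u - np.sin(x)))
```

The recorded errors shrink slowly, in line with the table above, and level off at a small residual coming from the discretization.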

ANSWER

The exact solution is $u(x)=\sin x$, so you have good reason to be suspicious of your result. As a side note, it is much better to solve this kind of BVP directly: the numerical solution $u=(u_1, \cdots, u_n)$ is the solution of a tridiagonal linear system. That said, your fixed-point iteration should work too; you have probably just not used enough iterations.
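For completeness, here is a sketch of that direct solve, assembling the tridiagonal system densely with NumPy for clarity (for large $N$ a banded solver such as `scipy.linalg.solve_banded` would be the idiomatic choice):

```python
import numpy as np

N = 100
L = 2.0 * np.pi
x = np.linspace(0.0, L, N)
h = L / (N - 1)                      # spacing consistent with linspace

# interior unknowns u_1, ..., u_{N-2}; boundary values u_0 = u_{N-1} = 0
m = N - 2
A = np.zeros((m, m))
np.fill_diagonal(A, -2.0)
A[np.arange(m - 1), np.arange(1, m)] = 1.0   # superdiagonal
A[np.arange(1, m), np.arange(m - 1)] = 1.0   # subdiagonal

# (u_{i+1} - 2 u_i + u_{i-1}) / h^2 = -sin(x_i)  =>  A u = -h^2 sin(x)
b = -h**2 * np.sin(x[1:-1])

u = np.zeros(N)
u[1:-1] = np.linalg.solve(A, b)
```

One direct solve recovers $\sin(x)$ to second-order accuracy, with no iteration-count tuning at all.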