Implicit Euler method yields incorrect output - in depth and simple


We are given the system of PDEs $\begin{pmatrix}f_t \\ g_t\end{pmatrix} = i\begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}\begin{pmatrix}f_{xx}\\ g_{xx}\end{pmatrix}+i\begin{pmatrix}3 & -1 \\ -1 & 3\end{pmatrix}\begin{pmatrix}f \\ g\end{pmatrix}$

where $f(t,x) = f(t, x+1)$, $g(t,x) = g(t,x+1)$ (periodic boundary conditions), with initial data $f(0, x) = \cos(2\pi x)$ and $g(0,x) = -i\sin(2\pi x)$.

We wish to solve it using backward Euler (also known as the implicit Euler method) and compare the result to the exact solution as we refine the grid, taking $N = 16, 32, 64, \dots$ points in space.

What I did:

First I found the exact solution. I have checked it many times, and it seems to me the solution is $\begin{pmatrix}f(t,x) \\ g(t,x)\end{pmatrix} = \begin{pmatrix}\frac{e^{-2\pi i x}\cdot e^{(2-4\pi^2)it}+e^{2\pi i x}\cdot e^{(4+4\pi^2)it}}{2}\\ \frac{e^{-2\pi i x}\cdot e^{(2-4\pi^2)it}-e^{2\pi i x}\cdot e^{(4+4\pi^2)it}}{2}\end{pmatrix}$
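To convince myself, I also verified this numerically by plugging the pair into the first equation with small finite differences (the step size and tolerance here are my own choices):

```python
import numpy as np

tau = 2 * np.pi
tau2 = tau * tau

def f(t, x):
    return 0.5 * (np.exp(-1j * tau * x) * np.exp(1j * (2 - tau2) * t)
                  + np.exp(1j * tau * x) * np.exp(1j * (4 + tau2) * t))

def g(t, x):
    return 0.5 * (np.exp(-1j * tau * x) * np.exp(1j * (2 - tau2) * t)
                  - np.exp(1j * tau * x) * np.exp(1j * (4 + tau2) * t))

t0, x0, h = 0.3, 0.7, 1e-5
f_t = (f(t0 + h, x0) - f(t0 - h, x0)) / (2 * h)                 # central difference in t
g_xx = (g(t0, x0 + h) - 2 * g(t0, x0) + g(t0, x0 - h)) / h**2   # second difference in x
residual = f_t - (1j * g_xx + 3j * f(t0, x0) - 1j * g(t0, x0))
print(abs(residual))  # small: finite-difference error only
```

The residual is a few times $10^{-6}$, consistent with the truncation and round-off error of the differences, so I trust the formula.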

With that done, I went on to perform the implicit Euler method. On the grid points $(t_j, x_k)$ we approximate $f_t(t_j, x_k) =\frac{f(t_j,x_k)-f(t_{j-1},x_k)}{\Delta t}$ (and likewise for $g_t$), and the second derivative by $g_{xx}(t_j, x_k) = \frac{g(t_j,x_{k+1})-2g(t_j, x_k)+g(t_j,x_{k-1})}{\Delta x^2}$ (and likewise for $f_{xx}$).
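As a side note, the periodic second-difference stencil can be sanity-checked against a known function with `np.roll` (this snippet is only a check of the stencil, not part of the scheme):

```python
import numpy as np

def second_diff_periodic(u, dx):
    # (u_{k+1} - 2 u_k + u_{k-1}) / dx^2 with periodic wrap-around
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

N = 256
x = np.arange(N) / N
u = np.cos(2 * np.pi * x)
approx = second_diff_periodic(u, 1.0 / N)
exact = -(2 * np.pi) ** 2 * np.cos(2 * np.pi * x)
print(np.max(np.abs(approx - exact)))  # O(dx^2) error
```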

Thus, our first equation, $f_t = ig_{xx} + 3if-ig$ becomes

$\frac{f(t_j,x_k)-f(t_{j-1},x_k)}{\Delta t}=i\frac{g(t_j,x_{k+1})-2g(t_j, x_k)+g(t_j,x_{k-1})}{\Delta x^2} +3if(t_j,x_k)-ig(t_j,x_k)$

Isolating the $f(t_{j-1},x_k)$ we get:

$(1-3i\Delta t)f(t_j, x_k)-\frac{\Delta t}{\Delta x^2}ig(t_j,x_{k-1})+\left(\frac{2\Delta t}{\Delta x^2}i+i\Delta t\right)g(t_j,x_k)-\frac{\Delta t}{\Delta x^2}ig(t_j,x_{k+1})=f(t_{j-1}, x_k)$

And of course, if we take into account the $g_t = if_{xx}+3ig-if$ we similarly get:

$(1-3i\Delta t)g(t_j, x_k)-\frac{\Delta t}{\Delta x^2}if(t_j,x_{k-1})+\left(\frac{2\Delta t}{\Delta x^2}i+i\Delta t\right)f(t_j,x_k)-\frac{\Delta t}{\Delta x^2}if(t_j,x_{k+1})=g(t_{j-1}, x_k)$

Hence, if for instance $0 \leq k \leq 3$, then this defines a system of $8$ equations with $8$ unknown variables:

[Image of the resulting $8 \times 8$ linear system omitted.]

Since this is a linear system, it can be written as $Ax = b$, so we can invert $A$ once and solve for $f, g$ at each time $t_j$. Thus the question really boils down to finding $A$.
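As a cross-check on my hand-written assembly in the code below, the same $A$ can be built in block form: the diagonal blocks are $(1-3i\Delta t)I$, and the off-diagonal blocks are the periodic tridiagonal matrix with $\left(\frac{2\Delta t}{\Delta x^2}+\Delta t\right)i$ on the diagonal and $-\frac{\Delta t}{\Delta x^2}i$ on the (wrapped) off-diagonals. A sketch, where the name `build_A` is mine:

```python
import numpy as np

def build_A(N, dt, dx):
    # Block-form assembly of the backward Euler matrix derived above.
    I = np.eye(N)
    S = np.roll(I, 1, axis=1)          # shift matrix: (S @ u)[k] = u[k+1], with wrap
    B = 1j * ((2 * dt / dx**2 + dt) * I - (dt / dx**2) * (S + S.T))
    M = (1 - 3j * dt) * I              # coefficient of the "own" unknown
    return np.block([[M, B], [B, M]])

N, dt = 4, 0.25
dx = 1.0 / N
A = build_A(N, dt, dx)
print(A.shape)  # (8, 8)
```

With $A$ in hand, calling `np.linalg.solve(A, b)` at each step is usually preferred over forming the inverse explicitly, although for a fixed $A$ reused many times a precomputed inverse gives the same answer up to round-off.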

I wrote Python code that does exactly this:

import numpy as np

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0,1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

for l in range(2, 12):
    N = 2 ** l #number of grid points
    dx = 1.0 / N #space between grid points
    dx2 = dx * dx
    dt = dx #time step
    t_final = 1
    approximate_f = np.zeros((N, 1), dtype = complex)
    approximate_g = np.zeros((N, 1), dtype = complex)

    #Insert initial conditions
    for k in range(N):
        approximate_f[k, 0] = np.cos(tau * k * dx)
        approximate_g[k, 0] = -i * np.sin(tau * k * dx)

    #Create coefficient matrix
    A = np.zeros((2 * N, 2 * N), dtype = complex)

    #First row is special
    A[0, 0] = 1 - (3 * dt) * i
    A[0, N] = ((2 * dt / dx2) + dt) * i
    A[0, N + 1] = (-dt / dx2) * i
    A[0, -1] = (-dt / dx2) * i

    #Last row is special
    A[N - 1, N - 1] = 1 - (3 * dt) * i
    A[N - 1, N] = (-dt / dx2) * i
    A[N - 1, -2] = (-dt / dx2) * i
    A[N - 1, -1] = ((2 * dt / dx2) + dt) * i

    #middle
    for k in range(1, N - 1):
        A[k, k] = 1 - (3 * dt) * i
        A[k, k + N - 1] = (-dt / dx2) * i
        A[k, k + N] = ((2 * dt / dx2) + dt) * i
        A[k, k + N + 1] = (-dt / dx2) * i

    #Bottom half
    A[N :, :N] = A[:N, N:]
    A[N:, N:] = A[:N, :N]

    Ainv = np.linalg.inv(A)

    #Advance through time
    time = 0
    while time < t_final:
        b = np.concatenate((approximate_f, approximate_g), axis = 0)
        x = np.dot(Ainv, b) #Solve Ax = b
        approximate_f = x[:N]
        approximate_g = x[N:]
        time += dt
    approximate_solution = np.concatenate((approximate_f, approximate_g), axis=0)

    #Calculate the actual solution
    actual_f = np.zeros((N, 1), dtype = complex)
    actual_g = np.zeros((N, 1), dtype = complex)
    for k in range(N):
        actual_f[k, 0] = solution_f(t_final, k * dx)
        actual_g[k, 0] = solution_g(t_final, k * dx)
    actual_solution = np.concatenate((actual_f, actual_g), axis = 0)

    print(np.sqrt(dx) * np.linalg.norm(actual_solution - approximate_solution))

It doesn't work well. The error starts off large and decreases extremely slowly as I refine the grid; it shouldn't be like this. I can't for the life of me find the problem.
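For reference, here is the small helper I use to turn the printed error norms into observed convergence orders (the name `observed_orders` is my own):

```python
import numpy as np

def observed_orders(errors):
    # log2 of the ratio of successive errors: for a first-order method,
    # halving dt (and dx) each run should drive these values toward 1.
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

print(observed_orders([0.8, 0.4, 0.2]))  # -> [1. 1.]
```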