Uncertain how to set up the Neumann boundary conditions for Heat Equation Implicit FDM

To smooth stock price data, I am currently looking to apply the heat equation $$ u_\tau - \kappa u_{xx}=0 $$ with Neumann boundary conditions $$ \frac{d^2u}{dx^2}(x_{min}+\Delta x, \tau)=0 \\ \frac{d^2u}{dx^2}(x_{max}-\Delta x, \tau)=0 $$ where $\Delta x = \frac{x_{max}-x_{min}}{N}$ and $N$ is the number of prices that are available for the stock in question.

Using the central-difference approximation $$ \frac{d^2u}{dx^2}(x_{min} + \Delta x, \tau_{k+1}) = \frac{d^2u}{dx^2}(x_2, \tau_{k+1}) = \frac{u_{1,k+1} - 2u_{2, k+1} + u_{3, k+1}}{h^2} + O(h^2) $$ and setting it equal to $0$ gives $$ u_{1,k+1}-2u_{2,k+1}+u_{3,k+1} = 0 \rightarrow u_{1,k+1}=2u_{2,k+1}-u_{3,k+1} $$
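(As a quick sanity check — a small Python sketch, not part of my original code — the centered stencil can be verified numerically against an arbitrary smooth test function, here $u(x)=\sin x$:)

```python
import numpy as np

# Verify that the centered stencil (u1 - 2*u2 + u3) / h^2 approximates
# the second derivative with O(h^2) error, using u(x) = sin(x) at x = 1.
h = 1e-3
x = 1.0
u1, u2, u3 = np.sin(x - h), np.sin(x), np.sin(x + h)
approx = (u1 - 2*u2 + u3) / h**2
exact = -np.sin(x)             # (sin x)'' = -sin x
err = abs(approx - exact)
print(err)                     # roughly h^2/12 in size, i.e. tiny
```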

Letting $\hat{l}$, $\hat{d}$, and $\hat{u}$ be the vectors corresponding to the lower-diagonal, diagonal, and upper-diagonal entries of the $N$-by-$N$ matrix $$ A_{Implicit} = \begin{bmatrix} 1+2\rho & -\rho & 0 & 0 & 0 & \cdots & 0 \\ -\rho & 1+2\rho & -\rho & 0 &0 & \cdots & 0 \\ 0 & -\rho & 1+2\rho & -\rho & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -\rho & 1+2\rho & -\rho & 0 \\ 0 & \cdots & 0 & 0 & -\rho & 1+2\rho & -\rho \\ 0 & \cdots & 0 & 0 & 0 & -\rho & 1+2\rho \end{bmatrix} $$ respectively, where $\rho = \frac{\kappa\Delta\tau}{\Delta x^2}$, and letting the difference equation for row $j$ be $$ \hat{l}_{j,k+1}u_{j-1,k+1} + \hat{d}_{j,k+1}u_{j,k+1} + \hat{u}_{j,k+1}u_{j+1,k+1}=u_{j,k} $$ we can substitute the $u_{1,k+1}$ found above into the difference equation for $j=2$ for the left boundary $$ \begin{align*} \hat{l}_{2,k+1}u_{1,k+1} + \hat{d}_{2,k+1}u_{2,k+1} + \hat{u}_{2,k+1}u_{3,k+1}=u_{2,k} \\ \hat{l}_{2,k+1}\left(2u_{2,k+1}-u_{3,k+1}\right) + \hat{d}_{2,k+1}u_{2,k+1} + \hat{u}_{2,k+1}u_{3,k+1}=u_{2,k}\\ \left(\hat{d}_{2,k+1}-2\hat{l}_{2,k+1}\right)u_{2,k+1} + \left(\hat{u}_{2,k+1}+\hat{l}_{2,k+1}\right)u_{3,k+1}=u_{2,k} \end{align*} $$

Through similar methodology, we can also find the right boundary from the central-difference approximation, giving $$ u_{N+1,k+1} = 2u_{N,k+1} - u_{N-1,k+1} $$ and substituting into the difference equation for $j=N$ $$ \left(\hat{l}_{N,k+1}-\hat{u}_{N,k+1}\right)u_{N-1,k+1} + \left(\hat{d}_{N,k+1}+2\hat{u}_{N,k+1}\right)u_{N,k+1}=u_{N,k} $$ This leads me to conclude that the Neumann boundary conditions require $$ \begin{align*} \hat{d}_{0} = \hat{d}_{2,k+1}-2\hat{l}_{2,k+1} = (1+2\rho)-2(-\rho) = 1+4\rho \\ \hat{u}_{0} = \hat{u}_{2,k+1}+\hat{l}_{2,k+1} = (-\rho)+(-\rho) = -2\rho \\ \hat{d}_{N} = \hat{d}_{N,k+1}+2\hat{u}_{N,k+1} = (1+2\rho)+2(-\rho) = 1 \\ \hat{l}_{N-1} = \hat{l}_{N,k+1}-\hat{u}_{N,k+1} = (-\rho)-(-\rho) = 0 \end{align*} $$ However, setting my diagonals to these values leads the left ends of my smoothed curves to diverge badly in the negative direction; I suspect the right end only avoids diverging because its equation reduces to the Dirichlet condition.

The code I am using is in this pastebin.

Best answer:

Your problems seem to be twofold. First, you need to understand how to derive the difference equations. I am going to let $x_0=x_{min}$, $x_j=x_0+j\Delta x$, $x_N=x_{max}$, $u_j=u(x_j,t)$, and $v_j=u(x_j,t+\Delta t)$. Expanding in a Taylor polynomial about $x=x_0$, $$\begin{array}{ll}u_0&=u_0\\ u_1&=u_0+\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}+\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_2&=u_0+2\Delta x\frac{\partial u}{\partial x}+2(\Delta x)^2\frac{\partial^2u}{\partial x^2}+\frac43(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\end{array}$$ where all the derivatives are to be evaluated at $x=x_0$. By the now-clarified left boundary condition, $\left.\frac{\partial u}{\partial x}\right|_{x=x_0}=0$, and so $$8u_1-u_2-7u_0=2(\Delta x)^2\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^4\right)$$ So we get $$\frac{8u_1-u_2-7u_0}{2(\Delta x)^2}=\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^2\right)=\frac1{\kappa}\frac{\partial u}{\partial t}=\frac{v_0-u_0}{\kappa\Delta t}$$ using the forward Euler method. Rewrite this as $$v_0=\left(1-\frac72\rho\right)u_0+4\rho u_1-\frac12\rho u_2$$ where $\rho=\frac{\kappa\Delta t}{(\Delta x)^2}$. So the first row of our matrix reads $A_{00}=1-\frac72\rho$, $A_{01}=4\rho$, $A_{02}=-\frac12\rho$, and $A_{0j}=0$ for $3\le j\le N$.
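To see that the one-sided boundary stencil really does what we claim, here is a small numerical check (a Python sketch, not part of the original post): $u(x)=\cos x$ satisfies the boundary condition $u'(0)=0$ and has $u''(0)=-1$.

```python
import numpy as np

# Check the one-sided boundary stencil (8*u1 - u2 - 7*u0) / (2*dx^2)
# against u'' at x0 for a test function with u'(x0) = 0:
# u(x) = cos(x) has u'(0) = 0 and u''(0) = -1.
dx = 1e-2
u0, u1, u2 = np.cos(0.0), np.cos(dx), np.cos(2*dx)
approx = (8*u1 - u2 - 7*u0) / (2*dx**2)
print(approx)   # close to -1, with O(dx^2) error
```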

For interior points, expanding in a Taylor series about $x=x_j$, $$\begin{array}{ll}u_{j-1}&=u_j-\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}-\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_j&=u_j\\ u_{j+1}&=u_j+\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}+\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\end{array}$$ where all derivatives are to be evaluated at $x=x_j$. Then $$u_{j-1}+u_{j+1}-2u_j=(\Delta x)^2\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^4\right)$$ So that $$\frac{u_{j-1}+u_{j+1}-2u_j}{(\Delta x)^2}=\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^2\right)=\frac1{\kappa}\frac{\partial u}{\partial t}=\frac{v_j-u_j}{\kappa\Delta t}$$ using the forward Euler method. Rewriting: $$v_j=\rho u_{j-1}+\left(1-2\rho\right)u_j+\rho u_{j+1}$$ That means that for $1\le j\le N-1$, row $j$ reads $A_{j,j-1}=\rho$, $A_{jj}=1-2\rho$, $A_{j,j+1}=\rho$, and $A_{jk}=0$ for $k\notin\{j-1,j,j+1\}$.

EDIT: Another point which could be made here is that $$u_{j+1}-u_{j-1}=2\Delta x\frac{\partial u}{\partial x}+O\left((\Delta x)^3\right)$$ So $$\frac{u_{j+1}-u_{j-1}}{2\Delta x}=\frac{\partial u}{\partial x}+O\left((\Delta x)^2\right)$$ which implies that fixing the first derivative at an interior point costs a degree of freedom, in that its neighbor points are tied together: if we know the value of one, the other is given by the above equation.
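The claimed second-order accuracy of that centered first difference is easy to confirm numerically (a Python sketch, with $u(x)=\sin x$ as an arbitrary test function): halving $\Delta x$ should quarter the error.

```python
import numpy as np

# Check that (u_{j+1} - u_{j-1}) / (2*dx) is a second-order approximation
# of u', using u(x) = sin(x) at x = 1, where u'(1) = cos(1).
exact = np.cos(1.0)
errs = []
for dx in (1e-2, 5e-3):
    approx = (np.sin(1.0 + dx) - np.sin(1.0 - dx)) / (2*dx)
    errs.append(abs(approx - exact))
ratio = errs[0] / errs[1]
print(ratio)   # near 4: halving dx quarters the error
```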

Expanding in a Taylor polynomial about $x=x_N$, $$\begin{array}{ll}u_{N-2}&=u_N-2\Delta x\frac{\partial u}{\partial x}+2(\Delta x)^2\frac{\partial^2u}{\partial x^2}-\frac43(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_{N-1}&=u_N-\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}-\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_N&=u_N\end{array}$$ Hopefully we also have the right boundary condition that $\left.\frac{\partial u}{\partial x}\right|_{x=x_N}=0$, so that $$8u_{N-1}-u_{N-2}-7u_N=2(\Delta x)^2\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^4\right)$$ So now $$\frac{8u_{N-1}-u_{N-2}-7u_N}{2(\Delta x)^2}=\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^2\right)=\frac1{\kappa}\frac{\partial u}{\partial t}=\frac{v_N-u_N}{\kappa\Delta t}$$ using the forward Euler method. Rewrite this as $$v_N=-\frac12\rho u_{N-2}+4\rho u_{N-1}+\left(1-\frac72\rho\right)u_N$$ So the last row of the matrix is $A_{N,N-2}=-\frac12\rho$, $A_{N,N-1}=4\rho$, $A_{NN}=1-\frac72\rho$, and $A_{Nj}=0$ for $0\le j\le N-3$.
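Putting the three row types together, the whole matrix takes only a few lines to assemble (a Python sketch of the rows derived above, not the original Matlab; the helper name `heat_matrix` is mine). One quick consistency check: every row sums to $1$, so a constant temperature profile is a fixed point of the update, exactly as the heat equation requires.

```python
import numpy as np

# Assemble the full (N+1) x (N+1) forward-Euler matrix from the three
# row types derived above.
def heat_matrix(N, rho):
    A = np.zeros((N + 1, N + 1))
    # Left boundary row (one-sided stencil):
    A[0, 0], A[0, 1], A[0, 2] = 1 - 3.5*rho, 4*rho, -0.5*rho
    # Interior rows (centered stencil):
    for j in range(1, N):
        A[j, j-1], A[j, j], A[j, j+1] = rho, 1 - 2*rho, rho
    # Right boundary row (mirror image of the left one):
    A[N, N-2], A[N, N-1], A[N, N] = -0.5*rho, 4*rho, 1 - 3.5*rho
    return A

A = heat_matrix(10, 0.15625)
# Each row sums to 1, so A maps a constant vector to itself.
print(A.sum(axis=1))   # all ones
```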

So that's how we can construct the $A$ matrix using the differential equation and the Neumann boundary conditions. At each time step, we simply multiply the old $\vec u$ vector by the $A$ matrix to get the updated $\vec v$ vector: $$\vec v=A\vec u$$ As for the second problem, read the Wikipedia page on von Neumann stability analysis, where, lo and behold, exactly your heat equation is used as an example. The conclusion of that example is that the forward Euler method won't work in this case unless $\rho\le\frac12$. Try it out for yourself: if $\rho$ is a little smaller than $\frac12$, everything proceeds normally, but if $\rho$ is just a little bigger than $\frac12$, everything goes blooey.
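The $\rho\le\frac12$ limit can be read off from the von Neumann amplification factor of the interior stencil (a Python sketch of the standard analysis; note it covers only the interior rows, not the modified boundary rows):

```python
import numpy as np

# Von Neumann analysis of the interior stencil: substituting the Fourier
# mode u_j^k = g^k * exp(1j*j*theta) into
#     v_j = rho*u_{j-1} + (1 - 2*rho)*u_j + rho*u_{j+1}
# gives the amplification factor g(theta) = 1 - 2*rho*(1 - cos(theta)).
# Stability needs |g| <= 1 for all theta; the worst mode is theta = pi,
# where g = 1 - 4*rho, hence the condition rho <= 1/2.
def g(theta, rho):
    return 1 - 2*rho*(1 - np.cos(theta))

theta = np.linspace(0, np.pi, 1001)
print(np.max(np.abs(g(theta, 0.49))))   # <= 1: stable
print(np.max(np.abs(g(theta, 0.51))))   # > 1: the theta = pi mode grows
```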

EDIT: With the following heat.txt

0     187.179993
1     183.919998
2     185.399994
3     187.970001
4     190.580002
5     190.350006
6     187.880005
7     191.029999
8     191.330002
9     190.910004
10    191.449997
11    190.399994
12    191.880005
13    191.440002
14    191.610001
15    193.000000
16    194.820007
17    194.210007
18    190.979996
19    189.910004
20    190.289993
21    201.500000
22    207.389999
23    207.990005
24    209.070007
25    207.110001
26    207.250000
27    208.880005
28    207.529999
29    208.869995

The Matlab code heat.m
EDIT: Modified to take advantage of the almost-tridiagonal nature of the system. Seeing what happened when I made little mistakes in my modification, I think that's what was happening in your Python code. Von Neumann stability is not a problem for the backward Euler method the way it is for the forward method. In fact, due to the different nature of the boundary difference equations, this implementation of the forward Euler method goes unstable even before $\rho=0.4$.

% heat.m

clear all;
close all;
%backward = false;
backward = true;
raw = importdata('heat.txt');
u = raw(:,2);
t = raw(:,1);
N = length(u)-1;
rho = 0.15625;
if backward,
    rho = -rho;
end
% Set up A matrix
lower = rho*ones(size(u)); % Lower diagonal
lower(1) = -1/2*rho; % A(1,3)
lower(N+1) = 4*rho; % A(N+1,N)
d = (1-2*rho)*ones(size(u)); % Principal diagonal
d(1) = 1-7/2*rho; % A(1,1)
d(N+1) = 1-7/2*rho; % A(N+1,N+1)
upper = rho*ones(size(u)); % Upper diagonal
upper(1) = 4*rho; % A(1,2)
upper(N+1) = -1/2*rho; % A(N+1,N-1);

plot(t,u)
title('Smoothing Exercise');
xlabel('t');
ylabel('u');
hold on;
for k = 1:20,
    if backward,
        u = backward_step(lower,d,upper,u);
    else
        u = forward_step(lower,d,upper,u);
    end
    plot(t,u);
end

Also needed is backward_step.m

% backward_step.m

function x = backward_step(l,d,u,b);

% On entry l, d, and u represent an (N+1)X(N+1) almost tridiagonal matrix A
% l is the lower diagonal: l(i) = A(i,i-1) except l(1) = A(1,3)
% d is the principal diagonal: d(i) = A(i,i)
% u = the upper diagonal: u(i) = A(i,i+1) except u(N+1) = A(N+1,N-1)

% On exit, x is the solution to A*x=b

% Make necessary copies -- not necessary for Matlab, which passes by value
N = length(b)-1;
dD = d;
x = b;
temp_1 = u(1);
temp_N = l(N+1);

% Perform an upwards step of Gaussian elimination:
temp = l(1)/u(2);
dD(1) = dD(1)-temp*l(2);
u(1) = u(1)-temp*dD(2); % Why we had to save u(1)
x(1) = x(1)-temp*x(2);

% Perform a downwards step of Gaussian elimination:
temp = u(N+1)/l(N);
l(N+1) = l(N+1)-temp*dD(N); % Why we had to save l(N+1)
dD(N+1) = dD(N+1)-temp*u(N);
x(N+1) = x(N+1)-temp*x(N);

% The two off-tridiagonal entries are eliminated, so the matrix is now
% tridiagonal. Sweep down:

for i = 2:N+1,
    temp = l(i)/dD(i-1);
    dD(i) = dD(i)-temp*u(i-1);
    x(i) = x(i)-temp*x(i-1);
end

% Now to sweep up:

x(N+1) = x(N+1)/dD(N+1);
for i = N:-1:1,
    x(i) = (x(i)-u(i)*x(i+1))/dD(i);
end

% Restore the two saved elements:

u(1) = temp_1;
l(N+1) = temp_N;
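For comparison, here is the same elimination transcribed to Python/NumPy (a sketch following the Matlab conventions above, shifted to 0-based indexing: l[0] holds A[0,2] and u[N] holds A[N,N-2]). Checking it against a dense solver is a good way to catch exactly the kind of little index mistakes mentioned above.

```python
import numpy as np

# Almost-tridiagonal solve: l[i] = A[i,i-1] except l[0] = A[0,2],
# d[i] = A[i,i], u[i] = A[i,i+1] except u[N] = A[N,N-2].
# Returns x solving A x = b; the inputs are not modified.
def backward_step(l, d, u, b):
    l, d, u, x = l.copy(), d.copy(), u.copy(), b.copy()
    N = len(b) - 1
    # Eliminate the off-tridiagonal A[0,2] using row 1:
    t = l[0] / u[1]
    d[0] -= t * l[1]
    u[0] -= t * d[1]
    x[0] -= t * x[1]
    # Eliminate the off-tridiagonal A[N,N-2] using row N-1:
    t = u[N] / l[N-1]
    l[N] -= t * d[N-1]
    d[N] -= t * u[N-1]
    x[N] -= t * x[N-1]
    # The matrix is now tridiagonal; ordinary forward sweep...
    for i in range(1, N + 1):
        t = l[i] / d[i-1]
        d[i] -= t * u[i-1]
        x[i] -= t * x[i-1]
    # ...and back substitution:
    x[N] /= d[N]
    for i in range(N - 1, -1, -1):
        x[i] = (x[i] - u[i] * x[i+1]) / d[i]
    return x
```

As in the Matlab version, the two corner entries are eliminated first, after which an ordinary Thomas-algorithm sweep finishes the solve in O(N).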

Or if backward == false, forward_step.m

% forward_step.m

function x = forward_step(l,d,u,b);

% On entry l, d, and u represent an (N+1)X(N+1) almost tridiagonal matrix A
% l is the lower diagonal: l(i) = A(i,i-1) except l(1) = A(1,3)
% d is the principal diagonal: d(i) = A(i,i)
% u = the upper diagonal: u(i) = A(i,i+1) except u(N+1) = A(N+1,N-1)

% On exit, x = A*b

N = length(b)-1;
x = zeros(size(b)); % Make solution vector of the right shape

% First row is different
x(1) = d(1)*b(1)+u(1)*b(2)+l(1)*b(3);

% Middle rows
for i = 2:N,
    x(i) = l(i)*b(i-1)+d(i)*b(i)+u(i)*b(i+1);
end

% Last row is also different
x(N+1) = u(N+1)*b(N-1)+l(N+1)*b(N)+d(N+1)*b(N+1);

This produced the smoothed output shown in heat1.png.
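For readers without Matlab, the whole backward pass can be reproduced with a dense solve in a few lines of Python (a sketch, not the original code: it rebuilds the matrix rows derived above, applies the same $\rho\to-\rho$ trick as heat.m, uses numpy.linalg.solve instead of the specialized elimination, and inlines the heat.txt prices; the helper name `heat_matrix` is mine):

```python
import numpy as np

# Forward-Euler matrix from the rows derived in the answer; calling it
# with -rho gives the backward-Euler (implicit) system matrix, the same
# trick heat.m uses.
def heat_matrix(N, rho):
    A = np.zeros((N + 1, N + 1))
    A[0, 0], A[0, 1], A[0, 2] = 1 - 3.5*rho, 4*rho, -0.5*rho
    for j in range(1, N):
        A[j, j-1], A[j, j], A[j, j+1] = rho, 1 - 2*rho, rho
    A[N, N-2], A[N, N-1], A[N, N] = -0.5*rho, 4*rho, 1 - 3.5*rho
    return A

# The prices from heat.txt, inlined:
prices = np.array([
    187.179993, 183.919998, 185.399994, 187.970001, 190.580002,
    190.350006, 187.880005, 191.029999, 191.330002, 190.910004,
    191.449997, 190.399994, 191.880005, 191.440002, 191.610001,
    193.000000, 194.820007, 194.210007, 190.979996, 189.910004,
    190.289993, 201.500000, 207.389999, 207.990005, 209.070007,
    207.110001, 207.250000, 208.880005, 207.529999, 208.869995,
])
N = len(prices) - 1
B = heat_matrix(N, -0.15625)     # backward-Euler matrix via rho -> -rho
u = prices.copy()
for _ in range(20):
    u = np.linalg.solve(B, u)    # one implicit smoothing step
print(u.round(2))                # smoothed price series
```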