To smooth stock price data, I want to apply the heat equation $$ u_\tau - \kappa u_{xx}=0 $$ with Neumann boundary conditions $$ \frac{\partial^2u}{\partial x^2}(x_{min}+\Delta x, \tau)=0 \\ \frac{\partial^2u}{\partial x^2}(x_{max}-\Delta x, \tau)=0 $$ where $\Delta x = \frac{x_{max}-x_{min}}{N}$ and $N$ is the number of prices available for the stock in question.
Using the central difference approximation $$ \frac{\partial^2u}{\partial x^2}(x_{min} + \Delta x, \tau_{k+1}) = \frac{\partial^2u}{\partial x^2}(x_2, \tau_{k+1}) = \frac{u_{1,k+1} - 2u_{2, k+1} + u_{3, k+1}}{\Delta x^2} + O(\Delta x^2) $$ and setting it equal to $0$ gives $$ u_{1,k+1}-2u_{2,k+1}+u_{3,k+1} = 0 \;\Rightarrow\; u_{1,k+1}=2u_{2,k+1}-u_{3,k+1} $$
Let $\hat{l}$, $\hat{d}$, and $\hat{u}$ be the vectors corresponding to the lower diagonal, diagonal, and upper diagonal entries of the $N$-by-$N$ matrix $$ A_{Impact} = \begin{bmatrix} 1+2\rho & -\rho & 0 & 0 & 0 & \cdots & 0 \\ -\rho & 1+2\rho & -\rho & 0 &0 & \cdots & 0 \\ 0 & -\rho & 1+2\rho & -\rho & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -\rho & 1+2\rho & -\rho & 0 \\ 0 & \cdots & 0 & 0 & -\rho & 1+2\rho & -\rho \\ 0 & \cdots & 0 & 0 & 0 & -\rho & 1+2\rho \end{bmatrix} $$ respectively, where $\rho = \frac{\kappa\Delta\tau}{\Delta x^2}$. The difference equation for row $j$ is $$ \hat{l}_{j,k+1}u_{j-1,k+1} + \hat{d}_{j,k+1}u_{j,k+1} + \hat{u}_{j,k+1}u_{j+1,k+1}=u_{j,k} $$ Substituting the $u_{1,k+1}$ found above into the difference equation for $j=2$ gives the left boundary: $$ \begin{align*} \hat{l}_{2,k+1}u_{1,k+1} + \hat{d}_{2,k+1}u_{2,k+1} + \hat{u}_{2,k+1}u_{3,k+1}=u_{2,k} \\ \hat{l}_{2,k+1}\left(2u_{2,k+1}-u_{3,k+1}\right) + \hat{d}_{2,k+1}u_{2,k+1} + \hat{u}_{2,k+1}u_{3,k+1}=u_{2,k}\\ \left(\hat{d}_{2,k+1}-2\hat{l}_{2,k+1}\right)u_{2,k+1} + \left(\hat{u}_{2,k+1}+\hat{l}_{2,k+1}\right)u_{3,k+1}=u_{2,k} \end{align*} $$
By the same methodology, the central difference approximation at the right boundary gives
$$
u_{N+1,k+1} = 2u_{N,k+1} - u_{N-1,k+1}
$$
and substituting into the difference equation for $j=N$
$$
\left(\hat{l}_{N,k+1}-\hat{u}_{N,k+1}\right)u_{N-1,k+1} + \left(\hat{d}_{N,k+1}+2\hat{u}_{N,k+1}\right)u_{N,k+1}=u_{N,k}
$$
This leaves me to conclude that the Neumann boundary conditions require
$$
\begin{align*}
\hat{d}_{0} = \hat{d}_{2,k+1}-2\hat{l}_{2,k+1} = (1+2\rho)-2(-\rho) = 1+4\rho \\
\hat{u}_{0} = \hat{u}_{2,k+1}+\hat{l}_{2,k+1} = (-\rho)+(-\rho) = -2\rho \\
\hat{d}_{N} = \hat{d}_{N,k+1}+2\hat{u}_{N,k+1} = (1+2\rho)+2(-\rho) = 1 \\
\hat{l}_{N-1} = \hat{l}_{N,k+1}-\hat{u}_{N,k+1} = (-\rho)-(-\rho) = 0
\end{align*}
$$
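In code, I set the three diagonals like this (a simplified sketch of what my pastebin code does, not the exact code; `N` and `rho` stand in for my actual grid size and $\rho$):

```python
# Sketch (not my exact pastebin code) of how I assemble the three
# diagonals of A_Impact with the boundary modifications derived above.
def build_diagonals(N, rho):
    lower = [-rho] * (N - 1)        # sub-diagonal   \hat{l}
    diag = [1 + 2 * rho] * N        # main diagonal  \hat{d}
    upper = [-rho] * (N - 1)        # super-diagonal \hat{u}
    diag[0] = 1 + 4 * rho           # \hat{d}_0 = 1 + 4*rho
    upper[0] = -2 * rho             # \hat{u}_0 = -2*rho
    diag[-1] = 1.0                  # \hat{d}_N = 1
    lower[-1] = 0.0                 # \hat{l}_{N-1} = 0
    return lower, diag, upper
```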
However, setting my diagonals to these values makes the left end of my smoothed curves diverge badly negative. I suspect the right end only avoids diverging because $\hat{d}_N=1$ and $\hat{l}_{N-1}=0$ effectively turn it into a Dirichlet condition.

The code I am using is in this pastebin.
Your problems seem to be twofold. First, you need to understand how to derive the difference equations. I am going to let $x_0=x_{min}$, $x_j=x_0+j\Delta x$, $x_N=x_{max}$, $u_j=u(x_j,t)$, and $v_j=u(x_j,t+\Delta t)$. Expanding in a Taylor polynomial about $x=x_0$, $$\begin{array}{ll}u_0&=u_0\\ u_1&=u_0+\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}+\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_2&=u_0+2\Delta x\frac{\partial u}{\partial x}+2(\Delta x)^2\frac{\partial^2u}{\partial x^2}+\frac43(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\end{array}$$ where all the derivatives are to be evaluated at $x=x_0$. By the now clarified left boundary condition, $\left.\frac{\partial u}{\partial x}\right|_{x=x_0}=0$, and so $$8u_1-u_2-7u_0=2(\Delta x)^2\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^4\right)$$ So we get $$\frac{8u_1-u_2-7u_0}{2(\Delta x)^2}=\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^2\right)=\frac1{\kappa}\frac{\partial u}{\partial t}=\frac{v_0-u_0}{\kappa\Delta t}$$ using the forward Euler method. Rewrite as $$v_0=\left(1-\frac72\rho\right)u_0+4\rho u_1-\frac12\rho u_2$$ where $\rho=\frac{\kappa\Delta t}{(\Delta x)^2}$. So the first row of our matrix reads $A_{00}=1-\frac72\rho$, $A_{01}=4\rho$, $A_{02}=-\frac12\rho$, and $A_{0j}=0$ for $3\le j\le N$.
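To make the left boundary row concrete, here is a small illustration of my own (in Python rather than Matlab, for brevity): the update $v_0=\left(1-\frac72\rho\right)u_0+4\rho u_1-\frac12\rho u_2$, plus a check that the one-sided stencil recovers $\frac{\partial^2u}{\partial x^2}$ exactly for a parabola with zero slope at $x_0$.

```python
# One forward-Euler update of the left boundary value, using the
# one-sided stencil derived above (rho = kappa*dt/dx**2).
def left_boundary_step(u0, u1, u2, rho):
    return (1 - 3.5 * rho) * u0 + 4 * rho * u1 - 0.5 * rho * u2

# Sanity check of the stencil on u(x) = x**2, which has u'(0) = 0 and
# u'' = 2; the combination (8*u1 - u2 - 7*u0)/(2*dx**2) is exact here
# because the cubic terms cancel.
dx = 0.1
u0, u1, u2 = 0.0, dx ** 2, (2 * dx) ** 2
second_derivative = (8 * u1 - u2 - 7 * u0) / (2 * dx ** 2)
```

Note that the row coefficients sum to $1$, so a constant profile is left unchanged, as it should be for pure diffusion.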
For interior points, expanding in a Taylor series about $x=x_j$, $$\begin{array}{ll}u_{j-1}&=u_j-\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}-\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_j&=u_j\\ u_{j+1}&=u_j+\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}+\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\end{array}$$ where all derivatives are to be evaluated at $x=x_j$. Then $$u_{j-1}+u_{j+1}-2u_j=(\Delta x)^2\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^4\right)$$ so that $$\frac{u_{j-1}+u_{j+1}-2u_j}{(\Delta x)^2}=\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^2\right)=\frac1{\kappa}\frac{\partial u}{\partial t}=\frac{v_j-u_j}{\kappa\Delta t}$$ using the forward Euler method. Rewriting: $$v_j=\rho u_{j-1}+\left(1-2\rho\right)u_j+\rho u_{j+1}$$ That means that for $1\le j\le N-1$, row $j$ reads $A_{j,j-1}=\rho$, $A_{jj}=1-2\rho$, $A_{j,j+1}=\rho$, and $A_{jk}=0$ for $k\notin\{j-1,j,j+1\}$.
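The interior update can be sketched the same way (my illustration, plain Python lists):

```python
# v_j = rho*u_{j-1} + (1 - 2*rho)*u_j + rho*u_{j+1} at every interior
# grid point; boundary values are handled separately.
def interior_step(u, rho):
    return [rho * u[j - 1] + (1 - 2 * rho) * u[j] + rho * u[j + 1]
            for j in range(1, len(u) - 1)]
```

A quick sanity check: a linear profile has zero second derivative, so the interior points should be left (numerically) unchanged.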
EDIT: Another point which could be made here is that $$u_{j+1}-u_{j-1}=2\Delta x\frac{\partial u}{\partial x}+O\left((\Delta x)^3\right)$$ so $$\frac{u_{j+1}-u_{j-1}}{2\Delta x}=\frac{\partial u}{\partial x}+O\left((\Delta x)^2\right)$$ which implies that fixing the first derivative at an interior point costs a degree of freedom by tying its neighbor points together: if we know the value of one, the other is given by the above equation.
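As a quick numerical check of that centered first-derivative formula (my illustration; $u(x)=\sin x$ chosen arbitrarily):

```python
import math

# Centered difference (u_{j+1} - u_{j-1}) / (2*dx) approximates u'(x)
# with O(dx**2) error; compare against the exact derivative cos(x).
dx = 1e-3
x = 0.7
centered = (math.sin(x + dx) - math.sin(x - dx)) / (2 * dx)
```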
Expanding in a Taylor polynomial about $x=x_N$, $$\begin{array}{ll}u_{N-2}&=u_N-2\Delta x\frac{\partial u}{\partial x}+2(\Delta x)^2\frac{\partial^2u}{\partial x^2}-\frac43(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_{N-1}&=u_N-\Delta x\frac{\partial u}{\partial x}+\frac12(\Delta x)^2\frac{\partial^2u}{\partial x^2}-\frac16(\Delta x)^3\frac{\partial^3u}{\partial x^3}+O\left((\Delta x)^4\right)\\ u_N&=u_N\end{array}$$ Hopefully we also have the right boundary condition that $\left.\frac{\partial u}{\partial x}\right|_{x=x_N}=0$, so that $$8u_{N-1}-u_{N-2}-7u_N=2(\Delta x)^2\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^4\right)$$ So now $$\frac{8u_{N-1}-u_{N-2}-7u_N}{2(\Delta x)^2}=\frac{\partial^2u}{\partial x^2}+O\left((\Delta x)^2\right)=\frac1{\kappa}\frac{\partial u}{\partial t}=\frac{v_N-u_N}{\kappa\Delta t}$$ using the forward Euler method. Rewrite as: $$v_N=-\frac12\rho u_{N-2}+4\rho u_{N-1}+\left(1-\frac72\rho\right)u_N$$ So the last row of the matrix is $A_{N,N-2}=-\frac12\rho$, $A_{N,N-1}=4\rho$, $A_{NN}=1-\frac72\rho$, and $A_{Nj}=0$ for $0\le j\le N-3$.
So that's how we can construct the $A$ matrix from the differential equation and Neumann boundary conditions. At each time step, we simply multiply the old $\vec u$ vector by $A$ to get the updated $\vec v$ vector: $$\vec v=A\vec u$$ As for the second problem, read the Wikipedia page on von Neumann stability analysis, where exactly your heat equation is used as an example. The conclusion of that example is that the forward Euler method won't work here unless $\rho\le\frac12$. Try it out for yourself: if $\rho$ is a little smaller than $\frac12$, everything proceeds normally, but if $\rho$ is just a little bigger than $\frac12$, everything blows up.
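The assembly of $A$ can be sketched as follows (my illustration in Python rather than Matlab; `explicit_step` is just a name I chose, not one of the files below):

```python
# Assemble the (N+1)x(N+1) forward-Euler matrix A described above,
# boundary rows included.
def build_A(N, rho):
    A = [[0.0] * (N + 1) for _ in range(N + 1)]
    A[0][0], A[0][1], A[0][2] = 1 - 3.5 * rho, 4 * rho, -0.5 * rho
    for j in range(1, N):
        A[j][j - 1], A[j][j], A[j][j + 1] = rho, 1 - 2 * rho, rho
    A[N][N - 2], A[N][N - 1], A[N][N] = -0.5 * rho, 4 * rho, 1 - 3.5 * rho
    return A

def explicit_step(A, u):
    # One explicit time step: v = A u.
    return [sum(a * x for a, x in zip(row, u)) for row in A]
```

A cheap sanity check on the assembly: every row of $A$ sums to $1$, so a constant profile is a steady state of the scheme.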
EDIT: With the following heat.txt
The Matlab code heat.m
EDIT: Modified to take advantage of the almost tridiagonal nature of the system. Seeing what happened when I made small mistakes in my modification, I think that's what was happening in your Python code. Von Neumann stability is not a problem for the backward Euler method like it is for the forward method. In fact, due to the different nature of the boundary difference equations, this implementation of the forward Euler method is unstable even before $\rho=0.4$.
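For reference, the "almost tridiagonal" idea can be sketched in Python as follows (my own sketch, not the contents of backward_step.m): backward Euler solves $M\vec v=\vec u$ with $M=2I-A$, which is tridiagonal apart from the corner entries $M_{0,2}$ and $M_{N,N-2}$; eliminating those against the neighboring interior rows leaves a strictly tridiagonal system for the standard Thomas algorithm.

```python
# One backward-Euler step: solve M v = u with M = 2I - A, where A is
# the forward-Euler matrix above. (A sketch, not backward_step.m.)
def backward_euler_step(u, rho):
    N = len(u) - 1
    lo = [-rho] * N                 # sub-diagonal   M[j][j-1]
    di = [1 + 2 * rho] * (N + 1)    # main diagonal  M[j][j]
    up = [-rho] * N                 # super-diagonal M[j][j+1]
    di[0], up[0] = 1 + 3.5 * rho, -4 * rho
    di[N], lo[N - 1] = 1 + 3.5 * rho, -4 * rho
    c_top, c_bot = 0.5 * rho, 0.5 * rho   # M[0][2] and M[N][N-2]
    b = list(u)
    # Eliminate M[0][2] using row 1 (entries lo[0], di[1], up[1]).
    f = c_top / up[1]
    di[0] -= f * lo[0]
    up[0] -= f * di[1]
    b[0] -= f * b[1]
    # Eliminate M[N][N-2] using row N-1 (entries lo[N-2], di[N-1], up[N-1]).
    g = c_bot / lo[N - 2]
    lo[N - 1] -= g * di[N - 1]
    di[N] -= g * up[N - 1]
    b[N] -= g * b[N - 1]
    # Thomas algorithm on the now strictly tridiagonal system.
    for j in range(1, N + 1):
        m = lo[j - 1] / di[j - 1]
        di[j] -= m * up[j - 1]
        b[j] -= m * b[j - 1]
    v = [0.0] * (N + 1)
    v[N] = b[N] / di[N]
    for j in range(N - 1, -1, -1):
        v[j] = (b[j] - up[j] * v[j + 1]) / di[j]
    return v
```

Since every row of $M$ also sums to $1$, a constant profile should pass through unchanged, and a symmetric profile should stay symmetric.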
Also needed backward_step.m
Or if backward == false, forward_step.m
Produced the following smoothed output