Understanding an algorithm that shows $(x+\partial_x)e^{-x^2/2}=0$ using Python?


I am given the following Python code which is supposed to verify numerically that

$$(\partial_x + x)e^{-x^2/2}=0.$$

The algorithm does this by transforming everything into a Fourier basis and carrying out the verification there. However, I have some difficulties understanding the mathematical basis of this algorithm. In short, my question is:

When the final result is computed as np.dot( D, yl ), what exactly are we computing in mathematical terms?

    import numpy as np

    ## non-normalized Gaussian with sigma = 1
    def gauss( x ):
        return np.exp( -x**2 / 2 )

    ## interval on which the Gaussian is evaluated
    L = 10
    ## number of sampling points
    N = 21
    ## sample spacing
    dl = L / N
    ## highest frequency detectable
    kmax = 1 / ( 2 * dl )

    ## array of x values
    xl = np.linspace( -L/2, L/2, N )
    ## array of k values
    kl = np.linspace( -kmax, kmax, N )

    ## matrix of exponents
    ## the Fourier transform is defined via sum f * exp( -2 pi j k x )
    ## i.e. the 2 pi is in the exponent
    ## normalization is sqrt(N), where N is the number of sampling points
    ## this definition makes it forward-backward symmetric
    ## "outer" also exists in Matlab and basically does the same
    exponent = np.outer( -1j * 2 * np.pi * kl, xl )
    ## linear operator for the standard Fourier transformation
    A = np.exp( exponent ) / np.sqrt( N )

    ## the nth derivative is given via integration by parts as ( 2 pi j k )^n f(k)
    ## every row needs to be multiplied by the according k
    B = np.array( [ 1j * 2 * np.pi * kk * An for kk, An in zip( kl, A ) ] )

    ## for the part with the linear term, every column needs to be multiplied
    ## by the according x or -- as here -- every row is multiplied
    ## element-wise with the x-vector
    C = np.array( [ xl * An for An in A ] )

    ## that's the according linear operator
    D = B + C

    ## the Gaussian
    yl = gauss( xl )

    ## the transformation with the linear operator
    print( np.dot( D, yl ).round( decimals=9 ) )
    ## ...results in a zero-vector, as expected
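Independently of the Fourier machinery, the identity can be sanity-checked pointwise with a central finite difference for $\partial_x$ (a minimal sketch; the step size `h` is an arbitrary small choice, not part of the original code):

```python
import numpy as np

def gauss(x):
    return np.exp(-x**2 / 2)

x = np.linspace(-5, 5, 21)
h = 1e-6  # finite-difference step (arbitrary small value)

# central-difference approximation of the derivative of the Gaussian
dgauss = (gauss(x + h) - gauss(x - h)) / (2 * h)

# (d/dx + x) applied to the Gaussian should vanish at every sample point
residual = dgauss + x * gauss(x)
print(np.abs(residual).max())  # close to zero, up to round-off
```

This confirms the identity directly in real space; the Fourier-based code in the question verifies the same statement, but frequency by frequency.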
Best answer:

The line

print( np.dot( D, yl ).round( decimals=9 ) )

calculates, at the set of sampled frequencies, the integrals of the Fourier transform using the rectangle rule. Integrating by parts,
\begin{equation}
F(k) = \int\limits^{\infty}_{-\infty}e^{-ikx}\left(\partial_x + x\right)e^{-\frac{x^2}{2}}\,dx = \left.e^{-ikx}e^{-\frac{x^2}{2}}\right\vert^{\infty}_{-\infty} + \int\limits^{\infty}_{-\infty}(ik + x)\,e^{-ikx}e^{-\frac{x^2}{2}}\,dx.
\end{equation}
The boundary term vanishes, and the remaining integral is truncated to the sampled interval $[-5,5]$ and discretized:
\begin{equation}
F(k) = \int\limits^{\infty}_{-\infty}(ik + x)\,e^{-ikx}e^{-\frac{x^2}{2}}\,dx \approx \int\limits^{5}_{-5}(ik + x)\,e^{-ikx}e^{-\frac{x^2}{2}}\,dx \approx \sum\limits^{N-1}_{j=0}(ik + x_j)\,e^{-ikx_j}\,\Delta x\, e^{-\frac{x_j^2}{2}} = \sum\limits^{N-1}_{j=0} v^{(k)}_j f_j = (\vec{v}^{(k)}, \vec{f}),
\end{equation}
where $\Delta x$ is the coordinate step, $v^{(k)}_j = (ik + x_j)e^{-ikx_j}\Delta x$ and $f_j = e^{-\frac{x^2_j}{2}}$. (Here $k$ denotes the angular frequency; in the code the exponent is $-2\pi i k x$, and the normalization is $1/\sqrt{N}$ instead of $\Delta x$, which only rescales the result by a constant.) Since we want to calculate the result vector $\vec{r}$ with $r_p = F(k_p)$, we can introduce the matrix $D$ whose rows are the vectors $\vec{v}^{(k_p)}$:
\begin{equation}
D = \begin{pmatrix} (\vec{v}^{(k_0)})^{\intercal}\\ (\vec{v}^{(k_1)})^{\intercal}\\ \vdots\\ (\vec{v}^{(k_{N-1})})^{\intercal} \end{pmatrix}
= \begin{pmatrix} v^{(k_0)}_0 & v^{(k_0)}_1 & \ldots & v^{(k_0)}_{N-1}\\ v^{(k_1)}_0 & v^{(k_1)}_1 & \ldots & v^{(k_1)}_{N-1}\\ & & \ddots & \\ v^{(k_{N-1})}_0 & v^{(k_{N-1})}_1 & \ldots & v^{(k_{N-1})}_{N-1} \end{pmatrix},
\end{equation}
so the vector $\vec{r}$ can be calculated as
\begin{equation}
\vec{r} = D\vec{f}.
\end{equation}
The last equation corresponds to np.dot( D, yl ) in the code.
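The derivation above can be mirrored line by line in code: build each row of $D$ as the vector $\vec{v}^{(k_p)}$ and apply the matrix to $\vec{f}$. This is a sketch, not the original code: `L`, `N`, `xl`, `kl` are taken from the question, and $\Delta x$ is used as the quadrature weight in place of the code's $1/\sqrt{N}$ normalization, which only rescales the result by a constant.

```python
import numpy as np

L, N = 10, 21
xl = np.linspace(-L/2, L/2, N)
dl = L / N
kmax = 1 / (2 * dl)
kl = np.linspace(-kmax, kmax, N)
dx = xl[1] - xl[0]           # coordinate step, Delta x

f = np.exp(-xl**2 / 2)       # f_j = exp(-x_j^2 / 2)

# angular frequencies: the code's convention puts 2*pi into the exponent
omega = 2 * np.pi * kl

# row p of D:  v^(k_p)_j = (i*omega_p + x_j) * exp(-i*omega_p*x_j) * dx
D = (1j * omega[:, None] + xl[None, :]) * np.exp(-1j * np.outer(omega, xl)) * dx

r = D @ f                    # r_p = F(k_p), the rectangle-rule quadrature
print(np.abs(r).max())       # close to zero, up to discretization error
```

The result vector agrees with the question's `np.dot( D, yl )` up to the constant factor $\Delta x \sqrt{N}$, so both vanish to within numerical error.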

It is also important to note that, according to the Nyquist–Shannon (Kotelnikov) sampling theorem, there is no loss of information only if the highest frequency present in the signal's spectrum is at most half the sampling frequency. Half the sampling frequency is called the Nyquist frequency:

kmax= 1 / ( 2 * dl )
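For the Gaussian this condition is comfortably satisfied, which can be checked with its closed-form Fourier transform. Under the same convention as the code ($e^{-2\pi i k x}$ in the exponent), the transform of $e^{-x^2/2}$ is $\sqrt{2\pi}\,e^{-2\pi^2 k^2}$; a quick sketch evaluating it at the grid's Nyquist frequency:

```python
import numpy as np

L, N = 10, 21
dl = L / N
kmax = 1 / (2 * dl)   # Nyquist frequency of the sampling grid

# analytic Fourier transform of exp(-x^2/2) with the exp(-2*pi*i*k*x)
# convention: F(k) = sqrt(2*pi) * exp(-2*pi**2 * k**2)
spectrum_at_nyquist = np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * kmax**2)
print(spectrum_at_nyquist)  # negligible compared to F(0) = sqrt(2*pi)
```

Since the spectral density at and beyond `kmax` is negligible, sampling the Gaussian at this rate loses essentially no information, which is why the discrete check in the question works so well.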