I was implementing my own Gaussian filter and assumed it was the same thing as a Gaussian blur. However, Wikipedia has two different articles on the topic.
To summarize, both methods involve a convolution with some form of symmetric kernel.
Using numpy and scipy.special:
import numpy as np
import scipy.special as sp
The Gaussian blur integrates the Gaussian across each pixel, so the kernel is computed from differences of error functions:
$G_E(n,\sigma) \propto \mathrm{erf}\left(\frac{n+0.5}{\sqrt{2} \sigma}\right) - \mathrm{erf}\left(\frac{n-0.5}{\sqrt{2} \sigma}\right)$
def ErrKernel(sig=1, sz=7):
    K0 = sp.erf((np.arange(sz) - 0.5 - sz//2) / (sig * np.sqrt(2)))
    K1 = sp.erf((np.arange(sz) + 0.5 - sz//2) / (sig * np.sqrt(2)))
    return (K1 - K0) / np.sum(K1 - K0)
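As a sanity check (my own sketch, not from either article), the erf-difference weights should match numerically integrating the continuous Gaussian over each unit-width pixel:

```python
import numpy as np
import scipy.special as sp
from scipy.integrate import quad

def ErrKernel(sig=1, sz=7):
    K0 = sp.erf((np.arange(sz) - 0.5 - sz//2) / (sig * np.sqrt(2)))
    K1 = sp.erf((np.arange(sz) + 0.5 - sz//2) / (sig * np.sqrt(2)))
    return (K1 - K0) / np.sum(K1 - K0)

sig, sz = 1.0, 7
gauss = lambda x: np.exp(-x**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

# Integrate the continuous Gaussian over each pixel [n - 0.5, n + 0.5]
weights = np.array([quad(gauss, n - 0.5, n + 0.5)[0]
                    for n in np.arange(sz) - sz//2])
weights /= weights.sum()

# Both constructions agree (up to quadrature tolerance)
print(np.max(np.abs(weights - ErrKernel(sig, sz))))
```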
The Gaussian filter is defined in terms of scale-space representation. The sampled Gaussian kernel is:
$G_S(n,\sigma) \propto \mathrm{exp}\left(- \frac{n^2}{2 \sigma^2} \right) $
def GaussKernel(sig=1, sz=7):
    Kern = np.exp(-((np.arange(sz) - sz//2)**2) / (2 * sig * sig))
    return Kern / np.sum(Kern)
And the "better" discrete Gaussian kernel is:
$G_D(n,\sigma) = \mathrm{exp}(-\sigma^2) I_n(\sigma^2)$
where $I_v(t)$ is the modified Bessel function of order $v$ evaluated at $t$.
def disGKernel(sig=1, sz=7):
    t = sig * sig
    Kern = np.exp(-t) * sp.iv(np.arange(sz) - sz//2, t)
    return Kern / np.sum(Kern)
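One concrete property behind that "better" (my own numerical sketch; the $t$ values are arbitrary): the discrete Gaussian satisfies the semigroup (cascade) property exactly on the grid, i.e. convolving the kernel for $t_1$ with the kernel for $t_2$ gives the kernel for $t_1+t_2$, up to tail truncation. The sampled Gaussian only does so approximately:

```python
import numpy as np
import scipy.special as sp

def GaussKernel(sig=1, sz=7):
    Kern = np.exp(-((np.arange(sz) - sz//2)**2) / (2 * sig * sig))
    return Kern / np.sum(Kern)

def disGKernel(sig=1, sz=7):
    t = sig * sig
    Kern = np.exp(-t) * sp.iv(np.arange(sz) - sz//2, t)
    return Kern / np.sum(Kern)

sz = 31  # wide support so tail truncation is negligible

# Cascade t1 = 0.5 then t2 = 1.5, compare with the direct t = 2.0 kernel
for kern in (disGKernel, GaussKernel):
    k1 = kern(np.sqrt(0.5), sz)
    k2 = kern(np.sqrt(1.5), sz)
    cascade = np.convolve(k1, k2)              # support grows to 2*sz - 1
    direct = kern(np.sqrt(2.0), 2 * sz - 1)
    print(kern.__name__, np.max(np.abs(cascade - direct)))
```

The discrete Gaussian's residual is essentially just tail truncation, while the sampled Gaussian's is orders of magnitude larger, which matters when a scale-space pyramid is built by repeated small blurs.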
All three methods return slightly different weights, but as sigma grows the kernels converge toward one another.
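To put numbers on that convergence (my own quick check; the kernel sizes are chosen to cover roughly $\pm 4\sigma$):

```python
import numpy as np
import scipy.special as sp

def ErrKernel(sig=1, sz=7):
    K0 = sp.erf((np.arange(sz) - 0.5 - sz//2) / (sig * np.sqrt(2)))
    K1 = sp.erf((np.arange(sz) + 0.5 - sz//2) / (sig * np.sqrt(2)))
    return (K1 - K0) / np.sum(K1 - K0)

def GaussKernel(sig=1, sz=7):
    Kern = np.exp(-((np.arange(sz) - sz//2)**2) / (2 * sig * sig))
    return Kern / np.sum(Kern)

def disGKernel(sig=1, sz=7):
    t = sig * sig
    Kern = np.exp(-t) * sp.iv(np.arange(sz) - sz//2, t)
    return Kern / np.sum(Kern)

for sig in (0.5, 1.0, 3.0):
    sz = 2 * int(np.ceil(4 * sig)) + 1   # odd size, about +/-4 sigma
    e, g, d = ErrKernel(sig, sz), GaussKernel(sig, sz), disGKernel(sig, sz)
    # maximum weight discrepancy shrinks as sigma grows
    print(f"sig={sig}: erf vs sampled {np.max(np.abs(e - g)):.2e}, "
          f"erf vs discrete {np.max(np.abs(e - d)):.2e}")
```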
I think I understand the purpose of using one kernel over another. The blur is simply meant to blur the image according to a Gaussian distribution, while the discrete and sampled Gaussians are for scale-space decompositions: you want to average the signal to remove high-frequency noise while minimizing the blurring as much as possible. Is that right?
I guess I'm confused about the mathematics. Why is a discrete Gaussian better than an integrated Gaussian for scale-space representation? How does the Bessel function fall out of the derivation of the discrete Gaussian kernel? Any recommended literature for a novice? Thanks.