## No, that's not a Gaussian filter

I recently got a question from a reader regarding Gaussian filtering, in which he says:

> I have seen some code use a 3x3 Gaussian kernel like `h1 = [1, 2, 1]/4` to do the separable filtering. The paper by Burt and Adelson ("The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communication, 31:532-540, 1983) seems to use a 5x5 Gaussian kernel like `h1 = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2]`, where `a` is between 0.3 and 0.6. A typical value of `a` is 0.375, so the Gaussian kernel is `h1 = [0.0625, 0.25, 0.375, 0.25, 0.0625]`, or equivalently `h1 = [1, 4, 6, 4, 1]/16`.

I have written previously about Gaussian filtering, but neither of those posts makes it clear what a Gaussian filter kernel looks like.

## Window size

In a previous post I discussed how the size in pixels of the kernel is determined by the chosen σ. Clearly, the Gaussian bell curve has a width given by σ, and we need a sampling window large enough to contain the whole bell. With the convolution operation, we assume zero values outside the window. The Gaussian has long tails (in fact, it never reaches zero!), but it gets close to zero quite quickly. For example, at 3σ the Gaussian has a value of 0.0044, and if we assume it to be zero beyond that point, we ignore less than 0.3% of the total “weight” of the curve. Using a larger window to sample the same Gaussian will therefore not increase the precision by much, but using a smaller window quickly degrades the sampled Gaussian until it doesn’t look like a Gaussian any more. This is why I always recommend cutting off at 3σ, though in some circumstances it might be better to increase that value. This is a graphical representation of the Gaussian cut off at various multiples of σ:
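As a concrete sketch of this recommendation, here is one way to sample a 1D Gaussian with the window cut off at 3σ. The function name and this NumPy implementation are my own, not taken from any particular library:

```python
import numpy as np

def gaussian_kernel(sigma, truncation=3.0):
    """Sample a 1D Gaussian, cutting the window off at `truncation` * sigma."""
    radius = int(np.ceil(truncation * sigma))   # half-width of the window, in pixels
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()                          # normalize so the weights sum to 1

kernel = gaussian_kernel(2.0)
print(len(kernel))  # 13: a radius of ceil(3 * 2) = 6 pixels on each side, plus the center
```

Note how the window size follows from σ, rather than being an independent parameter the user has to guess.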

Note that the shape in the first two plots is very different from that of a Gaussian bell curve. It is the slow convergence to zero at the tails that gives the Gaussian its desirable properties. I mention this here because it is not uncommon to find improperly implemented Gaussians. Because MATLAB’s Image Processing Toolbox parameterizes the Gaussian with two values instead of one (the kernel size as well as the σ), users get confused and fill in incompatible values. This week I ran into a paper that applied

> an 11x11 pixel Gaussian filter with standard deviation σ=25 pixels.

Yes, the authors in this paper used MATLAB to implement their algorithm. I have plotted such a filtering kernel (the one-dimensional version of it):

Does this look like a Gaussian to you? This was not in some obscure paper by some obscure scientists in some obscure university in some obscure country. This came from Duke University, ranked number 18 in the Times Higher Education’s World University Ranking. I mention this to reinforce how common these problems are.
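To see just how un-Gaussian that filter is, we can sample an 11-pixel window from a Gaussian with σ=25 ourselves. This is a quick NumPy sketch of my own, not the authors’ code:

```python
import numpy as np

# An 11-pixel window reaches only 5 pixels from the center, i.e. 0.2 sigma
# when sigma = 25. Over that range the Gaussian barely changes at all.
x = np.arange(-5, 6)
g = np.exp(-x**2 / (2 * 25.0**2))
g /= g.sum()
print(g.max() / g.min())  # about 1.02: nearly indistinguishable from a uniform kernel
```

The center weight is only about 2% larger than the edge weights, so this “Gaussian” is, for all practical purposes, a uniform filter.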

## Integer approximations

Next, let’s look at the `[1, 2, 1]/4` kernel that is sometimes referred to as a Gaussian kernel. Remember that outside of this 3-pixel window, the kernel has a value of 0. It is thus equivalent to `[0, 1, 2, 1, 0]/4`, a clearly triangular shape. A kernel with triangular weights is the convolution of a uniform kernel with itself. You can read all about the problems introduced by the uniform kernel in an earlier post. The triangular kernel fixes some of those problems, but is still not nearly as good at suppressing high frequencies as the Gaussian kernel. If you convolve the uniform kernel with itself many times, you approximate the Gaussian kernel; thus, a triangular kernel is a very rough approximation to the Gaussian. Here is a plot of a continuous-domain triangular kernel and samples corresponding to the `[1, 2, 1]/4` kernel:
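The relationship between the uniform and triangular kernels is easy to verify numerically (a small NumPy sketch of my own):

```python
import numpy as np

box = np.array([1.0, 1.0]) / 2   # 2-pixel uniform ("box") kernel
tri = np.convolve(box, box)      # box convolved with itself
print(tri)                       # the [1, 2, 1]/4 kernel: [0.25, 0.5, 0.25]
```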

The `[1, 4, 6, 4, 1]/16` kernel that the reader asked about is a different approximation to the Gaussian, using only small integer values. I plotted it here (black x), together with a Gaussian bell curve (σ=1) and a sampled Gaussian (circles):
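For a quick numerical comparison (again a NumPy sketch of my own), we can measure how far the `[1, 4, 6, 4, 1]/16` weights are from a Gaussian with σ=1 sampled over the same 5-pixel window:

```python
import numpy as np

binomial = np.array([1, 4, 6, 4, 1]) / 16
x = np.arange(-2, 3)
g = np.exp(-x**2 / 2.0)   # Gaussian with sigma = 1
g /= g.sum()              # normalized, like the binomial weights
# The largest per-tap difference is at the center, about 0.028 in absolute terms
print(np.abs(binomial - g).max())
```

The two are close but not identical: the binomial kernel is a genuine approximation, not a sampled Gaussian.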

Back in the day, integer computations were much faster than floating-point operations, and thus there was value in using this type of approximation. Nowadays, this might still hold on some hardware, but it is certainly no longer true on PCs, GPUs or even smartphones. I recommend that you sample a true Gaussian and stick to floating-point–valued kernels.