Posts Tagged ‘separable’

Separable convolutions

Thursday, August 19th, 2010

The convolution is an important tool in image processing. All linear filters are convolutions (like, for example, the Gaussian filter). One of the reasons linear filters are so prevalent is that they imitate physical systems. For example, an analog electric circuit (containing resistors, capacitors and inductors) can be described by a convolution. The projection of a cell on a slide, through the microscope’s optical systems, onto the CCD sensor, can be described by a convolution. Even the sampling and averaging that occur in the CCD can be described by a convolution.

Two properties of the convolution are quite interesting when looking for an efficient implementation:

  • The convolution is a multiplication in the Fourier domain: f(x)⊗h(x) ⇒ F(ω)⋅H(ω). This means that you can compute the convolution by applying the Fourier transform to both the image and the convolution kernel, multiplying the two results, then inverse transforming the result.
  • The convolution is associative: (f⊗h1)⊗h2 = f⊗(h1⊗h2). This post is about the repercussions of this property.
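Both properties are easy to verify numerically. Here is a small sketch in Python with NumPy and SciPy (illustrative code, not from the post; arrays and names are made up):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f = rng.random((32, 32))

# Property 1: circular convolution is multiplication in the Fourier domain.
# Embed a small kernel in an image-sized array (origin at the top-left).
k = rng.random((3, 3))
h = np.zeros_like(f)
h[:3, :3] = k
via_fft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
# Direct circular convolution: out[n] = sum over k of h[k] * f[n - k],
# which is a weighted sum of cyclically shifted copies of the image.
direct = sum(k[i, j] * np.roll(f, (i, j), axis=(0, 1))
             for i in range(3) for j in range(3))
assert np.allclose(via_fft, direct)

# Property 2: associativity. Convolving with h1 and then with h2 gives the
# same result as a single convolution with (h1 convolved with h2).
h1 = rng.random((5, 5))
h2 = rng.random((5, 5))
two_passes = convolve2d(convolve2d(f, h1), h2)
one_pass = convolve2d(f, convolve2d(h1, h2))
assert np.allclose(two_passes, one_pass)
```

Associativity is what makes separable filtering worthwhile: if a 2D kernel is the convolution (outer product) of two 1D kernels, you can apply the two cheap 1D passes instead of the one expensive 2D pass.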


Gaussian filtering with the Image Processing Toolbox

Tuesday, October 6th, 2009

Edit May 2018: Since publishing this post, the MATLAB Image Processing Toolbox has added the function imgaussfilt that correctly applies a Gaussian smoothing filter. For Gaussian derivatives, the recommendations here still apply.

If you don’t use DIPimage, you probably use MATLAB’s Image Processing Toolbox. This toolbox makes it really easy to do convolutions with a Gaussian in the wrong way. On three counts. The function fspecial is used to create a convolution kernel for a Gaussian filter. This kernel is 2D. That’s the first problem. The other two problems are given by the default values of its parameters: the default kernel size is [3 3], and the default value for σ (sigma) is 0.5. (more…)
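The gist of the complaint can be shown in a few lines of Python/NumPy (a sketch, not Toolbox code; gauss_kernel and gauss_filter are names I made up): a proper Gaussian filter samples a 1D kernel out to roughly 3σ and applies it separably along each axis, and a 3-sample window only captures the full Gaussian weight for very small σ.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gauss_kernel(sigma):
    # Sample the Gaussian out to 3*sigma, so almost nothing is truncated.
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def gauss_filter(img, sigma):
    # Separable filtering: one 1D convolution per image axis.
    g = gauss_kernel(sigma)
    out = img.astype(float)
    for ax in range(img.ndim):
        out = convolve1d(out, g, axis=ax, mode='reflect')
    return out

# A 3-sample window (the fspecial default) only holds the Gaussian's
# weight for tiny sigma; for larger sigma it truncates the kernel badly.
for sigma in (0.5, 1.0, 2.0):
    x = np.arange(-25, 26)
    g = np.exp(-x**2 / (2 * sigma**2))
    kept = g[np.abs(x) <= 1].sum() / g.sum()
    print(f"sigma={sigma}: {kept:.1%} of the weight inside a 3-sample window")
```

The kept fraction drops quickly as σ grows past 0.5, which is why the fixed [3 3] default is dangerous: increase σ without increasing the kernel size and you silently stop computing a Gaussian.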

Gaussian filtering

Saturday, December 6th, 2008

In my recent lectures on filtering I was trying to convey only one thing to my students: do not use the uniform filter, use the Gaussian! The uniform (or “box”) filter is very easy to implement, and hence often used as a smoothing filter. But the uniform filter is a very poor choice for smoothing: it simply does not suppress high frequencies strongly enough. And on top of that, it inverts some of the frequency bands it is supposed to suppress (its Fourier transform has negative values). There really is no excuse ever to use a uniform filter, considering there is a very fine alternative that is well behaved, perfectly isotropic, and separable: the Gaussian. Sure, it’s not a perfect low-pass filter either, but it is as close as a spatial filter can get.
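That the box filter inverts frequency bands while the Gaussian does not is easy to check numerically. A small sketch in Python/NumPy (illustrative; kernel widths are arbitrary):

```python
import numpy as np

n = 256
# A 7-sample uniform ("box") kernel, zero-padded to length n and
# shifted so it is symmetric about the origin.
box = np.zeros(n)
box[:7] = 1 / 7
box = np.roll(box, -3)

# A sampled, normalized Gaussian on centered coordinates.
x = np.arange(n)
x[x > n // 2] -= n
sigma = 2.0
gauss = np.exp(-x.astype(float)**2 / (2 * sigma**2))
gauss /= gauss.sum()

# Both kernels are symmetric, so their Fourier transforms are real.
B = np.real(np.fft.fft(box))
G = np.real(np.fft.fft(gauss))

print("box min response:", B.min())    # negative: those bands get inverted
print("gauss min response:", G.min())  # stays positive everywhere
```

The box filter’s response is a Dirichlet (periodic sinc) kernel with negative lobes, so the frequencies in those lobes come out with flipped sign; the Gaussian’s response decays monotonically and never goes negative.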

Because I recently found some (professionally written) code using Gaussian filtering in a rather awkward way, I realized that even some seasoned image analysis professionals are not familiar or comfortable with Gaussian filtering. Hence this short tutorial.


DIPimage 2.0 released

Wednesday, November 19th, 2008

Last week the new release of DIPimage and DIPlib was made available. The change list is pretty substantial, though there should be no real compatibility concerns. One of the most important changes is that, for both Windows and Linux, some image processing functionality can now use multithreading to make best use of multi-processor and multi-core systems. For example, all separable filters will use all available cores by default. (more…)

The distance transform, erosion and separability

Thursday, November 6th, 2008

David Coeurjolly from the Université de Lyon just gave a presentation here at my department. He discussed, among other things, a separable algorithm that computes the Euclidean distance transform. The distance transform is an operation that takes a binary image as input, and writes in each object pixel the distance to the nearest background pixel. All sorts of approximations exist, using various distance measures that approximate the Euclidean distance, because computing truly Euclidean distances is rather expensive. A separable algorithm, however, greatly reduces the computational cost.

David’s algorithm first computes the squared distance along each of the rows of the image, then modifies these distances by doing some operations along the columns. In higher-dimensional images you simply repeat this last step along the other dimensions. The operation that modifies these distance values sounded very much like a parabolic erosion to me, so I gave it a try.
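This is not David’s algorithm itself (his column pass is far cleverer than a brute-force minimum), but the separable idea can be sketched in Python/NumPy; the function name and the test image are mine, and SciPy’s exact EDT serves as a check:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edt_separable(binary):
    """Squared Euclidean distance transform, one dimension at a time."""
    n, m = binary.shape
    big = n + m  # larger than any possible distance in the image
    # Pass 1: 1D distance to the nearest background pixel along each row,
    # computed with a forward and a backward scan.
    d = np.full((n, m), big, dtype=float)
    for i in range(n):
        for j in range(m):
            if not binary[i, j]:
                d[i, j] = 0
            elif j > 0:
                d[i, j] = d[i, j - 1] + 1
        for j in range(m - 2, -1, -1):
            d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    # Pass 2: along each column, take the lower envelope of parabolas
    # (this is the parabolic erosion): out[i] = min over k of d[k]^2 + (i-k)^2.
    d2 = d**2
    out = np.empty_like(d2)
    k = np.arange(n)
    for j in range(m):
        out[:, j] = np.min(d2[None, :, j] + (k[:, None] - k[None, :])**2,
                           axis=1)
    return out

rng = np.random.default_rng(1)
img = rng.random((20, 20)) > 0.2  # objects = True, background = False
ref = distance_transform_edt(img)**2
assert np.allclose(edt_separable(img), ref)
```

The correctness argument is the one from the post: the row pass finds, for every pixel, the nearest background pixel within its own row, and the column pass then minimizes the squared row distance plus the squared vertical offset over all rows, which is exactly the minimum over all background pixels.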