Archive for the ‘tutorials’ Category

The Laplacian of Gaussian filter

Sunday, April 7th, 2019

The Laplacian of Gaussian filter (LoG) is quite well known, but there still exist many misunderstandings about it. In this post I will collect some of the things I wrote about it while answering questions on Stack Overflow and Signal Processing Stack Exchange.
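As a quick refresher, the LoG is the Laplacian of a Gaussian with standard deviation σ; differentiating the Gaussian twice and summing gives the closed form LoG(x,y) = (x² + y² - 2σ²)/σ⁴ · G(x,y). Here is a minimal C++ sketch that samples that expression into a discrete kernel. It only illustrates the definition, it is not code from the post, and the 3σ truncation is an arbitrary choice.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sample the continuous Laplacian of Gaussian on an integer grid.
// Differentiating the 2D Gaussian G(x,y) twice and summing gives
//   LoG(x,y) = (x^2 + y^2 - 2*sigma^2) / sigma^4 * G(x,y).
// Truncating the kernel at 3*sigma is an arbitrary choice for this sketch.
std::vector<double> logKernel(double sigma, int& size) {
    const double pi = 3.14159265358979323846;
    int radius = static_cast<int>(std::ceil(3 * sigma));
    size = 2 * radius + 1;
    std::vector<double> kernel(static_cast<std::size_t>(size) * size);
    double s2 = sigma * sigma;
    for (int y = -radius; y <= radius; ++y) {
        for (int x = -radius; x <= radius; ++x) {
            double r2 = double(x) * x + double(y) * y;
            double g = std::exp(-r2 / (2 * s2)) / (2 * pi * s2);
            kernel[static_cast<std::size_t>(y + radius) * size + (x + radius)] =
                (r2 - 2 * s2) / (s2 * s2) * g;
        }
    }
    return kernel;
}
```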


Filling an image with grid points

Tuesday, January 22nd, 2019

Several algorithms require a set of uniformly distributed points across the image. For example, superpixel algorithms typically start with a regular grid. A rectangular grid of points is trivial to draw into an image. One simply generates the set of points of a grid that covers the image, rounds those points to integer coordinates, and sets those pixels. A rotated grid is a bit more challenging: one needs to compute the bounds of the rotated grid such that the full image is covered. With grids other than rectangular (for example a hexagonal grid) the math gets a bit more complicated, but it can still be worked out. But how does one generalize such an algorithm to three dimensions? And to an arbitrary number of dimensions?
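For the trivial rectangular case, that procedure looks roughly like the sketch below (row-major 8-bit image; the spacing parameter and the centering of the grid are choices I made for this example, not necessarily what the full post does).

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Draw a rectangular grid of points into an image stored as a row-major
// buffer of width x height pixels: generate the grid points, round them to
// integer coordinates, and set those pixels.
void fillRectangularGrid(std::vector<unsigned char>& image,
                         int width, int height, double spacing) {
    // Offset the grid so it is (roughly) centered in the image.
    double offsetX = std::fmod(width - 1, spacing) / 2;
    double offsetY = std::fmod(height - 1, spacing) / 2;
    for (double y = offsetY; y < height; y += spacing) {
        for (double x = offsetX; x < width; x += spacing) {
            int xi = static_cast<int>(std::round(x));
            int yi = static_cast<int>(std::round(y));
            if (xi >= 0 && xi < width && yi >= 0 && yi < height) {
                image[static_cast<std::size_t>(yi) * width + xi] = 255;
            }
        }
    }
}
```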

I played around with this task for a bit and came up with a very simple, general solution.


Color quantization, minimizing variance, and k-d trees

Saturday, May 26th, 2018

A recent question on Stack Overflow really piqued my curiosity. The question was about why MATLAB’s rgb2ind function is so much faster at finding a color table than k-means clustering. It turns out that the algorithm in rgb2ind, which they call “Minimum Variance Quantization”, is not documented anywhere. A different page in the MATLAB documentation does give away a little bit about this algorithm: it shows a partitioning of the RGB cube with what looks like a k-d tree.

So I spent quite a bit of time thinking about how a k-d tree, minimizing variance, and color quantization could intersect, and ended up devising an algorithm that works quite well. It is implemented in DIPlib 3.0 as dip::MinimumVariancePartitioning (source code). I’d like to describe the algorithm here in a bit of detail.
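To give a flavor of how those three things can come together, here is a rough sketch of the general principle only, not the actual dip::MinimumVariancePartitioning algorithm (nor rgb2ind): keep a set of boxes (the leaves of a k-d tree), repeatedly split the box whose pixels have the largest sum of squared deviations along one channel, splitting at the mean of that channel, and use the mean color of each final box as a color table entry.

```cpp
#include <array>
#include <cstddef>
#include <utility>
#include <vector>

using Color = std::array<float, 3>;  // R, G, B

struct Box {
    std::vector<Color> pixels;
    // Mean of one channel over the pixels in this box.
    double mean(int ch) const {
        double m = 0;
        for (auto const& c : pixels) m += c[ch];
        return m / pixels.size();
    }
    // Sum of squared deviations from the mean for one channel.
    double ssd(int ch) const {
        double m = mean(ch), s = 0;
        for (auto const& c : pixels) s += (c[ch] - m) * (c[ch] - m);
        return s;
    }
};

// Build a color table of n entries by recursively splitting boxes, k-d tree
// style, always splitting where the variance contribution is largest.
std::vector<Color> quantize(std::vector<Color> const& image, std::size_t n) {
    std::vector<Box> boxes(1);
    boxes[0].pixels = image;
    while (boxes.size() < n) {
        // Pick the box and channel with the largest sum of squared deviations.
        std::size_t best = 0;
        int bestCh = 0;
        double bestSsd = 0;
        for (std::size_t b = 0; b < boxes.size(); ++b) {
            for (int ch = 0; ch < 3; ++ch) {
                double s = boxes[b].ssd(ch);
                if (s > bestSsd) { bestSsd = s; best = b; bestCh = ch; }
            }
        }
        if (bestSsd == 0) break;  // nothing left worth splitting
        // Split that box at the mean of the chosen channel.
        double m = boxes[best].mean(bestCh);
        Box low, high;
        for (auto const& c : boxes[best].pixels) {
            (c[bestCh] < m ? low : high).pixels.push_back(c);
        }
        boxes[best] = std::move(low);
        boxes.push_back(std::move(high));
    }
    // The representative color of each box is its mean.
    std::vector<Color> table;
    for (auto const& b : boxes) {
        table.push_back({static_cast<float>(b.mean(0)),
                         static_cast<float>(b.mean(1)),
                         static_cast<float>(b.mean(2))});
    }
    return table;
}
```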


Union-Find

Wednesday, March 21st, 2018

The Union-Find data structure is well known in the image processing community because of its use in efficient connected component labeling algorithms. It is also an important part of Kruskal’s algorithm for the minimum spanning tree. It is used to keep track of equivalences: are these two objects equivalent/connected/joined? You can think of it as a forest of trees. The nodes in the trees are the objects. If two nodes are in the same tree, they are equivalent. It is called Union-Find because it is optimized for those two operations, Union (joining two trees) and Find (determining if two objects are in the same tree). Both operations take (essentially) constant amortized time: the true bound is O(α(n)), where α is the inverse Ackermann function, which grows so slowly that it is less than 5 for any number you can write down.
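A bare-bones C++ version of the structure looks something like this (union by size and path halving are the standard optimizations I picked for this sketch; together they give the near-constant-time behavior):

```cpp
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Union-Find (disjoint-set forest): each object is a node, each tree is an
// equivalence class, and the root of a tree is its representative.
class UnionFind {
public:
    explicit UnionFind(std::size_t n) : parent(n), size(n, 1) {
        std::iota(parent.begin(), parent.end(), std::size_t{0});  // every node starts as its own root
    }
    // Find: return the root of the tree containing i, flattening the path as we go.
    std::size_t find(std::size_t i) {
        while (parent[i] != i) {
            parent[i] = parent[parent[i]];  // path halving
            i = parent[i];
        }
        return i;
    }
    // Union: join the trees containing i and j (no-op if already joined).
    void unite(std::size_t i, std::size_t j) {
        std::size_t ri = find(i), rj = find(j);
        if (ri == rj) return;
        if (size[ri] < size[rj]) std::swap(ri, rj);  // attach the smaller tree to the larger one
        parent[rj] = ri;
        size[ri] += size[rj];
    }
    // Are i and j in the same tree (i.e., equivalent)?
    bool connected(std::size_t i, std::size_t j) { return find(i) == find(j); }
private:
    std::vector<std::size_t> parent, size;
};
```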

Here I’ll describe the data structure and show how its use can significantly speed up certain types of operations.


Having fun with C++11 — how to pass flags to a function

Wednesday, January 4th, 2017

I’ve never been very active posting here, and since I left academia it has been even slower. All of 2016 passed without a single post! Since I now work for a company, it’s become more difficult for me to post about the fun little things that I work with. Nevertheless, I wanted to share a C++ construct that I came up with to pass flags (a collection of yes/no options) to a function.
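To give the gist of the problem, here is one common C++11 way to pass flags, a scoped enum with an overloaded bitwise OR. This is just a baseline for comparison, not necessarily the construct the post describes.

```cpp
#include <cstdint>

// A scoped enum as the flag type: flags can be combined with |, but cannot be
// accidentally mixed with plain integers or with flags of another function.
// (Just one common C++11 idiom; the post's construct may look different.)
enum class Option : std::uint32_t {
    None      = 0,
    Verbose   = 1u << 0,
    DryRun    = 1u << 1,
    Recursive = 1u << 2,
};

constexpr Option operator|(Option a, Option b) {
    return static_cast<Option>(static_cast<std::uint32_t>(a) |
                               static_cast<std::uint32_t>(b));
}

constexpr bool isSet(Option flags, Option flag) {
    return (static_cast<std::uint32_t>(flags) &
            static_cast<std::uint32_t>(flag)) != 0;
}

void process(Option flags) {
    if (isSet(flags, Option::Verbose)) {
        // ... report progress ...
    }
}

// Usage: process(Option::Verbose | Option::DryRun);
```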
