Archive for the ‘algorithms’ Category

The Laplacian of Gaussian filter

Sunday, April 7th, 2019

The Laplacian of Gaussian (LoG) filter is quite well known, but there are still many misunderstandings about it. In this post I will collect some of the things I wrote about it while answering questions on Stack Overflow and Signal Processing Stack Exchange.
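As a quick reminder of what this filter looks like (this is my own illustrative sketch, not code from the post), the LoG kernel can be sampled directly from its analytic expression; the half-width of 3σ used here is just a common rule of thumb:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sample the analytic Laplacian of Gaussian into a square kernel of
// size (2*ceil(3*sigma)+1) x (2*ceil(3*sigma)+1), stored row-major.
std::vector<double> logKernel(double sigma) {
   const double pi = 3.141592653589793;
   int halfWidth = static_cast<int>(std::ceil(3.0 * sigma));
   int size = 2 * halfWidth + 1;
   std::vector<double> kernel(static_cast<std::size_t>(size) * size);
   double s2 = sigma * sigma;
   for (int y = -halfWidth; y <= halfWidth; ++y) {
      for (int x = -halfWidth; x <= halfWidth; ++x) {
         double r2 = static_cast<double>(x * x + y * y);
         // LoG(x,y) = 1/(pi*sigma^4) * (r^2/(2*sigma^2) - 1) * exp(-r^2/(2*sigma^2))
         kernel[static_cast<std::size_t>(y + halfWidth) * size + (x + halfWidth)] =
               (r2 / (2.0 * s2) - 1.0) * std::exp(-r2 / (2.0 * s2)) / (pi * s2 * s2);
      }
   }
   return kernel;
}
```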


Filling an image with grid points

Tuesday, January 22nd, 2019

Several algorithms require a set of points uniformly distributed across the image. For example, superpixel algorithms typically start with a regular grid. A rectangular grid of points is trivial to draw into an image: one simply generates the points of a grid that covers the image, rounds those points to integer coordinates, and sets those pixels. A rotated grid is a bit more challenging, because one needs to compute the bounds of the rotated grid such that the full image is covered. With grids other than rectangular ones (for example a hexagonal grid) the math gets a bit more complicated, but it can still be worked out. But how does one generalize such an algorithm to three dimensions? And to an arbitrary number of dimensions?
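Just to make the trivial rectangular case concrete, here is a minimal sketch (my own illustration, assuming a grayscale image stored as a row-major std::vector<unsigned char>): generate the grid coordinates at a given spacing, round them to integers, and set those pixels.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Draw a rectangular grid of points into a row-major image buffer.
// The half-spacing offset centers the grid cells within the image.
void drawGridPoints(std::vector<unsigned char>& image, int width, int height, double spacing) {
   double offset = spacing / 2.0;
   for (double y = offset; y < height; y += spacing) {
      for (double x = offset; x < width; x += spacing) {
         int xi = static_cast<int>(std::round(x));
         int yi = static_cast<int>(std::round(y));
         if (xi >= 0 && xi < width && yi >= 0 && yi < height) {
            image[static_cast<std::size_t>(yi) * width + xi] = 255;
         }
      }
   }
}
```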

I played around with this task for a bit and came up with a very simple, general solution.


Color quantization, minimizing variance, and k-d trees

Saturday, May 26th, 2018

A recent question on Stack Overflow really piqued my curiosity. The question was about why MATLAB’s rgb2ind function is so much faster at finding a color table than k-means clustering. It turns out that the algorithm in rgb2ind, which they call “Minimum Variance Quantization”, is not documented anywhere. A different page in the MATLAB documentation does give away a little bit about this algorithm: it shows a partitioning of the RGB cube with what looks like a k-d tree.

So I spent quite a bit of time thinking about how a k-d tree, minimizing variance, and color quantization could intersect, and ended up devising an algorithm that works quite well. It is implemented in DIPlib 3.0 as dip::MinimumVariancePartitioning (source code). I’d like to describe the algorithm here in a bit of detail.
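The algorithm itself is described in the full post and in the DIPlib source; purely as an illustration of the general idea, here is a hypothetical sketch of one split step of such a k-d-tree-style partition: pick the color channel with the largest variance and split the current box at the mean of that channel. This is a simplification for illustration, not the dip::MinimumVariancePartitioning code.

```cpp
#include <array>
#include <vector>

struct RGB { double r, g, b; };

// One k-d-tree-style split: find the channel with the largest variance
// and divide the set of colors at that channel's mean.
void splitOnLargestVariance(std::vector<RGB> const& colors,
                            std::vector<RGB>& lower, std::vector<RGB>& upper) {
   if (colors.empty()) { return; }
   std::array<double, 3> mean{0, 0, 0}, var{0, 0, 0};
   for (RGB const& c : colors) {
      mean[0] += c.r; mean[1] += c.g; mean[2] += c.b;
   }
   for (double& m : mean) { m /= static_cast<double>(colors.size()); }
   for (RGB const& c : colors) {
      var[0] += (c.r - mean[0]) * (c.r - mean[0]);
      var[1] += (c.g - mean[1]) * (c.g - mean[1]);
      var[2] += (c.b - mean[2]) * (c.b - mean[2]);
   }
   int channel = 0;
   if (var[1] > var[channel]) { channel = 1; }
   if (var[2] > var[channel]) { channel = 2; }
   for (RGB const& c : colors) {
      double v = channel == 0 ? c.r : channel == 1 ? c.g : c.b;
      (v < mean[channel] ? lower : upper).push_back(c);
   }
}
```

A full quantizer would apply this step recursively, always splitting the box with the largest variance, until the desired number of colors is reached.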


Union-Find

Wednesday, March 21st, 2018

The Union-Find data structure is well known in the image processing community because of its use in efficient connected component labeling algorithms. It is also an important part of Kruskal’s algorithm for the minimum spanning tree. It is used to keep track of equivalences: are these two objects equivalent/connected/joined? You can think of it as a forest of trees. The nodes in the trees are the objects. If two nodes are in the same tree, they are equivalent. It is called Union-Find because it is optimized for those two operations: Union (joining two trees) and Find (finding the root of the tree that an object belongs to, which tells you whether two objects are in the same tree). Both operations are performed in (essentially) constant time (more precisely, amortized O(α(n)), where α is the inverse Ackermann function, which grows so slowly that it is less than 5 for any number you can write down).
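To make those two operations concrete, here is a minimal, self-contained C++ sketch (my own illustration, not the implementation discussed in the post), with the two standard optimizations that give the near-constant amortized cost mentioned above: path compression and union by size.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal Union-Find (disjoint-set forest) with path compression and union by size.
class UnionFind {
public:
   explicit UnionFind(std::size_t n) : parent(n), size(n, 1) {
      for (std::size_t i = 0; i < n; ++i) { parent[i] = i; }
   }
   // Find the root of the tree containing x, flattening the path as we go.
   std::size_t find(std::size_t x) {
      while (parent[x] != x) {
         parent[x] = parent[parent[x]];  // path halving
         x = parent[x];
      }
      return x;
   }
   // Join the trees containing a and b; returns the root of the merged tree.
   std::size_t unite(std::size_t a, std::size_t b) {
      a = find(a);
      b = find(b);
      if (a == b) { return a; }
      if (size[a] < size[b]) { std::swap(a, b); }
      parent[b] = a;      // attach the smaller tree below the larger one
      size[a] += size[b];
      return a;
   }
private:
   std::vector<std::size_t> parent;
   std::vector<std::size_t> size;
};
```

Checking whether two objects a and b are equivalent is then simply find(a) == find(b).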

Here I’ll describe the data structure and show how its use can significantly speed up certain types of operations.


Computing Feret diameters from the convex hull

Monday, February 13th, 2012

Some time ago I wrote about how to compute the Feret diameters of a 2D object based on the chain code of its boundary. The diameters we computed were the longest and shortest projections of the object. The shortest projection, or smallest Feret diameter, is equivalent to the size measured when physically passing objects through sieves (i.e. sieve analysis, as is often done, e.g., with rocks). The longest projection, or largest Feret diameter, is useful as an estimate of the length of elongated objects.

The algorithm I described then simply rotated the object in two-degree intervals, and computed the projection length at each orientation. The problem with this algorithm is that the width estimated for very elongated objects is not very accurate: the orientation that produces the shortest projection could be up to 1 degree away from the optimal orientation, meaning that the estimated width can be up to length⋅sin(π/180) too large. This doesn’t sound like much, but if the aspect ratio is 100, meaning the length is 100 times the width, we can overestimate the width by up to 175%!
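To see where that 175% comes from, here is a tiny sketch (names and values are my own, for illustration) that computes the worst-case relative overestimate of the width for a thin rectangle, given the aspect ratio and the angular sampling step:

```cpp
#include <cmath>
#include <iostream>

int main() {
   const double pi = 3.141592653589793;
   double aspectRatio = 100.0;            // length / width
   double angleStep = 2.0 * pi / 180.0;   // 2-degree sampling, as in the post
   double worstOffset = angleStep / 2.0;  // at most half a step from the optimum
   // For a thin rectangle of length L and width W, the projection at an angle
   // theta away from the optimal orientation is L*sin(theta) + W*cos(theta).
   // The relative overestimate of the width is therefore:
   double relativeError = aspectRatio * std::sin(worstOffset) + std::cos(worstOffset) - 1.0;
   std::cout << "worst-case width overestimate: " << relativeError * 100.0 << "%\n";
   // Prints roughly 174.5%, the ~175% mentioned above.
}
```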
