Posts Tagged ‘pixel’

More chain code measures

Wednesday, October 13th, 2010

Last month I wrote a post showing how to calculate the perimeter of an object using its chain code. In this post I want to review several more measures that can easily be obtained from the chain code: the minimum bounding box; the object’s orientation, maximum length and minimum width; and the object’s area. The bounding box and area are actually more easily computed from the binary image, but if one needs to extract the chain code anyway (for example, to compute the perimeter), then it is quite efficient to use the chain code to compute these measures, rather than the full image. To obtain the chain code, one can use the algorithm described in the previous post, or the DIPimage function dip_imagechaincode.
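As a taste of how such measures fall out of the chain code, here is a minimal sketch in Python (my own helper names and step table, not DIPimage’s) that computes the bounding box and an area estimate in a single pass over the codes:

```python
# Hypothetical sketch: (dx, dy) steps for codes 0-7, with x to the
# right and y up, matching the convention in the posts below.
STEPS = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
         4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def bounding_box_and_area(start, chain):
    """Return ((xmin, ymin, xmax, ymax), area) from one pass over the chain.

    The area is the shoelace area of the polygon through the pixel
    centres -- close to, but not identical to, the pixel count.
    """
    x, y = start
    xmin = xmax = x
    ymin = ymax = y
    area = 0.0
    for code in chain:
        dx, dy = STEPS[code]
        # Shoelace term for the edge (x, y) -> (x + dx, y + dy):
        # x*(y + dy) - (x + dx)*y = x*dy - y*dx
        area += x * dy - y * dx
        x += dx
        y += dy
        xmin, xmax = min(xmin, x), max(xmax, x)
        ymin, ymax = min(ymin, y), max(ymax, y)
    return (xmin, ymin, xmax, ymax), abs(area) / 2.0
```

Note that the shoelace area measures the polygon connecting pixel centres; the pixel-count area differs from it by a boundary-dependent correction.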


How to obtain the chain code

Monday, September 27th, 2010

In the previous post I discussed simple techniques to estimate the boundary length of a binarized object. These techniques are based on the chain code. This post will detail how to obtain such a chain code. The algorithm is quite simple, but might not be trivial to understand. Future posts will discuss other measures that can be derived from such a chain code.

In short, the chain code is a way to represent a binary object by encoding only its boundary. The chain code is composed of a sequence of numbers between 0 and 7. Each number represents the transition between two consecutive boundary pixels, 0 being a step to the right, 1 a step diagonally right/up, 2 a step up, etc. In the post Measuring boundary length, I gave a little more detail about the chain code. Worth repeating here from that post is the figure containing the directions associated with each code:
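As a small sketch of this encoding (my own names; the step table assumes x to the right and y up, matching the figure), the codes can be turned back into pixel coordinates like so:

```python
# The eight chain codes as (dx, dy) steps: 0 is right, 1 is
# diagonally right/up, 2 is up, and so on counterclockwise.
CHAIN_STEPS = {
    0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
    4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1),
}

def walk(start, chain):
    """Follow a chain code from a start pixel, yielding each boundary pixel."""
    x, y = start
    yield (x, y)
    for code in chain:
        dx, dy = CHAIN_STEPS[code]
        x, y = x + dx, y + dy
        yield (x, y)
```

For a closed boundary the walk ends where it started; for example, the chain [0, 2, 4, 6] traces a tiny square back to its start pixel.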

Chain codes

The chain code thus has as many elements as there are boundary pixels. Note that the position of the object is lost: the chain code encodes the shape of the object, not its location. Remembering the coordinates of the first pixel in the chain is enough to solve that. Also note that the chain code encodes a single, solid object. If the object has two disjoint parts, or has a hole, the chain code will not be able to describe the full object.
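A common way to extract such a chain code is Moore-neighbour boundary tracing. The sketch below is one standard variant of that scheme, written in array row/column coordinates (so “up” is a decreasing row index); it is an assumption on my part and may differ in detail from the algorithm described in the post:

```python
import numpy as np

# (dr, dc) offsets for codes 0-7 in (row, col) coordinates; code 0 is
# a step right, codes increase counterclockwise on screen, and since
# row indices grow downward, "up" (code 2) is dr = -1.
STEPS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
         (0, -1), (1, -1), (1, 0), (1, 1)]

def trace_boundary(img):
    """Trace the outer boundary of the single object in a binary array.

    Returns (start, chain): the (row, col) of the starting boundary
    pixel and the list of chain codes. A sketch only; assumes one
    solid 8-connected object.
    """
    rows, cols = np.nonzero(img)
    start = (int(rows[0]), int(cols[0]))   # topmost, then leftmost pixel
    chain = []
    cur, direction = start, 7
    first_move = None
    while True:
        # Search the 8 neighbours counterclockwise, beginning just past
        # the direction we arrived from.
        begin = (direction + 7) % 8 if direction % 2 == 0 else (direction + 6) % 8
        for i in range(8):
            d = (begin + i) % 8
            nr = cur[0] + STEPS[d][0]
            nc = cur[1] + STEPS[d][1]
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1] and img[nr, nc]:
                break
        else:
            return start, []               # isolated pixel: empty chain
        nxt = (nr, nc)
        # Stop once we are back at the start and about to repeat the
        # very first move, so the boundary is traced exactly once.
        if cur == start and nxt == first_move:
            return start, chain
        chain.append(d)
        if first_move is None:
            first_move = nxt
        cur, direction = nxt, d
```

On a 2×2 square of foreground pixels this yields the four-element chain [6, 0, 2, 4]: down, right, up, left around the object.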


Measuring boundary length

Tuesday, September 14th, 2010

Oftentimes we segment an image to find objects of interest, and then measure these objects: their area, their perimeter, their aspect ratio, and so on. Measuring the area is accomplished simply by counting the number of pixels. But measuring the perimeter is not as simple. If we simply count the number of boundary pixels, we seriously underestimate the boundary length; that is just not a good method. A method only slightly more complex can produce an unbiased estimate of the boundary length, and I will show how it works in this post. There exist several much more complex methods that can further improve this estimate under certain assumptions. However, these are too complex to be any fun. I’ll leave those as an exercise to the reader. 🙂

Because we will examine only the boundary of the object, the chain code is the ideal representation. It encodes the boundary of the object as a sequence of steps, from pixel to pixel, all the way around the object, reducing the binary image to a simple sequence of numbers. In future posts I’ll explain a simple algorithm to obtain such a chain code, and show how to use chain codes to obtain other measures. In this post we’ll focus on how to use them to measure boundary length.
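To preview the idea: a naive estimate weights straight steps 1 and diagonal steps √2, which systematically overestimates the length of straight edges at random orientations. A classic correction (this specific choice of weights is my assumption; the post’s exact method may differ) scales both weights by π/8·(1+√2) ≈ 0.948:

```python
import math

def perimeter_from_chain(chain):
    """Estimate boundary length from a chain code.

    Straight (even) steps count as ~0.948 and diagonal (odd) steps as
    ~1.340, removing the bias of the naive 1 / sqrt(2) weights for
    straight boundaries at random orientations.
    """
    n_even = sum(1 for c in chain if c % 2 == 0)   # axis-aligned steps
    n_odd = len(chain) - n_even                    # diagonal steps
    k = math.pi / 8 * (1 + math.sqrt(2))           # ~0.948
    return k * (n_even + math.sqrt(2) * n_odd)
```

For a tiny square traced by [0, 2, 4, 6], the naive estimate is 4 and the corrected estimate is 4 × 0.948 ≈ 3.79.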


Why do we keep using the word “pixel”?

Thursday, January 8th, 2009

According to Wikipedia, the ultimate source of knowledge, the word pixel comes from “picture element.” This means that a pixel is a part of a picture, and yet everybody I know keeps using that term to refer to a part of an image. Because it is “image analysis” that we do, right? Not “picture analysis”? Of course, this is just an insignificant detail that I’m blowing way out of proportion. No, my real beef with the word pixel is more complex.

The word pixel seems to be used only in the context of 2D images. For 3D images we have a different word: voxel. So if I have a 3D image, and take one 2D slice out of it by selecting a set of voxels, these voxels all of a sudden, magically, become pixels! And what happens when you record a multi-spectral volumetric image? Or a volumetric time series? What do you call the elements of a 4D, 5D or 10D image? Dean et al. use the word imel (for image element) in their ICS file format specification (P. Dean et al., “Proposed standard for image cytometry data files”, Cytometry 11(5):561-569, 1990, DOI:10.1002/cyto.990110502). Yes, it’s more general. Yes, it’s more awkward. And yes, it still hides the fact that the images that we analyse are sampled and discretised representations of some continuous reality. When we sample a function we obtain samples, not pixels. The data sets that we analyse are collections of samples. A digital image is composed of samples, whether it be a 2D image, a 3D image or a 10D image. Only after you paint a little rectangle on the screen with the color of your sample does it become a pixel.

PS: I’m suggesting the word lixel for samples in a 1D signal. You heard it here first!