Thank you very much for your answer. I’ll try your suggestions as well.

For the moment, I tried a slightly anisotropic sampling and a very anisotropic one (dx/dz = 0.85 and 5, respectively) with the rotating square and the circles with increasing diameter.

I tested the modified simple method I posted before, the original Vossepoel and Smeulders corner count, and then optimized the parameters in the corner count method, considering different weights for corners and for horizontal, vertical, and diagonal chain codes (4 parameters in total). This was done for each specific pixel sampling.

The modified simple method performed decently in both cases. Vossepoel and Smeulders only worked for the slightly anisotropic sampling. The optimized corner count parameters only worked for their specific grid.

]]>Mónica,

I have not seen an implementation of the corner count method that works with anisotropic sampling. Your approach seems sound. You might need to count different types of corners depending on their orientation? Not sure.

There is another way to estimate perimeter that would be much simpler to adapt to anisotropic sampling. This method is only good if the object perimeter is smooth enough (i.e. has low curvature):

1. Obtain a polygonal representation of the binary object boundary.

2. Smooth the polygon by applying a Gaussian filter to the array of x coordinates, and to the array of y coordinates (separately). A sigma of 2 here tends to give a polygon that matches the binary shape well without showing much of the staircase effect. But if the shape has high curvature, it will be distorted.

3. Compute the length of the polygon by adding the Euclidean distance from vertex to vertex. Here is where you put the *a* and *b* weights in.
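Steps 2 and 3 can be sketched in a few lines of Python with NumPy/SciPy (not DIPlib; the function and variable names here are my own, purely illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_perimeter(x, y, a=1.0, b=1.0, sigma=2.0):
    """Smooth a closed polygon and sum weighted edge lengths.

    x, y  : arrays of polygon vertex coordinates (in pixels)
    a, b  : physical pixel sizes along x and y
    sigma : Gaussian smoothing parameter (in vertices)
    """
    # Smooth each coordinate array separately; mode='wrap' because the
    # polygon is closed.
    xs = gaussian_filter1d(np.asarray(x, dtype=float), sigma, mode='wrap')
    ys = gaussian_filter1d(np.asarray(y, dtype=float), sigma, mode='wrap')
    # Weighted Euclidean distance from vertex to vertex, closing the loop.
    dx = a * np.diff(np.append(xs, xs[0]))
    dy = b * np.diff(np.append(ys, ys[0]))
    return float(np.sum(np.hypot(dx, dy)))
```

For a densely sampled circle of radius r this gives a value close to 2πr; as noted above, the Gaussian smoothing shrinks high-curvature shapes slightly.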

I recommend obtaining the polygonal representation using the ideas presented by Steve Eddins in this blog post. DIPlib has an implementation of this: `dip::ChainCode::Polygon()` (a function that converts a chain code to such a polygon). The returned object can be smoothed with the `dip::Polygon::Smooth()` function. To get the chain code object, use `dip::GetImageChainCodes()`. In DIPimage (the MATLAB toolbox), the function `traceobjects` does all of that for you.

And, of course, the obvious simple thing to do instead of all of that is to resample the original image so that it is isotropic (it’s important to resample the gray-scale or color image before segmentation, to get the best results).
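As a sketch of that resampling step (using SciPy rather than DIPlib; `make_isotropic` is my own illustrative name, and `zoom` interpolates, which is why it should be applied to the gray-scale image before thresholding):

```python
import numpy as np
from scipy.ndimage import zoom

def make_isotropic(image, pixel_size):
    """Resample a 2D gray-scale image so its pixels become square.

    pixel_size : (size_y, size_x), the physical size of one pixel.
    The image is scaled up along the coarser-sampled axis, so no
    information is discarded.
    """
    smallest = min(pixel_size)
    factors = (pixel_size[0] / smallest, pixel_size[1] / smallest)
    return zoom(image, factors, order=3)  # cubic interpolation
```

For example, a 20×10 image with pixel size (1.0, 2.0) comes out as 20×20 with square pixels.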

]]>Thank you for your blog. It is very interesting. Right now I am trying to estimate the perimeter of a rounded figure assuming rectangular pixels instead of square ones. If my pixel is a × b, for the simple method I would consider

p = a*count(0 or 4) + b*count(2 or 6) + count(odd)*sqrt(a² + b²)
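In code, that weighted count might look like the following sketch (assuming the usual 8-connected chain code convention where codes 0 and 4 are horizontal, 2 and 6 vertical, and odd codes diagonal):

```python
import math

def simple_perimeter(chain, a, b):
    """Perimeter of a boundary chain code with rectangular pixels a x b.

    Horizontal steps (codes 0, 4) contribute a, vertical steps (codes
    2, 6) contribute b, and diagonal steps (odd codes) contribute
    sqrt(a^2 + b^2).
    """
    diag = math.hypot(a, b)
    return sum(a if c in (0, 4) else b if c in (2, 6) else diag
               for c in chain)
```

For a chain code tracing an axis-aligned rectangle of 3 horizontal by 2 vertical steps, this gives 6a + 4b, as expected.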

To consider corners as well, my idea is to modify the Vossepoel and Smeulders equation. I will try to post my results once I have them. In the meantime, are you aware of any other method for this (rectangular pixels)?

]]>I optimized only this value, keeping the others constant.

My idea is that a larger number of non-corner points indicates straight lines, and this results in errors because real-world objects may have irregular shapes that deviate from squares, rectangles, etc.

Radhika,

Feel free to upload your graph to one of the free image hosting services (e.g. imgur.com) and link it here. Two things:

– Did you optimize this value based on the square shapes I use in this example here, or did you consider other shapes as well? I recommend you look through the Vossepoel and Smeulders paper to see how they did their optimization, and replicate that.

– Did you optimize only this new value, keeping the others constant? You should consider optimizing all 4 constants in your equation simultaneously.

One thing that just occurred to me is that count(noncorner) should be equal to the total count minus count(corner), shouldn’t it? If so, this count cannot add any new information. I also notice that the new constant has zeros in the first three decimal places, which is the precision to which the other constants have been rounded. You might simply have recovered some of the precision lost by rounding the other constants?

]]>perimeter = count(even) × 0.980 + count(odd) × 1.406 − count(corner) × 0.091 + count(noncorner) × 0.000132
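Evaluating this equation from the four counts is straightforward; a sketch (the counts themselves must be extracted from the chain code, e.g. as Vossepoel and Smeulders describe):

```python
def modified_perimeter(n_even, n_odd, n_corner, n_noncorner):
    """Modified Vossepoel-Smeulders estimate from the four counts."""
    return (n_even * 0.980 + n_odd * 1.406
            - n_corner * 0.091 + n_noncorner * 0.000132)
```

For example, an axis-aligned 10×10 square boundary has 40 even codes, no odd codes, 4 corners, and 36 non-corner points, giving an estimate of about 38.84.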

where noncorner is the number of times non-corner points occur. I have a graph which shows that the absolute error decreases with this modified equation. Is there any way I can attach the graph here? ]]>

Radha,

That’s an interesting thought, I hadn’t considered that before.

If you have a sequence of one repeated code, you have a straight line along one of the axes or at 45 degrees from an axis (e.g. 0,0,0,0,0 or 1,1,1,1,1). Taking these into account could improve quantification for shapes with straight edges aligned with the sampling grid (such as rectangles, diamonds, and octagons). But for lines at any other orientation you would get more complicated sequences, which are difficult to identify.

For generic shapes with long straight edges at arbitrary angles, an approach could be to record the outline as a polygon (i.e. keep the pixel’s coordinates rather than encoding as a chain code), and simplify this polygon using the Douglas–Peucker algorithm. The allowed error should then be half a pixel (so that sequences of chain codes like 0,1,0,1,0,1,0,1 or 3,4,3,4,3,4,3,4 become a single polygon edge). It is not clear to me that this would result in a correct object outline, but it *might* produce a more precise result. You’d have to test to make sure this is actually the case. Let us know if you do!
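A minimal sketch of that idea: a plain recursive Douglas–Peucker simplification with the tolerance set to half a pixel (not the DIPlib implementation; the names here are illustrative):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg = math.hypot(dx, dy)
    if seg == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / seg

def douglas_peucker(points, epsilon=0.5):
    """Simplify a polyline, keeping points that deviate more than epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the end points.
    dists = [_point_line_dist(p, points[0], points[-1])
             for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] > epsilon:
        left = douglas_peucker(points[:imax + 1], epsilon)
        right = douglas_peucker(points[imax:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

With this tolerance, the staircase of pixel coordinates produced by a chain code like 0,1,0,1 collapses to a single polygon edge, since every vertex lies within half a pixel of the chord.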

Thanks for the information provided. I have one question: can I modify the equation provided by Vossepoel and Smeulders so that it takes into account the number of times the same chain code occurs in sequence? ]]>

Connor,

That paper [Vossepoel and Smeulders, 1982] uses *n* as the total number of chain codes, and *m* as the number of odd codes. They write the perimeter as *aₙ·n* + *aₘ·m*, with *aₙ* = 0.980 and *aₘ* = 0.426. Since every odd code is counted in both *n* and *m*, the effective weight of an odd code is 0.980 + 0.426 = 1.406.

Compare to the Euclidean case, where even codes get weight 1 and odd codes get √2. An optimized method should have similar values. If you were to give odd codes a total weight of only 0.426, you would be undervaluing their contribution significantly: why would a diagonal step be shorter than a horizontal step?
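The equivalence of the two parameterizations is easy to check numerically: with *n* = even + odd and *m* = odd, a weight of 0.980 on *n* plus 0.426 on *m* gives each odd code 0.980 + 0.426 = 1.406 in total. For instance:

```python
def per_total_and_odd(n_total, n_odd):
    """Paper's form: weight on the total count plus an extra for odd codes."""
    return 0.980 * n_total + 0.426 * n_odd

def per_even_and_odd(n_even, n_odd):
    """Equivalent form: separate weights for even and odd codes."""
    return 0.980 * n_even + 1.406 * n_odd

# The two forms agree for any pair of counts, e.g.:
n_even, n_odd = 30, 12
assert abs(per_total_and_odd(n_even + n_odd, n_odd)
           - per_even_and_odd(n_even, n_odd)) < 1e-9
```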

]]>Isn’t the coefficient for ‘odd’ orientations 0.426 instead of 1.406? I’m referencing the equation on page 362 of Vossepoel and Smeulders. Let me know your thoughts if you get a chance; I’m trying to analyze microparticle images in which the particle radius is approximately 6 pixels.

Cheers! Connor 🙂

]]>