Feel free to upload your graph to one of the free image hosting services (e.g. imgur.com) and link it here. Two things:

– Did you optimize this value based on the square shapes I use in this example here, or did you consider other shapes as well? I recommend you look through the Vossepoel and Smeulders paper to see how they did their optimization, and replicate that.

– Did you optimize only this new value, keeping the others constant? You should consider optimizing all 4 constants in your equation simultaneously.

One thing that just occurred to me: isn’t count(noncorner) equal to total − count(corner)? If so, this count cannot add any new information. I also notice that the new constant has zeros for its first three decimal places, which is the precision to which the other constants have been rounded. You might simply have recovered some of the precision lost by rounding the other constants?

> `perimeter = count(even)*0.980 + count(odd)*1.406 - count(corner)*0.091 + count(noncorner)*0.000132`
>
> where noncorner is the number of times non-corner points occur. I have a graph which shows that the absolute error decreases with this modified equation. Is there any way I can attach the graph here?
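The modified estimator above can be sketched in a few lines. This is an illustrative Python version, not anyone’s published code: the first three constants are the Vossepoel–Smeulders values and the fourth is the one proposed in the question; the definition of a “corner” as a position where consecutive chain codes differ is an assumption made here for the sake of the example.

```python
# Illustrative sketch of the modified perimeter estimator, assuming an
# 8-connected Freeman chain code (values 0-7) describing a closed contour.
# 0.980, 1.406, 0.091 are the Vossepoel-Smeulders constants; 0.000132 is
# the additional constant proposed in the question above.

def perimeter_estimate(chain):
    """Estimate perimeter from a closed 8-connectivity chain code."""
    even = sum(1 for c in chain if c % 2 == 0)   # axis-aligned steps
    odd = len(chain) - even                      # diagonal steps
    # Assumed corner definition: consecutive codes differ (wrapping around,
    # since the chain is closed).
    corner = sum(1 for a, b in zip(chain, chain[1:] + chain[:1]) if a != b)
    noncorner = len(chain) - corner
    return even * 0.980 + odd * 1.406 - corner * 0.091 + noncorner * 0.000132
```

Note that `noncorner` is computed as `len(chain) - corner` here, which is exactly the redundancy concern raised above: if the total count is fixed, the fourth term carries no independent information.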

You can right-click on the top two images of this post and select “Download”. They’re downsampled from the originals, but this is how I used them in this post. In the script linked from the post, they’re the `'bigsur1.jpg'` and `'bigsur2.jpg'` files.

As far as I can tell, most of the code would work on grey-value images. This line here:

`d = 255 - max(abs(a-b));`

would need to be changed to

`d = 255 - abs(a-b);`

Please run the code line by line and see where it goes wrong, then try to fix that line. The part in the loop doesn’t concern color images, so that part is OK for sure.
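To make the color-vs-grayscale difference concrete, here is an illustrative NumPy translation of that one line (the original is MATLAB; the function names and the 0–255 value range are assumptions for this sketch). For RGB images the per-pixel difference is reduced over the channel axis with a max; for grey-value images there is no channel axis, so the max is simply dropped:

```python
import numpy as np

# Illustrative NumPy sketch (not the original MATLAB) of the similarity
# measure, assuming a and b contain values in the range 0-255.

def similarity_color(a, b):
    # RGB case, equivalent to d = 255 - max(abs(a-b)); in MATLAB, with the
    # max taken over the color channels (last axis here).
    return 255 - np.max(np.abs(a.astype(int) - b.astype(int)), axis=-1)

def similarity_gray(a, b):
    # Grey-value case: no channel axis, so the max is dropped.
    return 255 - np.abs(a.astype(int) - b.astype(int))
```

Casting to `int` before subtracting avoids the wrap-around you would get when subtracting unsigned 8-bit values.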

> How should I adapt this code to work on grayscale images?

That’s an interesting thought, I hadn’t considered that before.

If you have a sequence of one repeated code, you have a straight line along one of the axes or at 45 degrees from an axis (e.g. 0,0,0,0,0 or 1,1,1,1,1). Taking these into account could improve quantification for shapes with straight edges aligned with the sampling grid (such as rectangles, diamonds and octagons). But for lines at any other orientation you would get more complicated sequences, which are difficult to identify.

For generic shapes with long straight edges at arbitrary angles, an approach could be to record the outline as a polygon (i.e. keep the pixel’s coordinates rather than encoding as a chain code), and simplify this polygon using the Douglas–Peucker algorithm. The allowed error should then be half a pixel (so that sequences of chain codes like 0,1,0,1,0,1,0,1 or 3,4,3,4,3,4,3,4 become a single polygon edge). It is not clear to me that this would result in a correct object outline, but it *might* produce a more precise result. You’d have to test to make sure this is actually the case. Let us know if you do!
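As a concrete illustration of that idea, here is a minimal Douglas–Peucker sketch (not code from the post) applied to a pixel-outline polyline with the half-pixel tolerance suggested above. With that tolerance, a staircase of alternating chain codes like 0,1,0,1 collapses into a single polygon edge:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tol=0.5):
    """Recursively simplify a polyline, keeping vertices farther than tol
    from the line through the current segment's endpoints."""
    if len(points) < 3:
        return points
    d, idx = max((point_line_distance(p, points[0], points[-1]), i)
                 for i, p in enumerate(points[1:-1], start=1))
    if d <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right
```

For example, the chain code 0,1,0,1 starting at the origin visits (0,0), (1,0), (2,1), (3,1), (4,2); every intermediate vertex lies within half a pixel of the line from (0,0) to (4,2), so the whole staircase simplifies to that single edge. Whether the simplified polygon gives a better perimeter estimate would still need to be tested, as noted above.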

> Thanks for the information provided. I have one question: can I modify the equation provided by Vossepoel and Smeulders so that it also considers the number of times the same chain-code number occurs in a row?

No, I had not considered it before. If someone wants to contribute bindings to Go (or any other important language), I’m more than happy to accept them into the repository… Are you volunteering? 🙂

I did think of Julia some years ago, but the people running that project seem to think everything should be written natively in Julia, they don’t care about bindings. :\

> Have you considered bindings for Go? I’m experimenting a bit with it and I like it. The ease of use, also w.r.t. concurrency & speed, seems to make it well suited for image processing tasks!