One simple way to draw that line, starting from the mask image `w` generated in the linked code, is to take the difference between the dilation and the erosion (leaving a thin line at the boundary), then use that as a mask to set those pixels to red:

`t = bdilation(w,1,1) - berosion(w,1,1); out(t) = {255,0,0};`
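For readers working in Python rather than DIPimage, the same trick can be sketched with NumPy and SciPy (these are the SciPy equivalents, not the functions from the post; `draw_seamline` is a hypothetical helper name):

```python
import numpy as np
from scipy import ndimage

def draw_seamline(rgb, mask, color=(255, 0, 0)):
    """Overlay the boundary of a binary mask onto an RGB image.

    The boundary is the set difference between the dilated and the
    eroded mask: a thin line straddling the mask's edge.
    """
    boundary = ndimage.binary_dilation(mask) & ~ndimage.binary_erosion(mask)
    out = rgb.copy()
    out[boundary] = color
    return out

# Tiny demo: a 5x5 black image with a 3x3 square mask.
img = np.zeros((5, 5, 3), dtype=np.uint8)
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
res = draw_seamline(img, mask)
```

Note that SciPy's default structuring element is a cross (4-connected), so the line's exact thickness depends on which connectivity you pick, just as with the third argument to `bdilation`/`berosion`.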

How can I obtain the image with the seamline, as shown in `bigsur_new_overlay.jpg`, with the help of the code?

Thanks in advance

I optimized only this value, keeping the others constant.

My idea is that a larger number of non-corner points leads to straighter lines, and this results in error because real-world objects may be irregular in shape, deviating from squares, rectangles, etc.

Feel free to upload your graph to one of the free image hosting services (e.g. imgur.com) and link it here. Two things:

– Did you optimize this value based on the square shapes I use in this example here, or did you consider other shapes as well? I recommend you look through the Vossepoel and Smeulders paper to see how they did their optimization, and replicate that.

– Did you optimize only this new value, keeping the others constant? You should consider optimizing all 4 constants in your equation simultaneously.

One thing that just occurred to me is that count(noncorner) could be equal to total − count(corner), is that right? If so, this count cannot add any new information. I also notice that the new constant has zeros for the first three decimal places, which is the precision to which the other constants have been rounded. You might simply have recovered some of the precision lost when the other constants were rounded.

`perimeter = count(even)*0.980 + count(odd)*1.406 - count(corner)*0.091 + count(noncorner)*0.000132`

where noncorner is the number of times the non-corner points occur. I have a graph which shows that the absolute error decreases with this modified equation. Is there any way I can attach the graph here?
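As a concrete sketch, the modified formula can be computed from a Freeman chain code like this (Python for illustration; the definition of a "corner" as a position where the code differs from its predecessor is my assumption, following the Vossepoel & Smeulders convention — note that under this definition count(noncorner) is exactly the total minus count(corner), which is the redundancy questioned above):

```python
def perimeter_estimate(chain):
    """Perimeter estimate from an 8-connected Freeman chain code.

    Implements the commenter's modified formula. Assumption: a
    'corner' is a position where the code differs from the previous
    code (treating the chain as circular); 'noncorner' is the rest.
    """
    even = sum(1 for c in chain if c % 2 == 0)
    odd = len(chain) - even
    corner = sum(1 for a, b in zip(chain, chain[1:] + chain[:1]) if a != b)
    noncorner = len(chain) - corner
    return even * 0.980 + odd * 1.406 - corner * 0.091 + noncorner * 0.000132
```

For a 5x5 square traced as `[0]*4 + [2]*4 + [4]*4 + [6]*4`, this gives even = 16, corner = 4, noncorner = 12.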

You can right-click on the top two images of this post and select “Download”. They’re downsampled from the originals, but this is how I used them in this post. In the script linked from the post, they’re the `'bigsur1.jpg'` and `'bigsur2.jpg'` files.

As far as I can tell, most of the code would work on grey-value images. This line here:

`d = 255 - max(abs(a-b));`

would need to be changed to

`d = 255 - abs(a-b);`
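In NumPy terms (an illustrative translation, not the DIPimage code from the post; `difference_image` is a hypothetical helper name), the color and grayscale cases can be unified by reducing over the channel axis only when it exists:

```python
import numpy as np

def difference_image(a, b):
    """Inverted absolute difference between two images.

    For RGB images the per-pixel difference is the maximum over the
    three channels (last axis); grayscale images have no channel
    axis, so the absolute difference is used directly.
    """
    a = a.astype(np.int16)  # avoid uint8 wraparound in the subtraction
    b = b.astype(np.int16)
    diff = np.abs(a - b)
    if diff.ndim == 3:      # color image: reduce over the channel axis
        diff = diff.max(axis=-1)
    return (255 - diff).astype(np.uint8)
```

The cast to a signed type matters: subtracting `uint8` arrays directly would wrap around instead of going negative.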

Please run the code line by line and see where it goes wrong, then try to fix that line. The part in the loop doesn’t concern color images, so that part is OK for sure.

How should I make this code work on grayscale images?

That’s an interesting thought, I hadn’t considered that before.

If you have a sequence of one repeated code, you have a straight line along one of the axes or at 45 degrees from an axis (e.g. 0,0,0,0,0 or 1,1,1,1,1). Taking these into account could improve quantification for shapes with straight edges that are aligned with the sampling grid (such as rectangles, diamonds and octagons). But for lines at any other orientation you would get more complicated sequences, which are difficult to identify.

For generic shapes with long straight edges at arbitrary angles, an approach could be to record the outline as a polygon (i.e. keep the pixel’s coordinates rather than encoding as a chain code), and simplify this polygon using the Douglas–Peucker algorithm. The allowed error should then be half a pixel (so that sequences of chain codes like 0,1,0,1,0,1,0,1 or 3,4,3,4,3,4,3,4 become a single polygon edge). It is not clear to me that this would result in a correct object outline, but it *might* produce a more precise result. You’d have to test to make sure this is actually the case. Let us know if you do!
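As a concrete sketch of that last idea, here is a textbook Douglas–Peucker implementation (Python; not code from the post) with the half-pixel tolerance suggested above, which merges staircase patterns such as 0,1,0,1,… into a single polygon edge:

```python
import math

def douglas_peucker(points, tol=0.5):
    """Simplify a polyline with the Douglas-Peucker algorithm.

    points: list of (x, y) tuples. tol: maximum allowed perpendicular
    deviation from the simplified edge; tol=0.5 corresponds to the
    half-pixel error suggested for chain-code outlines.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point farthest from the chord between the ends.
    dmax, imax = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax <= tol:                 # everything within tolerance: one edge
        return [points[0], points[-1]]
    left = douglas_peucker(points[:imax + 1], tol)
    right = douglas_peucker(points[imax:], tol)
    return left[:-1] + right        # drop the duplicated split point
```

A 0,1-staircase collapses to its two endpoints, while a genuine right-angle corner survives, which is exactly the behavior needed for straight edges at arbitrary angles.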