On color and sharpness perception

I was recently pointed at a blog post entitled “why we’re blind to the color blue”, which does an interesting experiment: it takes an image, blurs only the green channel, and shows that there is a strong change in perceived sharpness. It then blurs only the blue channel, and shows that there is little change in perceived sharpness. The conclusion is that we perceive sharpness only for green and red light, not blue light. Here I want to point out the problems with the experiment and with the explanation, and shed some light on what is actually going on.

The blog post starts off by pointing out that a lens doesn’t focus all colors of light at the same point. When forming an image with a simple lens, you can get one color in focus; the other colors will be out of focus to varying degrees. This is called chromatic aberration, and we use complex compound lenses in cameras to correct for it. The lenses in our eyes are quite simple, and suffer strongly from chromatic aberration.
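
To see where this comes from, recall the lensmaker’s equation for a thin lens (a standard optics result, not from the original post):

1/f = (n(λ) − 1) (1/R₁ − 1/R₂)

The refractive index n of the lens material depends on the wavelength λ (a property called dispersion), so the focal length f differs per color. For typical materials, n is larger for blue light than for red, so blue light comes to a focus closer to the lens.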

The blog post next shows the spectral sensitivity of the three types of cones in our retina (blue, green and red), in which we see that green and red cones have a very similar response (their peaks of sensitivity are close together), whereas blue cones have a markedly dissimilar response. The reason behind this, and how the corresponding photosensitive proteins evolved, makes for a very interesting story; see the box below. The author of the blog post then asserts that

[…] it appears that the best position for the focal plane is between the red and green focal points. This is exactly what happens in the eye.

The experiment, which aims to prove this assertion, uses an image that has very little color information. It is a picture of the Earth, where the blue oceans are very dark, and most of the landmasses are covered in clouds. In fact, most of the contrast is formed by the white clouds against the dark oceans.

Blurring only the green channel, or only the blue channel, affects both the perception of how light a pixel is (lightness) and the perception of what color a pixel has (chromaticity). However, the green channel contributes approximately ten times as much to lightness as the blue channel does. Thus, blurring the green channel has about ten times as strong an effect on the lightness as blurring the blue channel. As I’ll show below, it is the lightness that gives us a sense of sharpness. This is the reason that the green channel affects perceived sharpness more than the blue channel.
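
As a rough sanity check on that factor of ten: the Rec. 709 standard (used for sRGB displays; this is general knowledge, not from the blog post) computes relative luminance from linear RGB as Y = 0.2126 R + 0.7152 G + 0.0722 B. In MATLAB:

w = [0.2126, 0.7152, 0.0722];   % red, green and blue contributions to luminance
w(2) / w(3)                     % green vs. blue: approximately 9.9

The exact ratio depends on the RGB primaries and the observer model, but green outweighing blue by roughly an order of magnitude is robust.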

Light-sensitive proteins in our cones

Eyes have evolved multiple times independently. But the light-sensitive protein (opsin) appeared only once. It has mutated and evolved into many variations, sensitive to different frequency bands. The mantis shrimp has 12 different types. Early mammals had 4, but because there was a time in our history when mammals were burrowing night creatures with no need for color vision, they lost two, leaving most modern mammals with only 2 (blue and green). Primates eventually evolved a third photoreceptor, red, which mutated from the green one. This is the reason our red and green cones have such similar frequency responses.

Both the red and green light-sensitive proteins are encoded on the X chromosome. Missing one of these causes protanopia or deuteranopia (red-green color blindness), which is much more common among men than women, because women have two X chromosomes and are therefore much less likely to be missing the gene on both. However, more common forms of red-green color blindness are protanomaly and deuteranomaly, where the red light-sensitive protein has mutated to become more similar to the green one (protanomaly), or vice versa (deuteranomaly). This makes the distinction between red and green much smaller, but still perceptible. [Note that I’m simplifying things a lot here; there are lots of genes involved in color vision, see Wikipedia for a more detailed overview.]

Because women have two X chromosomes, it is possible for one chromosome to encode such a mutated protein, and the other to encode a normal one. This results in the woman having one additional type of light-sensitive protein, for a total of 4. Such a person is called a tetrachromat, and can, in theory, perceive more colors than the rest of us.

So let’s do an experiment with this colorful image:

Input image

We will use a color space called CIELAB (or L*a*b*). This color space is designed to encode how we perceive color. The CIE (Commission internationale de l’éclairage, or International Commission on Illumination) has carried out extensive studies of visual perception, and has performed detailed measurements of the “average observer’s” response to color. Using these measurements, they defined the CIELAB color space as a way to encode color that approximates perceptual uniformity: the distance between two nearby colors in this color space indicates how large the perceptual difference between those colors is. The interesting thing about this color space is that it separates lightness (the L* channel) from chromaticity (the a* and b* channels; a* is the red-green axis and b* is the blue-yellow axis). Thus, we can blur the lightness channel and the two chromaticity channels separately, and see the effect of their blurriness on our perception.
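
To make “perceptually uniform” concrete: the classic CIE76 color difference, ΔE*ab, is simply the Euclidean distance between two colors in CIELAB, and a ΔE*ab of around 2 is often quoted as a just-noticeable difference. A minimal MATLAB sketch (the two sample colors are arbitrary values chosen for illustration):

lab1 = [52.0, 18.5, -8.0];   % a color in CIELAB coordinates (L*, a*, b*)
lab2 = [53.0, 17.0, -7.5];   % a nearby color
deltaE = norm(lab1 - lab2)   % CIE76 difference: approximately 1.9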

We will use MATLAB with DIPimage for this experiment. First we load the image and convert it to the CIELAB color space:

a = readim('https://www.crisluengo.net/images/color_perception_input.png');
a = colorspace(a,'lab')   % no semicolon: DIPimage displays the result

Next we blur only the lightness channel to obtain a blurry image:

b = a;
b{1} = gaussf(b{1},2)   % Gaussian blur, sigma = 2 pixels, of the L* channel only

Image with lightness channel blurred

Note how the edge between the hat and the gray background, on the left side of the image, doesn’t seem as blurry. The lightness of the hat and that of the background are very similar here, so the blurring has little visible effect.

Next, we blur the two chromaticity channels:

c = a;
c{2} = gaussf(c{2},2);   % blur the a* (red-green) channel
c{3} = gaussf(c{3},2)    % blur the b* (blue-yellow) channel

Image with both chromaticity channels blurred

What? Did we do anything to the image? The most obvious blur happens along the edge of the hat, at the same place where we didn’t see a lot of blurriness when we blurred the lightness channel.

Let’s filter more strongly:

c{2} = gaussf(a{2},6);   % blur the original, unblurred channels, with sigma = 6
c{3} = gaussf(a{3},6)

Image with both chromaticity channels blurred very strongly

Now we see a stronger effect of color mixing across edges. But the image still looks perfectly sharp! It really is the lightness that tells us whether something is sharp or blurry. Here are the blurred a* and b* channels from this last image:

very strongly blurred a* channel

very strongly blurred b* channel

Yes, they really are blurred.

In fact, the JPEG encoding standard makes use of this property of our vision to reduce the amount of data needed to store a picture. It transforms the image to the YCbCr color space (a color space that, like CIELAB, separates lightness from chromaticity, but is not perceptually uniform; converting between RGB and YCbCr is much cheaper than converting between RGB and CIELAB), and then subsamples the two chroma channels, Cb and Cr. The resulting image is still perceptually very similar to the original, but if you look closely you can notice the color bleeding. I used the PNG format for the images above, instead of JPEG, to avoid confusion about the results of the experiment.
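
For the curious, here is a minimal sketch of such chroma subsampling in plain MATLAB with the Image Processing Toolbox (not DIPimage); the factor-of-2 subsampling mimics the common 4:2:0 scheme, and peppers.png is just a convenient test image:

rgb = im2double(imread('peppers.png'));   % any RGB test image
ycc = rgb2ycbcr(rgb);                     % separate luma (Y) from chroma (Cb, Cr)
for k = 2:3                               % subsample both chroma channels
    small = imresize(ycc(:,:,k), 0.5, 'bilinear');
    ycc(:,:,k) = imresize(small, [size(ycc,1), size(ycc,2)], 'bilinear');
end
out = ycbcr2rgb(ycc);                     % reassemble the image

Even though this discards three quarters of the chroma samples, the result is hard to distinguish from the original, which is exactly the property JPEG exploits.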
