1.7 The Histogram Display

Brightness range, sensors, and the human eye

silhouette

Figure A-1: Limited brightness range can lead to artistic images. You can create silhouettes on purpose by exposing for the sky (via AEL and Spot metering) and then recomposing, focusing, and shooting.

Have you ever seen something which looked really cool, only to take a picture of it and have it come out looking darker (much darker) and ‘muddier’ than the way you remembered it? Why wouldn’t the picture look exactly the way you remembered seeing it?

It turns out that the answer to this question is far from easy. But the short answer is: cameras (film or digital) see light differently from the way the human eye and brain do.

To understand this difference, have a look at the picture in Figure A-1. When I took this picture, the scene didn’t look like this to the naked eye. I could see the skateboarders quite plainly, right down to the color of their clothes and the stickers on their skateboards. But film and digital cameras cannot see the same range of light as the human eye can. (This is by design. Long story. I explain it all in my seminar.) In the vast majority of cases you can either capture the sky, or the foreground, but not both, as illustrated in Figure A-2.

So for the skateboarding silhouette above, I chose to expose for the sky, intentionally leaving the subject to be rendered as black.

Figure A-3 gives a good comparison of the range of sensitivity of the human eye, color negative film, and digital cameras. In the figure, a “stop” means “a factor of two” in light intensity. So when it says a digital sensor can sense a brightness range of 8 stops, it means that the brightest part of the picture is no more than 2⁸ = 256 times brighter than the darkest part of the picture. Put another way, if you were using the spot metering feature of the camera to measure the brightest and darkest parts of your scene, and the brightest part reads 1/1,000th of a second, then the darkest part must read no slower than ¼ of a second (8 stops away) for everything to be visible.
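The stop arithmetic above is easy to verify for yourself. Here is a small Python sketch using the numbers from the example (not from any particular camera):

```python
# Each "stop" is a factor of two in light intensity.
stops = 8
ratio = 2 ** stops  # 256: brightest vs. darkest the sensor can record

# If the brightest part of the scene meters at 1/1000th of a second,
# the darkest part that still shows detail can meter at most 8 stops slower:
brightest = 1 / 1000            # seconds
darkest = brightest * ratio     # 0.256 s, roughly 1/4 of a second

print(ratio)    # 256
print(darkest)  # 0.256
```

Doubling the exposure time once per stop is all there is to it: 8 doublings of 1/1000th of a second lands at about ¼ of a second.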

Image

Figure A-2: A real-world example. Unlike the human eye, with digital cameras you can either capture the sky (left) or the subject (right), but not both.

This is a really, really important concept to understand. Your eye can see a greater brightness range than can film or digital. Film or digital can see a greater brightness range than the camera’s LCD. This means when you look at a scene using the LCD, you’re not seeing all the detail that the digital sensor can capture. In reality, you’re seeing about 90% of the light range, and for the vast majority of shots, this is great and useful and wonderful.

Once you understand the important concept of reduced brightness sensitivity range, it becomes easy to understand why Fill Flash is sometimes used to make the subject look good on film (or digital) even though the subject looks perfectly fine to the unaided eye. It also explains why the motion picture industry uses gigantic studio lights in its productions, only to have the scene look perfectly normal when you see it in the theatre. This is because for an image to look normal, the brightest part of your scene must be no more than 8 stops brighter than the darkest part. If it is more than 8 stops, the camera will not be able to capture it all, and some information will be lost – perhaps areas in the darkest part will become deep black, or the lightest part will “blow out” and be so white that you can’t make out any detail.

Brightness Range illustration

Figure A-3: The dynamic range of several types of media.

In the previous silhouetted skateboarder image, the range of light in the scene was indeed greater than 8 stops, and the information in the darker parts (where the skateboarders were) was lost, resulting in the darker parts looking black. (So, sometimes the limited range of a sensor can be used for artistic purposes. But far more often it results in frustration because the camera was not able to capture what you remember seeing.)

In the days of film, such loss of information usually came as a surprise to the photographer when the developed film came back. But with digital cameras, at least, you can get a good idea of whether or not the camera captured the brightest and darkest parts of your scene.

Using the Histogram for a finer degree of control

Image

Figure A-4: A simplified view of how histograms work.

So, all of the above was a prelude to the Histogram function. The histogram display simply shows you where the brightness in your image “falls” within the 8-stop range. It is useful when you are shooting subjects that are predominantly white (like a bride in a wedding dress) or black (like portraits of black cats on black backgrounds), and you need to know if the sensor is capturing the detail that the LCD cannot show you. It's also doubly useful when you're reviewing your images outdoors on a bright day and your LCD screen is getting washed out by the sun. Being able to see graphically what you captured can be a stress-reducer out in the field! The histogram shows you the range of brightness values in your image, arranged in order, with the most frequently-occurring brightnesses forming the tallest peaks.

Figure A-4 shows an illustration of how histograms relate to the scene being captured. Let’s say that the collection of black, white, and grey boxes in the upper-left-hand corner represent the pixels of your (very low-resolution) digital camera. The histogram simply re-arranges the pixels in order of ascending brightness; the brightest to the right and the darkest to the left. Pixels with the same brightness value get “stacked” on top of each other. The resulting graph shows the brightness distribution of the image; where the brightest parts and darkest parts fall within the camera’s sensitivity range.
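The “stacking” described above is just counting pixels at each brightness level. This toy Python sketch builds a text histogram from a handful of made-up pixel values (0 = black, 255 = white; the values are mine, chosen for illustration). A real camera does the same thing over millions of pixels:

```python
from collections import Counter

# Made-up pixel brightnesses from a "very low-resolution" sensor.
pixels = [0, 0, 128, 128, 128, 255, 64, 128, 255]

histogram = Counter(pixels)           # brightness -> number of pixels
for brightness in sorted(histogram):  # darkest (left) to brightest (right)
    print(f"{brightness:3d}: {'#' * histogram[brightness]}")
```

Each `#` is one “stacked” pixel; running this prints four rows, with the tallest stack (`####`) at mid-grey brightness 128 – exactly the kind of spike you see in a real histogram.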

Okay, so how do you use this information? Remember that the right edge of the histogram represents the brightest value the sensor can capture, and the left edge represents the darkest. It is important that the tallest parts of the graph (representing the dominant shades in your image) are not clumped up against the left or right edge; if they are, it means that the brightness levels of those pixels are exceeding the sensor’s brightness range. It’s also important to remember that there is no such thing as a standard-looking histogram for all pictures – you use the histogram to make sure that the brightnesses in the image fall where you want them to fall for the kind of image YOU intend to create.

Image

Figure A-5: One of the selectable playback displays shows a histogram of your image.

You can view an image’s histogram while it is still in the camera. While in Playback mode, hit the DISP (up-arrow) Button multiple times until you get the histogram playback screens (Figure A-5).

Let’s start with some simple examples:

TIP: There is a quick and easy way to tell if your image contains any blown-out highlights or too-dark shadows. When you playback the image in histogram mode, the parts of the exposure that are “off the scale” will blink. A VERY useful feature!!
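The blinking display is essentially flagging pixels pinned at the limits of the range. Here is a hedged sketch of the idea in Python – the function name and sample values are my own for illustration, not the camera’s firmware, and 0 and 255 stand in for the sensor’s floor and ceiling:

```python
def clipped_fraction(pixels, floor=0, ceiling=255):
    """Fraction of pixels sitting at the very bottom or top of the
    range -- the ones a blinking-highlights display would flash."""
    clipped = sum(1 for p in pixels if p <= floor or p >= ceiling)
    return clipped / len(pixels)

# Mostly mid-tones, plus two blown-out whites:
frame = [120, 130, 140, 255, 255, 90, 100]
print(clipped_fraction(frame))  # 2 of 7 pixels, about 0.29
```

A value near zero means your exposure is safely within range; a large value means a big chunk of the frame is “off the scale” at one end or the other.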

 

Image

Here’s a picture of a Cuban boy against a dark-ish background. Since there is more dark than bright in the picture, this is reflected in the histogram, which shows more dark pixels than light ones. Notice that the blacks are not SO black that they bump up against the left edge – this is perfect for this shot. Black, but still showing detail.

Image

Here’s a truly average scene, with brightnesses spread out pretty evenly across the horizontal axis. The black spike you see near the left-hand edge represents the black in the roadsign. As you can see, the tall spike means there are more black pixels than pixels of any other single shade. (There are many different shades of blue, which is why there’s no large spike in the center.) Here it is OK if the blacks fall outside the range, for we don’t need to see detail in the black part of the sign.

Image

Here’s a picture of a grey piece of paper.

Here the histogram looks exactly as you expect it would – all grey pixels stacked up high, with no lighter or darker pixels anywhere (i.e., nothing to the left or the right of the spike). (Well, nothing significant…)

Image

Here is an image comprised entirely of black, white, and grey. Here, we expect to see 3 spikes: Black on the left, white on the right, and grey in the middle. (And we do!)

Image

Oh no! I just took this picture, and the camera’s LCD screen makes the white building look washed out and overexposed! Is it?? Let me check the histogram…. WHEW! According to the graph on the right, the vast majority of the white in the image is within range. Only a tiny white spike on the rightmost edge – corresponding to the whitest part of the clouds above – is “blown out”, which for this picture is acceptable. (How do I know it’s the clouds and not the building? Because in Histogram Playback view, the “blown out” part of the clouds blinks.)

Image

Another real-world situation. I had just taken the picture of this bird, and I couldn’t tell by looking at the LCD screen whether the whites were blown out or not. A quick histogram check indicated that they were indeed blown out! (See circled area – plus, the blown-out parts of the image were blinking in Histogram view.)

Image

I immediately set the camera to underexpose by ½ stop and shot again. All of the histogram shifted to the left (thus the entire picture got darker), and now the blown-out portion is safely captured within the camera’s available dynamic range. Hooray!
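What -½ stop of exposure compensation does to the histogram can be sketched numerically: every pixel receives 2^(-½) ≈ 0.71 times as much light, so all the values slide toward the dark (left) end. This Python toy just shows the direction and size of the shift (in reality a clipped pixel may have been far brighter than the sensor’s ceiling, so where it actually lands depends on how badly it was blown out):

```python
factor = 2 ** -0.5  # -1/2 stop: about 0.71x the light

# A pixel that was pinned at the sensor's ceiling (255)...
blown_out = 255
after = round(blown_out * factor)  # ...lands back inside the range
print(after)  # 180
```

The same multiplication applies to every pixel, which is why the entire histogram – not just the blown-out spike – shifts to the left.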

Luckily the bird was still there when I took the 2nd shot. It’s situations like these for which Auto Exposure Bracketing was invented – take several at different exposures NOW; I’ll choose the best one on the computer later.

It’s hard to see the difference in these tiny thumbnails, but if I were to make an enlargement of this picture, the lack of detail in the bird’s feathers would definitely be noticeable!

Image

Remember, there is no such thing as an average-looking or “correct” histogram shape – each will be different and depends entirely on the kind of image you were looking to create.

For this picture, it was perfectly OK to have some blacks be so dark that there’s no detail, as long as the highlights on the face were captured properly. As you can see in the histogram, the face details were captured just fine and there’s no “blow out” of the highlights.

1.8 The “Secrets” of Light and Composition

Okay, you now know the basics of how to use your camera. But having a sophisticated, capable camera is only part of the formula for better pictures. Behold! The remaining secrets to great photography are herein revealed!!

Let’s start with the pie chart in Figure A-6 below, which shows the relative importance of all the different variables that comprise a really good photograph.

Image

Figure A-6: The elements of a good photograph. (Notice that “How expensive your camera is” is not a variable.) Point-and-shoots can take great pictures, too, using the techniques outlined in this appendix.

Notice that the two biggest variables, by far, are ‘composition’ and ‘quality of light’. Not how many megapixels your camera has, or how expensive your lens is. As I will explain below, armed with these techniques, the pictures you take can make people say, “Wow!”