Tuesday, February 3, 2009
D-SLR State Of The Art, Part II
What to know about the latest digital sensors, ISO and image quality
The CCD and CMOS image sensors used in D-SLRs have an innate sensitivity to light, generally in the ISO 100-200 range. Higher speeds are produced by increasing the gain—amplifying the signal produced by the sensor. When you set your D-SLR to a higher ISO, the sensor’s sensitivity doesn’t change; the camera just amplifies its signal. Unfortunately, the amplification process also increases image noise, much as push-processing film makes images “grainier.”
When you set a speed lower than the sensor’s native speed, the camera can’t actually make the sensor less sensitive; it typically overexposes the sensor and then pulls the data back down digitally, which costs highlight headroom and reduces dynamic range in the process. So for best image quality, use the sensor’s native ISO whenever possible. How do you find it? It’s the lowest setting in the camera’s “normal” ISO range. For example, the Nikon D300 provides a normal ISO range of 200-3200, plus expanded settings down to 100 and up to 6400; best image quality will occur at ISO 200.
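The "amplification, not sensitivity" point can be sketched numerically. The toy model below (all numbers are illustrative, not any real camera's specs) simulates one pixel: photon arrival brings its own shot noise, the amplifier then multiplies signal and shot noise alike, and a little read noise is added afterward. Cutting exposure by two stops and compensating with 4x gain, the "high ISO" shot ends up with roughly half the signal-to-noise ratio.

```python
import random
import statistics

def snr_for(photon_rate, gain, read_noise_dn=2.0, trials=10_000):
    """Toy single-pixel model: shot noise, then gain, then read noise.

    Illustrative sketch only; real sensor signal chains are more complex.
    """
    samples = []
    for _ in range(trials):
        # Photon arrival is Poisson-distributed; for large counts we can
        # approximate it as Gaussian with variance equal to the mean.
        photons = random.gauss(photon_rate, photon_rate ** 0.5)
        # Gain scales signal AND shot noise; read noise is added after.
        samples.append(photons * gain + random.gauss(0, read_noise_dn))
    return statistics.fmean(samples) / statistics.stdev(samples)

random.seed(0)
# Same output brightness: full exposure at base gain, versus a quarter
# of the light with 4x gain ("two stops higher ISO").
snr_base = snr_for(photon_rate=4000, gain=1.0)
snr_high = snr_for(photon_rate=1000, gain=4.0)
print(f"base ISO SNR ~ {snr_base:.1f}, high ISO SNR ~ {snr_high:.1f}")
```

The higher-ISO image is noisier not because the amplifier adds much noise of its own here, but because fewer photons were collected and the shot noise they carry is amplified along with the signal.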
From Photon To Photo
CCD and CMOS sensors work somewhat differently, but essentially each consists of a grid of light-sensitive picture elements, or pixels. When you make an exposure, light strikes these pixels, and each pixel collects photons (light particles) in proportion to the amount of light that strikes it. Pixels representing bright areas of the scene collect lots of photons; pixels representing dark areas collect fewer photons.
The sensor then transforms the collected photons into electrical charges and converts those into voltages that can be read by the camera’s A/D converter, which translates the analog signal into digital data ready for processing. If the now-digital image is to become a JPEG, the data goes to the camera’s image processor, where it’s demosaiced (converted to color), sharpened and otherwise processed, then compressed into an 8-bit JPEG file. If the image is RAW, the digital data goes straight to the memory card, and you process it yourself in your computer, using your camera manufacturer’s RAW-conversion software or a third-party solution such as Adobe Camera Raw or DxO Optics Pro.
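The photon-to-digital chain above can be written out as a few lines of arithmetic. This is a deliberately simplified sketch; the quantum efficiency, full-well capacity and 12-bit converter depth are assumed round numbers, not any real sensor's specifications.

```python
def adc(voltage, full_scale=1.0, bits=12):
    """Quantize an analog voltage into a digital number (DN)."""
    code = round(voltage / full_scale * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))  # clip to the ADC's range

def pixel_to_dn(photons, quantum_efficiency=0.5,
                full_well_electrons=40_000, gain=1.0):
    """Toy pixel chain: photons -> electrons -> voltage -> digital number.

    All constants are illustrative; real sensors differ.
    """
    electrons = photons * quantum_efficiency        # photoelectric conversion
    voltage = electrons / full_well_electrons * gain  # charge read out as voltage
    return adc(voltage)

print(pixel_to_dn(20_000))   # a mid-tone pixel
print(pixel_to_dn(100_000))  # overexposed: clips at the ADC's maximum
```

Note that a pixel that collects more photons than the chain can represent simply clips at the converter's maximum code, which is why blown highlights in a digital file are unrecoverable.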
At several points along this path, noise may be introduced. The electronic activity in sensors produces some noise (nonimage artifacts), amplifying the signal at various points adds more, and still more noise is produced as the signal travels along the pipeline. High temperatures also increase noise.
Camera manufacturers have developed a number of ways to reduce noise.
1. Newer image processors use better noise-reduction algorithms.
2. New amplifier designs produce less noise. Nikon’s D3 uses three stages of amplification circuitry to minimize accumulated noise.
3. Decreasing the distance that signals have to travel before A/D conversion also reduces noise; with higher-end Nikon and Sony D-SLRs, and every Canon EOS except the original EOS-1D, the A/D converters are on the image sensor itself.
4. Multichannel parallel processing allows for slower clock speeds in each channel, which reduces noise.
5. Some cameras perform noise reduction both before and after A/D conversion, i.e., on both the analog and digital signals.
There are two types of image noise: chrominance noise (color blotches) and luminance noise (more like film grain). Some manufacturers provide stronger correction for chrominance noise, others for luminance noise. I can stand a little film-like grain, but can’t abide color blotches, so I favor stronger correction for chrominance noise. Some RAW-processing software and noise-reduction software let you adjust luminance and chrominance noise independently.
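Treating the two noise types independently works because an image can be split into a luminance channel and chrominance channels, and smoothed only in chrominance. The sketch below uses Rec. 601-style luma weights and a simple patch average as the chroma smoother; both are illustrative choices, not how any particular camera or RAW converter does it.

```python
def rgb_to_ycc(r, g, b):
    """Split RGB into luma plus two chroma differences (Rec. 601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y  # luma, blue-difference, red-difference

def ycc_to_rgb(y, cb, cr):
    """Invert rgb_to_ycc exactly."""
    b = y + cb
    r = y + cr
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def denoise_chroma(pixels, strength=1.0):
    """Pull each pixel's chroma toward the patch average while leaving
    luma (the film-like 'grain') untouched -- a toy chrominance-only
    noise reduction."""
    yccs = [rgb_to_ycc(*p) for p in pixels]
    mean_cb = sum(c[1] for c in yccs) / len(yccs)
    mean_cr = sum(c[2] for c in yccs) / len(yccs)
    out = []
    for y, cb, cr in yccs:
        cb += (mean_cb - cb) * strength
        cr += (mean_cr - cr) * strength
        out.append(ycc_to_rgb(y, cb, cr))
    return out

# A small patch with color blotches: after smoothing, brightness
# variation survives but the color differences are averaged away.
patch = [(0.52, 0.40, 0.61), (0.41, 0.50, 0.52), (0.60, 0.49, 0.38)]
smoothed = denoise_chroma(patch)
```

At full strength the patch comes back with every pixel's original brightness intact but a single shared color, which is why chroma-only smoothing can remove blotches without making the image look waxy.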