More than just megapixels—what you need to know about your digital camera’s core component
Image sensors in popular D-SLRs range in size from 36x24mm (the same as a 35mm film frame, hence the full-frame designation) down to 17.3x13mm. Sensor size is important because it affects the angle of view of any given lens, depth of field, the size of the individual pixels for a given pixel count, and cost.
Because it matches the 35mm film frame exactly, a 36x24mm full-frame sensor "sees" just what a 35mm film frame sees, so any lens used on a D-SLR with a full-frame sensor frames just as it does on a 35mm SLR. A smaller sensor sees less of the image produced by any lens and so frames like a lens of a longer focal length on a 35mm SLR. In the early days of D-SLRs, this meant that only users of full-frame cameras could do wide-angle photography: on smaller-sensor cameras, 35mm-camera wide-angle lenses were no longer wide-angle.
Size relationships between full-frame (35mm), APS and Four Thirds System sensors. Imagine that the gray field represents the image in your viewfinder to more easily visualize the effect of sensor size on effective focal length, often referred to as the magnification or crop factor. Camera makers mitigate this issue by designing wide-angle lenses specifically for smaller sensors.
On the other hand, the focal-length factor of smaller sensors works in favor of wildlife and sports photographers who need long focal lengths because it instantly increases the effective focal length of any lens. This also means you can use smaller, lighter and less costly lenses to do a given job. For example, the Olympus Zuiko Digital 300mm ƒ/2.8 telephoto for Four Thirds System D-SLRs (which has a focal-length factor of 2.0x) frames like the pro-favorite 600mm ƒ/4 super-tele on a 35mm (or full-frame digital) SLR, but is much smaller, costs a lot less and is a stop faster.
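The arithmetic behind the focal-length factor is simple enough to sketch in a few lines of Python. The function names here are our own, for illustration; the angle-of-view formula is the standard one for a rectilinear lens:

```python
import math

def equivalent_focal_length(focal_mm, crop_factor):
    """Focal length that would give the same framing on a full-frame (35mm) camera."""
    return focal_mm * crop_factor

def horizontal_angle_of_view(focal_mm, sensor_width_mm):
    """Horizontal angle of view, in degrees, for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Four Thirds System: 17.3mm-wide sensor, 2.0x focal-length factor
print(equivalent_focal_length(300, 2.0))  # 600.0 -- frames like a 600mm on full frame
print(horizontal_angle_of_view(300, 17.3))  # roughly 3.3 degrees
```

Note that published crop factors are based on the sensor diagonal, so the horizontal angles of a 300mm lens on Four Thirds and a 600mm on full frame match only approximately, since the two formats have different aspect ratios.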
The light-sensitive photosites on conventional CCD and CMOS image sensors don't detect color; they detect only brightness (i.e., how many photons reach them). To provide color information, typical sensors cover the pixels with an array of red, green and blue filters arranged in a Bayer pattern (named for the Kodak scientist who developed it). Each pixel is covered by a red, green or blue filter, so that it receives only red, green or blue light. Through a process called demosaicking, the camera interpolates the other two colors for each pixel, using data from its neighbors and complex proprietary algorithms.
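As a rough illustration of what demosaicking does (real cameras use far more sophisticated, proprietary algorithms), here is a minimal bilinear sketch in Python. It assumes an RGGB Bayer layout, and all names are ours, not any manufacturer's:

```python
def bayer_color(row, col):
    """Which filter covers the photosite at (row, col) in an RGGB mosaic."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def interpolate(mosaic, row, col, color):
    """Estimate a missing color at (row, col) by averaging same-color neighbors."""
    h, w = len(mosaic), len(mosaic[0])
    samples = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < h and 0 <= c < w and bayer_color(r, c) == color:
                samples.append(mosaic[r][c])
    return sum(samples) / len(samples)

def demosaic(mosaic):
    """Full RGB per pixel: the measured value for the pixel's own filter color,
    interpolated values for the other two."""
    rgb = []
    for row in range(len(mosaic)):
        line = []
        for col in range(len(mosaic[0])):
            own = bayer_color(row, col)
            line.append(tuple(
                mosaic[row][col] if c == own else interpolate(mosaic, row, col, c)
                for c in ('R', 'G', 'B')))
        rgb.append(line)
    return rgb
```

Feeding this a uniform gray mosaic returns uniform gray pixels, as it should; the hard part in a real camera is doing better than simple averaging along edges and fine detail.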
Foveon's X3 sensor (currently used in Sigma's SD14 D-SLR and DP1 compact) actually does detect all three primary colors at every pixel site. More on this later.
Like film, image sensors are assigned ISO speeds based on their sensitivity to light. A given sensor has an innate "native" ISO (100 or 200, for most D-SLR sensors). Higher speeds are produced by increasing the gain before sending the data to the A/D converter; this increases image noise, just as higher-speed films produce bigger grain. Lower-than-native ISO speeds are achieved by adjusting the image data after A/D conversion, and in the process, dynamic range is reduced. So for best image quality, shoot at the sensor's native ISO: the slowest one in the normal (non-extended) ISO range for most D-SLRs.
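The relationship between ISO and gain can be sketched with a simplified model, assuming gain scales linearly with ISO above the native setting (the function names are illustrative, and real cameras may combine analog and digital gain differently):

```python
import math

def analog_gain(iso, native_iso=200):
    """Multiplier applied to the sensor signal for a chosen ISO,
    relative to the native ISO, in this simplified model."""
    return iso / native_iso

def stops_above_native(iso, native_iso=200):
    """How many stops of amplification the chosen ISO represents."""
    return math.log2(iso / native_iso)

print(analog_gain(1600))         # 8.0 -- the signal is amplified 8x
print(stops_above_native(1600))  # 3.0 -- three stops above ISO 200
```

Each doubling of ISO doubles the gain, which amplifies noise along with the signal; that is why the grain-like noise grows so visibly at the top of a camera's ISO range.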