Ready for a crash course in color? Photographer Tony Northrup takes an in-depth look at what goes on inside our cameras and how, exactly, our images are made:
First, it’s important to understand how color works. Believe it or not, colors are all in our head.
In reality, we’re constantly bombarded with light coming in at a wide range of wavelengths. We can’t perceive most of them, and even if we could, it would be too much information for our brains to handle.
However, some light information is valuable when it comes to navigating the world around us. Without taking in any light information, we wouldn’t be able to find food, avoid predators, or make sense of our surroundings nearly as well. So, our brains assign colors to the wavelengths of light long enough to penetrate the atmosphere but short enough to bounce off objects.
You may be surprised to learn that the way a camera makes sense of light isn’t all that different from what our brains do.
Light travels through the front of our eye, just as it does through a lens, and triggers the nerves at the back. Then, much like a sensor, the brain makes sense of those nerve signals and translates them into an easy-to-digest mental image.
Inside the human eye are red, green, and blue cones.
As you’d expect, green cones pick up green light, red cones pick up red light, and blue cones pick up blue light. But what happens when you see something that falls in between?
If you’re picking up one light frequency that triggers both blue and green cones, for instance, your brain might be able to deduce that you’re looking at a color that falls in between blue and green, such as cyan.
However, it’s also possible to “trick” the brain into seeing a color. If you’re looking at a blue light frequency and an adjacent (but separate) green light frequency simultaneously, your brain may very well combine the two into one uniform shade of cyan in order to simplify things.
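As a rough sketch of that idea in Python: model each cone type’s sensitivity as a made-up Gaussian curve (the peaks and widths below are illustrative, not real physiological data) and compare the cone responses to a single cyan wavelength versus a blue-plus-green mixture.

```python
import numpy as np

# Hypothetical cone sensitivities modeled as Gaussians: (peak nm, width nm).
# Real cone response curves are more complex; these values are illustrative.
CONES = {"blue": (445.0, 30.0), "green": (540.0, 40.0), "red": (565.0, 45.0)}

def cone_response(wavelengths, intensities):
    """Total response of each cone type to a set of wavelength/intensity pairs."""
    return {
        cone: sum(
            i * np.exp(-((w - peak) ** 2) / (2 * width**2))
            for w, i in zip(wavelengths, intensities)
        )
        for cone, (peak, width) in CONES.items()
    }

# A single "cyan" wavelength at 490 nm...
print(cone_response([490.0], [1.0]))
# ...versus a blue (455 nm) plus green (535 nm) mixture. Both stimuli excite
# the blue and green cones in a similar ratio, so both can read as cyan.
print(cone_response([455.0, 535.0], [0.34, 0.43]))
```

This is exactly the trick that RGB screens rely on: they never emit a true cyan wavelength, just blue and green light in the right proportions.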
Since we’ve not yet figured out how to perfectly record and reproduce every perceivable wavelength, cameras and screens exploit this trick, using combinations of red, green, and blue dots to create color photos.
When you take a picture with a digital camera, your camera captures light on a big grid of light-sensitive spaces called photosites. A color filter over each photosite lets it record only red, green, or blue light. Most digital cameras arrange these filters in the Bayer pattern: a repeating 2×2 tile of one red, one blue, and two green filters.
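In code, the layout is just that 2×2 tile repeated across the sensor. A minimal sketch (the tile can start on a different corner depending on the camera):

```python
# The Bayer color filter array: a repeating 2x2 tile with one red, two green,
# and one blue filter (the "RGGB" arrangement used throughout these sketches).
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def bayer_color(row, col):
    """Which color filter sits over the photosite at (row, col)."""
    return BAYER_TILE[row % 2][col % 2]

# Print the filter layout of a small 4x6 patch of sensor.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(6)))
```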
In the finished photo, each photosite corresponds to one pixel. But since each photosite records only one of the three color components, the other two must be estimated. In a step called demosaicing, algorithms fill in each pixel’s missing color information using the values recorded by neighboring photosites.
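To make that concrete, here’s a toy bilinear demosaicer in Python/NumPy, assuming the RGGB tile sketched above. Real raw converters use far more sophisticated, edge-aware algorithms; this just averages the nearest same-color neighbors.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Toy bilinear demosaicing of an RGGB Bayer mosaic.

    `raw` is a 2-D array holding the single value each photosite recorded.
    Returns an (H, W, 3) RGB image in which the two missing color components
    at every pixel are estimated by averaging nearby photosites of that color.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Mark which photosites recorded which color (RGGB tiling).
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red photosites
    masks[0::2, 1::2, 1] = True   # green photosites on red rows
    masks[1::2, 0::2, 1] = True   # green photosites on blue rows
    masks[1::2, 1::2, 2] = True   # blue photosites
    for c in range(3):
        plane = np.where(masks[..., c], raw, 0.0)   # known values only
        known = masks[..., c].astype(float)
        padded = np.pad(plane, 1)
        padded_known = np.pad(known, 1)
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        # Average the known same-color values in each 3x3 neighborhood.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
                cnt += padded_known[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        rgb[..., c] = np.where(masks[..., c], plane, acc / np.maximum(cnt, 1))
    return rgb

# Example: demosaic a random 6x8 "raw" capture.
raw = np.random.rand(6, 8)
print(demosaic_bilinear(raw).shape)  # (6, 8, 3)
```

Averaging neighbors works, but it blurs fine detail and can cause color fringing along edges, which is why raw converters keep refining their demosaicing algorithms; reprocessing old raw files with newer software can show a visible improvement.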
Put millions of these pixels together, and there’s enough information to form a detailed picture.
The Bayer filter’s inherent flaw is that some color information is always missing from the grid and has to be estimated. A few alternatives are in production, such as Foveon sensors and pixel-shift technology. However, both still have major shortcomings of their own.
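Pixel shift is easy to see in code: by moving the sensor exactly one photosite between four exposures, every pixel position gets sampled through a red, a blue, and two green filters, so nothing has to be estimated. A rough sketch, with an illustrative frame/shift convention (real implementations vary by manufacturer):

```python
import numpy as np

def combine_pixel_shift(frames, offsets):
    """frames: four raw RGGB mosaics; offsets: the (dy, dx) shift of each."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    green_hits = np.zeros((h, w))
    for raw, (dy, dx) in zip(frames, offsets):
        for r in range(h):
            for c in range(w):
                # Which filter covered this pixel position in this exposure?
                color = [["R", "G"], ["G", "B"]][(r + dy) % 2][(c + dx) % 2]
                if color == "R":
                    rgb[r, c, 0] = raw[r, c]
                elif color == "B":
                    rgb[r, c, 2] = raw[r, c]
                else:  # average the two green samples
                    rgb[r, c, 1] += raw[r, c]
                    green_hits[r, c] += 1
    rgb[..., 1] /= np.maximum(green_hits, 1)
    return rgb

# Four one-photosite shifts cover the full 2x2 Bayer tile at every pixel.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [np.random.rand(6, 8) for _ in offsets]
print(combine_pixel_shift(frames, offsets).shape)  # (6, 8, 3)
```

The catch, of course, is that this requires four exposures of a perfectly still scene, which is one of the shortcomings mentioned above.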
Meanwhile, Fuji marches to the beat of its own drum with its X-Trans sensor. Because green photosites pick up the largest range of light, the design has green heavily outnumbering red and blue photosites. On one hand, this produces a crisper image with less digital noise. On the other hand, it means that at times there are more color gaps for the demosaicing algorithms to fill in.
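For reference, here is one common depiction of the X-Trans 6×6 tile (the exact orientation varies between published diagrams); counting its entries shows the green-heavy ratio, 20 green to 8 red and 8 blue, versus 18/9/9 for the same area of a Bayer sensor:

```python
# One common depiction of Fuji's 6x6 X-Trans filter tile.
X_TRANS_TILE = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

counts = {c: sum(row.count(c) for row in X_TRANS_TILE) for c in "RGB"}
print(counts)  # {'R': 8, 'G': 20, 'B': 8}
```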
We’re still far from perfecting digital color reproductions, and there’s plenty to be learned on the subject. Nevertheless, with each passing day, we come a little bit closer to achieving images that match our reality.