Special to Canadian Cinematographer
In early 2011, I attended a seminar about DSLR cameras and techniques in Los Angeles. We got onto the topic of the ALEXA camera, and I asked the instructor of the seminar, the founder of a company that specializes in DSLR news and information, whether or not he knew if the sensor in the ALEXA was Bayer pattern. His response was something along the lines of, “I’m sure it is. Doesn’t Bayer manufacture the best sensor?” A red flag went up, as well as a few eyebrows around the room. Bryce E. Bayer was an inventor who patented a particular colour filter array in 1976, not a manufacturer of camera sensors. It was clear our chosen DSLR guru was a little short on details about the products he came to inform us about.
This lack of understanding isn’t isolated to industry news types or marketers; it’s alive and well among producers, directors and camera people. When it comes to choosing a format or system for a project, the decision can turn into a pitched battle over one system versus another. Too often, that discussion is based not on actual tests and results, but on misinformation, hearsay or simply a misunderstanding of marvellous-sounding specs.
“Two of the most common misconceptions about digital imaging specs have to do with confusion about Bayer pixels versus RGB pixels and bit-depth versus latitude,” says cinematographer Steve Yedlin (The Brothers Bloom, Brick). So much misinformation circulates because people often don’t have time to become experts on the subject and rely on quick summaries instead. Yedlin says, “But, the real information is complex and subtle and is betrayed by a brief gloss.”
And there is a lot to take in. From sensor technology and colour science to compression algorithms, resolving power and Nyquist limits, today’s digital cinema imaging technology is incredibly complex. All of these aspects of a given capture system can affect the images created, although the differences may be difficult to see.
What becomes important is understanding how a camera sensor captures images and using that knowledge to create robust images that will hold up through post and onto the screen the way you intend. A simple factor like the sensor’s native colour temperature (be it daylight or tungsten) can determine whether you get cleaner, less noisy images. Other times it might just be a matter of understanding the limitations of a camera’s processor and compression scheme.
Recently, a lot of attention has been paid to high resolution, with 4K becoming a common point of interest. Images made on a 4K sensor feel big and look great mostly because the size of the imaging area can create a depth of field previously seen only on 35 mm or larger film. But this look and feel are not directly related to the number of pixels, nor does 4K resolution equal sharper or better images. Undoubtedly a high pixel count is important for large images, but there is a point where adding pixels becomes redundant in terms of increasing image sharpness or quality.
“I think the most prominent gap is between specs and perceived image quality,” states Bob Primes, ASC, who has seen the progression of digital cinema from the earliest HD cameras to the most recent innovations. Primes, who is also an instructor at the American Film Institute, recently led the Single Chip Camera Evaluation (SCCE) in Los Angeles, an impartial series of camera tests that compared 12 digital cameras in a number of settings, including a variety of real on-set challenges.
According to Primes, the marketing is a numbers game, and it doesn’t always add up. In testing, the ALEXA came in low, at around 590 line pairs per picture height, while the RED tested numerically much higher. “Then when we took the images and blew them up and looked at them, the RED didn’t look any sharper than the ALEXA,” he says. “So the specs don’t correlate to what your eye sees.”
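To put that number in context, here is a rough back-of-the-envelope sketch (not from the article) of a sensor’s theoretical resolution ceiling. The Nyquist limit is half the photo-site count; the 2,880 x 1,620 figure used below is the ALEXA’s published 16:9 active photo-site area per ARRI’s specifications, and the function name is our own:

```python
def nyquist_lp_ph(vertical_photosites: int) -> float:
    """Theoretical resolution ceiling in line pairs per picture height.

    One line pair needs at least two photo sites to be resolved
    (the Nyquist limit), so the ceiling is simply half the vertical
    photo-site count. Real-world measurements land well below this
    because of the optical low-pass filter and Bayer interpolation.
    """
    return vertical_photosites / 2

# ALEXA's 16:9 sensor area is 2880 x 1620 photo sites (ARRI spec),
# giving a ceiling of 810 lp/ph -- the SCCE measured about 590.
print(nyquist_lp_ph(1620))  # 810.0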
Further discussion about why the ALEXA tested lower than expected prompted Primes to talk to its manufacturer, ARRI. Stephan Ukas-Bradley, manager of technical services, explained that ARRI purposely softened the image because people felt it made the results more film-like. This decision wasn’t something that could be gleaned from reading the specifications alone.
On the SCCE, Primes evaluated perceived sharpness without separating resolution from contrast. The two are inextricably linked as far as perceived sharpness is concerned: quantifying sharpness gets complicated when a low-contrast image with incredible resolution doesn’t look as sharp as a lower-resolution image with higher contrast.
“So boiling things down to perceived sharpness is essential. That is the bottom line,” Primes says. He adds that this cuts through a company’s marketing claim of “look how many lines of resolution we have,” which trades on tech hype purely for statistical bragging rights. It is not simply that bigger is better; what matters is the accuracy and efficiency of the information.
In most single-sensor cameras, the economical and practical way to record the primary colours is to place coloured filters over the individual photo sites, breaking the sensor up into an array of red-, green- and blue-filtered photo sites. This is called a colour filter array. By using the colour information from surrounding photo sites, a generally accurate guess can be made as to the RGB value of each pixel in its location in the output image. This process is called interpolation.
In the case of the Bayer pattern sensor, the colour filter array consists of alternating rows of red-green and green-blue filtered photo sites, so half of all photo sites are green and a quarter each are red and blue. As the green channel carries most of the luminance information, more photo sites are assigned to it. The colour information for red and blue is essentially subsampled, and the full colour image is then created through an interpolation process.
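To make the interpolation concrete, here is a minimal sketch in Python (using NumPy and SciPy; the function name is our own) of naive bilinear demosaicing on an RGGB Bayer mosaic. Real in-camera pipelines use far more sophisticated, edge-aware algorithms, so treat this only as an illustration of the principle:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Naive bilinear demosaic of an RGGB Bayer mosaic.

    mosaic: 2-D array of raw photo-site values laid out as
        R G R G ...
        G B G B ...
    Returns an (H, W, 3) RGB image.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)

    # Scatter each photo site into its own colour plane.
    rgb[0::2, 0::2, 0] = mosaic[0::2, 0::2]  # red sites
    rgb[0::2, 1::2, 1] = mosaic[0::2, 1::2]  # green sites on red rows
    rgb[1::2, 0::2, 1] = mosaic[1::2, 0::2]  # green sites on blue rows
    rgb[1::2, 1::2, 2] = mosaic[1::2, 1::2]  # blue sites

    # Fill the gaps in each plane by averaging the known neighbours
    # (bilinear interpolation expressed as a convolution; borders are
    # zero-padded, which a real pipeline would handle more carefully).
    kernel_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    kernel_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    for channel, kernel in ((0, kernel_rb), (1, kernel_g), (2, kernel_rb)):
        rgb[:, :, channel] = convolve2d(rgb[:, :, channel], kernel, mode="same")
    return rgb
```

Note that three quarters of each output pixel’s red and blue values, and half of its green values, are guesses made from the neighbours, which is why a 4K Bayer sensor does not deliver 4K of full RGB information.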
On top of that, the amount of information gathered at each photo site becomes important to the quality of the final image and the information it can reproduce. Too much attention is often given to the finest detail resolvable, but subtle tonal gradations matter as well.
This is where bit depth becomes an important factor. Getting good tonal representation means finer sampling. One common misconception is that bit depth equals dynamic range. It does not. Images sampled at higher bit depths can reproduce more shades or colours between the darkest black and the brightest white. For example, 8-bit sampling gives 256 different shades or steps from black to white; 10-bit increases that to 1,024. The result is smoother gradations and less chance of banding or rounding errors. Yedlin simplifies it: “Latitude is a measure of how much range of light intensity a sensor can see whereas bit-depth is a measure of how precisely it registers data within that range.”
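The arithmetic behind those figures is simply two raised to the power of the bit depth, as this quick sketch (ours, not from the article) shows:

```python
# Code values available at a given bit depth: 2 ** bits.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits:>2}-bit: {2 ** bits:>6} steps from black to white")
# Prints 256 for 8-bit, 1024 for 10-bit, 4096 for 12-bit,
# 16384 for 14-bit and 65536 for 16-bit.
```

Each additional two bits quadruples the number of steps available across the same brightness range, which is why banding that is visible in an 8-bit gradient can disappear at 10 bits.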
Every digital camera sensor takes a linear sample at this bit depth. From there, some cameras convert that into a logarithmic sample, which allocates more of the sample steps to the darker parts of the image and fewer to the bright end. This uses the sampling information more efficiently: the human eye detects smaller changes in the darker parts of an image, while changes in the brighter parts are not perceived as easily.
For example, a 14-bit linear sample from the sensor can become a 10-bit log sample in the file, with more of the bits allocated to the darker parts of the image and fewer to the brighter parts. This allows the perceptually meaningful information from the linear 14-bit sample to be retained in the 10-bit log sample. But one has to be careful not to confuse bit depth with latitude. Yedlin clarifies, “these two attributes which are often confused with one another are actually nearly opposites. Bit-depth is intensive whereas latitude is extensive.”
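A minimal sketch of the idea follows. The pure log2 curve below is an assumption for demonstration only; actual camera encodings such as ARRI’s Log C or Sony’s S-Log are published, parameterized formulas with linear toe segments, not this simple mapping:

```python
import numpy as np

BITS_LINEAR = 14  # sensor sample depth (the example from the text)
BITS_LOG = 10     # recorded file depth

def lin_to_log(linear_code: np.ndarray) -> np.ndarray:
    """Map 14-bit linear sensor codes onto a 10-bit log scale.

    With a pure log2 curve, each stop (doubling) of exposure gets an
    equal share of the output codes, so the shadows receive far more
    codes than a straight linear requantization would give them.
    """
    max_out = 2 ** BITS_LOG - 1
    stops = np.log2(np.maximum(linear_code, 1))  # 0 .. 14 stops
    return np.round(stops / BITS_LINEAR * max_out).astype(int)

# The brightest stop occupies half of all linear codes (8192..16383)
# but only ~1/14 of the log codes (950..1023), freeing the rest for
# the mids and shadows where the eye notices banding.
print(lin_to_log(np.array([1, 16, 256, 4096, 16383])))
# -> [   0  292  585  877 1023]
```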
Now, Bayer pattern interpolation and sensor bit depth reallocation are not necessarily bad things: these are sophisticated algorithms and computations. But understanding that these processes are going on is important. Having a 4K sensor doesn’t necessarily mean more information. And how the bit depth information from the sensor is being allocated plays a big role in the amount of information captured by the camera.
With all aspects of digital capture improving rapidly, the technology is changing, and “overall sensitivity and latitude are the areas that I think really count most,” says Primes. Excited about the low-light capabilities of recent cameras, he continues, “With that kind of sensitivity and increased latitude, the possibilities are really changing the way we shoot.”
Justin Lovell, an associate CSC member and founder of Frame Discreet Transfers in Toronto, likens evaluating digital cameras to evaluating film stocks. Each digital system or film stock has its advantages and disadvantages in a given situation, he says. Or the artist might simply be familiar with how a particular digital camera or film stock behaves, and that familiarity or experience leads them to choose that system.
In the end, the key is to evaluate the footage in the display venue intended for any given project. A side-by-side comparison is more valuable than charts or numbers. As Primes says, “It’s not that [the specifications] don’t at all correlate; it’s just that they don’t correlate very well.”
Originally from Nova Scotia, Guy Godfree is a cinematographer based in New York and Los Angeles.