"Debayering" special report

By Madelyn G. Most

La Lettre AFC n°234

Since their last meeting at Camerimage 2012, Philippe Ros, AFC, and Roberto Schaefer, ASC, AIC, have been carrying on a conversation about what really makes up the look of the captured image from various digital cameras, and how people in all disciplines of the industry, focused on 2K, 4K, or 8K capture or on which camera to use, are overlooking other important elements, such as :
- How and when the debayering is implemented after the recording of the files
- The curves implemented in the camera (i.e. Arri Alexa or Sony F35), or those implemented in post (i.e. Lin to Log with the Sony F65)
- Which system to use to transcode and ingest those image files.

Ros has toured 17 countries in Europe demonstrating how modified Log curves and different grading systems (Resolve, Colorfront, Lustre, Baselight, etc.) gave final results so wildly diverse that it must now be acknowledged that any discussion about image quality goes well beyond the choice of camera.

“I discovered I didn’t know much about debayering and demosaicing, and that each manufacturer has their own way of interpreting what goes into the camera and the way it comes out”, says Roberto Schaefer. “I think cinematographers should not be locked into decisions made by the manufacturers or post-production facilities. We should know how to alter the curves and be able to choose which digital laboratory to use for the post in order to understand the camera workflow all the way to the DI suite. If not, we are losing 90% of our real possibilities with any camera system. It’s akin to choosing the film stock, to flash or not, push or pull, to use ENR or any other special process, and especially to choose the film lab where you have some control over your negative so that it’s processed the way you desire.

“In the digital realm, we should know where we’re going to be when we finish the film and which company will be doing this work, and we should test beforehand to see the results before going into the final DI. At the moment we are seeing things we shot that looked one way and they are coming out looking totally different to what we expected”.
And THAT was the context for the almost two-hour online meeting on July 12th between Los Angeles, New York, and Paris, with a panel that included :
- Roberto Schaefer, ASC, AIC : a cinematographer who lives in LA and New Orleans,
- David Stump, ASC : a D.P., Visual Effects D.P., and Visual Effects Supervisor (PGA, SMPTE, ATAS, AMPAS), Chairman of the Camera and Metadata Subcommittees of the ASC Technical Committee,
- Michael Most : a VFX Supervisor and Digital Effects Supervisor, Location Services Director at Technicolor L.A., responsible for deploying dailies and finishing systems,
- Lars Borg : Principal Scientist at Adobe, focusing on color management and image processing technologies, developing solutions, specifications, and standards for digital cinematography,
- Mitch Gross : Applications Specialist at AbelCine in NY, responsible for camera technology integration and functionality,
- Robert Monaghan : CEO and Software Engineer at Glue Tools LLC, which creates software tools for digital cinema cameras, including camera control products and post workflow plug-ins,
- Philippe Ros, AFC : a French cinematographer and Imago Technical Committee member who works as a Digital Imaging Supervisor and writes and lectures on technical research explaining camera and workflow systems,
- Tommaso Vergallo : Managing Director of Digimage, a digital film lab and post-production facility in Paris that designs workflows for different camera systems,
- Madelyn Most : a camerawoman/photographer/filmmaker based in Paris and London who writes about cinematography and edited this discussion into this article, on which we all collaborated.

Debayering* is something many cinematographers have little knowledge of, so our conversation kicked off with “How many different ways and approaches are there to debayering* on any given camera ?”

“INFINITE”, replied Lars Borg. “Debayering is a process of uncertainty. What you have available coming out of the camera is only one third of the information captured on set ; two thirds is discarded by the camera sensor’s color filter array. The debayer algorithm’s purpose is to guess the missing pixel colors. Many methods have been developed, most of them making different assumptions about the missing colors.
Debayering is extra problematic for moving images. The image needs to look consistent from frame to frame but when a small object moves side to side, like a swinging wire or newspaper print moving in front of the lens, the sensor data changes from frame to frame. The debayer needs to reconstruct a consistent shape for the results to look acceptable.
The still image case is an easier problem because still imaging is much more tolerant to such objects, so a different set of methods can be used, optimizing other image qualities. An added challenge is that a Bayer pattern (red, green, green, blue) can have two different greens with different color sensitivities.
For many cameras, you cannot treat the two greens as being equal. If you do, then with some cameras you get worm patterns. The files won’t tell you those greens are not the same ; you find out for yourself when debayering. Ideally, you want to know why and when these worm patterns will appear, but the camera manufacturers do not usually provide that info.
Each camera has its own set of challenges so you rarely treat the cameras equally. A camera vendor might have a debayer method optimized for its camera, minimizing artifacts, enhancing sharpness, etc. That same method is often not optimal for other cameras. The optimal method is often proprietary and not available to all in post, so post might end up using a more general-purpose method, with fewer optimizations for this camera.”
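Borg’s “one third” figure can be made concrete with a toy sketch. This is an illustration under stated assumptions, not any camera’s actual pipeline : the function name `bayer_mosaic` and the RGGB layout are choices made here for demonstration. Each photosite keeps exactly one of the three color samples, so two thirds of the scene’s color information never reaches the file.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB color filter array : keep one color sample per
    photosite and discard the other two channels, as a Bayer sensor does."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G photosites (first green)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G photosites (second green)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B photosites
    return mosaic

rgb = np.random.rand(4, 6, 3)    # a toy 4x6 RGB frame
mosaic = bayer_mosaic(rgb)
print(mosaic.size / rgb.size)    # one third of the samples survive
```

Note that the two green populations are stored separately in the layout above ; as Borg points out, on some sensors they have different color sensitivities, which is why a debayer that averages them blindly can produce artifacts.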

Asked if Technicolor has different demosaicing processes for each camera, Michael Most explains that they have different systems for dailies and for finishing, but Technicolor has settled on Colorfront as a dailies system because its debayering algorithms were always very good. Technicolor has a color science department with people constantly discussing these issues, and as a DI company it was necessary to get one common platform to go through. DIs are not done from original files, because the DIs that Technicolor works on have very heavy visual effects content, so they are going to be RGB files by definition.

  • DeBayer : The demosaicing stage synthesizes RGB image pixels from the colors of the individual color-filtered photosites. Bayer (RG/GB) is the most common layout for a color filter mosaic, but since other color filter layouts are in use, the general name for this process is demosaicing (often loosely referred to as de-Bayering). Color information from all directly adjacent photosites is considered mathematically when converting discrete monochrome Red-only, Green-only, and Blue-only photosites into RGB pixels, synthetically creating interpreted RGB values for each photosite. The de-Bayer reconstruction process mathematically generates and assigns full-color RGB values to each single-color photosite, based on color information interpreted from neighboring photosites, using a wide variety of mathematical interpretations ; these are frequently trade secrets that differ from camera to camera and from software package to package.
  • Debayering  : At some point in a workflow that uses Bayer or CFA sensors, the DeBayer operation must be performed. It is up to your knowledge of the post-production tools and their capabilities to determine when to perform the DeBayer operation. A Bayer filter mosaic is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. Its particular arrangement of color filters is used in most single-chip digital image sensors used in digital cameras, camcorders, and scanners to create a color image. In the Bayer matrix, named after Dr. Bryce Bayer of Eastman Kodak, a single color filter is placed over each pixel in the mosaic pattern ; the process by which this image gets decoded from a color matrix (or “mosaic”) into a full-resolution color image is called “debayering” or “demosaicing”. There are many ways of reconstructing the color image, ranging from the simple to the exceedingly complex.
  • Arri’s technical site reads : “ArriRaw images (like all ’camera raw’ images) have only one channel. A color reconstruction algorithm calculates the missing components of each pixel based on the type and position of the array of colored filters on the camera sensor. The Alexa uses the Bayer pattern color filter array ; the term ’color reconstruction’ therefore is also known as ’debayering’. The Bayer pattern filters the light hitting the sensor so that 50% of the sensor’s photosites are used to represent green, 25% of the photosites represent red, and the remaining 25% represent blue. The output quality of the image depends on the debayering algorithm…”
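As a companion to these definitions, here is a minimal sketch of the simplest reconstruction method, bilinear interpolation over an assumed RGGB layout. Production debayers are edge-aware and far more sophisticated ; this toy only shows the neighbor-averaging idea the glossary describes, and the function name and layout are illustrative assumptions.

```python
import numpy as np

def bilinear_demosaic(mosaic):
    """Naive bilinear debayer for an RGGB mosaic : each missing color
    value is the average of the nearest photosites of that color."""
    h, w = mosaic.shape
    # Masks marking which photosite carries which color (RGGB layout).
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    def interpolate(plane, mask):
        # Average each pixel's 3x3 neighborhood, counting only photosites
        # that actually carry this color (image edges zero-padded).
        total = np.zeros((h, w)); count = np.zeros((h, w))
        padded_v = np.pad(plane * mask, 1)
        padded_m = np.pad(mask, 1)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                total += padded_v[dy:dy + h, dx:dx + w]
                count += padded_m[dy:dy + h, dx:dx + w]
        return total / np.maximum(count, 1)

    # One full-resolution plane per channel, stacked into an RGB image.
    return np.dstack([interpolate(mosaic, m) for m in (r_mask, g_mask, b_mask)])
```

A quick sanity check of the design : feeding it a flat gray mosaic returns a flat gray RGB image, since every neighborhood average equals the constant value. It is exactly on edges and fine detail, where neighbor averaging smears shapes, that the elaborate proprietary methods discussed in this article earn their keep.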

The method adopted for DI is that RAW camera files are not usually used directly for grading in the DI process ; they are debayered beforehand. “There is a little bit of compromise on occasion and it’s not always perfect for everything that comes down the pipeline, but it needs to be that way for the creative good. In a large company you need to have certain standards and methods known to the entire team. I’m jumping the gun here, but I would be the first to say I would love to see more open formats, but not all camera manufacturers hand over basic raw information”, says Most.

Philippe Ros maintains that it’s important to consider not only the different types of debayering coming from different types of cameras, but the actual positioning of debayering in the process, especially in 4K. “It’s not something I recommend doing during the grading of a movie. It is not only a technical, but an artistic step in the process which requires time. My experience with the F65 showed me that we have to be sure the debayer process has been correctly managed. I extensively researched and tested debayer and calibration on the F65 for a very long time and my eyes got accustomed to the dynamic range of this camera and its high performance during the grading.
This leads me to conclude that some debayer programs were not correctly processing the RAW files to RGB. Many companies want to provide real-time debayer, which can save time, but in 4K the faster way is not always the better one. This has to be done carefully, and at a separate time, to be sure the reconstruction and color processes are giving you the best possible image, and cinematographers must have some control over that. Generally in postproduction we talk about debayer, calibration, and color process, and there is often a lot of confusion for cinematographers between these words and the different steps of the processes.”

But this is not an absolute : one cannot say this is a proper debayer and that is not, because it depends on the particular type of photography at a given moment. For instance, if you are shooting green screen, high detail, or landscapes versus something conventional like a person in a space, you may want a different debayer algorithm, and one might be more effective than another in certain situations at making a realistic image.
“We will sometimes use different debayer approaches for visual effects elements to optimize the images for matte extractions. The goal is to be optimal in every shot in every picture in every project, but if you were to treat every single shot in every single DI individually, every DI would take a year and a half, so there is a practical perspective to it. It has to get done”, says Mike Most.

When establishing the correct place of the debayer in the process, Robert Monaghan counters : “You will be limited to what the camera manufacturers recommend. They’ll provide you with arsenals of software and debayer algorithms, but it will all depend on the limitations of the hardware you purchase.
The quality and cost of your hardware are linked to the technology you are using and determine the image performance you can achieve”. Obviously, some cheaply designed 4K cameras are tied to a certain technology, whereas someone with an F65 or an Arri will get a different image performance because those cameras have different capabilities.

Lars Borg asserts : “The ‘best quality’ is generally situation-dependent or subjective. The methods that outperform in details may underperform, for example, for moving wires, green screen, noise, or with other sensors. Other methods might outperform in other scenarios. It would be a challenge to evaluate a method for every possible usage scenario, for every image attribute, and for every camera, as would be necessary in order to determine a method’s worst-case performance or its guaranteed ‘minimum quality’. Developing such a complete and objective evaluation protocol would be a welcome, but very big, undertaking”.

Philippe Ros says he disagrees with Lars Borg’s statement that ‘the best quality is subjective’, or even that the minimum quality can be subjective. “Arri recommends using Colorfront because they have found it works better than their own process and that’s a very positive step, because when a manufacturer suggests a certain method, (or reference) to the labs and cinematographers, we know it will be reliable. It instills confidence and trust. These days, so much pressure is put on cinematographers to work quickly and they are not given the time to test the lab or the camera so having some files provided by the manufacturers could be enormously helpful and time saving.
Believe me, I toured around Europe with the F65 and I saw what was lost by many labs in these debayering processes and calibrations. In the beginning, some rental companies didn’t even want to rent their cameras because no labs were providing correct debayer and calibration processes for them ! The same thing occurred at the very beginning, when Arri RAW files were first used. I personally faced strange situations when working with new versions of the Red Epic without having the correct software to de-Bayer the files - I saw my Red Epic dailies with three fewer stops of dynamic range !
So, yes, as cinematographer, I’m really concerned about debayer. I do agree that for each type of sequence, a different debayer can be used to improve the image. No company is able to provide the best tools all the way through the workflow. All the improvements are made by different people, facilities, or labs that have the time and the means to dig into the processes.”

While debayering is individual and unique to each project, and part of the creative process for the DP, performing a debayer requires a tremendous amount of mathematics and can be complicated. What counts most are the results of the different algorithms, but at present there is no way to rate those results. There are no guidelines for cinematographers recommending one debayer algorithm for this type of photography and another for that type.
“Cinematographers would like to know the possible debayer options for the different cameras they use. It would be useful to make a rating scheme with the kind of results to expect with a certain camera, or to suggest a certain algorithm to use for a specific camera under certain circumstances, so that they could decide which way to debayer using a particular camera on a particular project. The problem of ‘secret sauce’ in the labs or with camera and software manufacturers evades quantification in a lot of ways”, says David Stump.

The problem is that nobody has found the perfect method yet. “Software developers are continuously trying to find a better way to do this, removing an artifact over here, reducing something over there, but in many cases these are trade-offs. And with over 400 cameras shooting raw, it’s not realistic to evaluate a method with all cameras and all image types”, adds Lars Borg.

“Consistency is what matters most. Many cinematographers don’t even care what those settings on the camera are. All they want is to get to one point, a good base level of settings, and never touch them again. They will light and frame and do everything else around those settings. They can take one other factor out of the mix, because to go to one debayer, then another one and get another result… That’s a tragedy”, says Mitch Gross.
And there are too many moving parts. First is the camera and what’s provided by camera manufacturers to reproduce an image. Then there is the post package that takes the raw data and makes an image, more often than not using a completely different debayer from the one in the camera used for monitoring. And then there are DITs or “Data Wranglers” on or near set making decisions on how to debayer and transfer the images, which will affect everything down the line. “This is the point about the many steps in the workflow : they are not only mathematical processes. We also have to mention the importance of a lab as a structure, and that today many productions think they can get rid of the structure of the laboratory”, says Ros.

Tommaso Vergallo adds : “There is a difference between processing in the lab and doing a debayer on whatever computer is available. If you think that processing a 35mm negative material is equivalent to the debayer process of a digital file, (which I do not), you don’t need a traditional laboratory like in the past. Historically speaking, at that time you had no alternative to the film laboratory, because the process and printing machines and the overall know-how did not exist anywhere else.
Today the machines for processing images (a boosted standard computer with a good graphics card) are available and cheap for everyone on the consumer market. This does not mean that the result is the same everywhere, but the image treatment can be done at any point in the workflow, from the shooting location to the editing suite or a home lab.
Of course you still need someone experienced, knowledgeable, and well trained, with a sensitive eye, able not only to manipulate a computer but also to judge differences in quality and workflow. These people are still quite rare ; our digital film lab is teaching and training them every day. That’s the reason why I think our expertise still cannot be bypassed or overlooked. A digital film lab is a neutral place. We don’t sell any particular brands, we maintain an independent point of view, knowing how to optimize the treatment of whatever digital file is coming out of whatever camera”.

“When you lose the structure of the lab, you lose that repository of talented people all in one place, with different levels of expertise that complement each other. It hasn’t been like that for a while now, and it’s never been like that in the electronic world”, says Most. We all know there is a problem with electronic cameras - there is no central source of trusted knowledge.
When a problem occurred on film, the film manufacturer had such deep knowledge of their product that they could tell you definitively if it was a base scratch, flicker, this camera, that light. When there are issues on electronic cameras, the questions are : is it the camera ? The sensor ? The filter, or lack of one ? The physical characteristics ? The settings ? The electronics ? The files ? The software ? The computer ? With all of that, who do we go to ?
No one knows, but with all those cheap cameras being dumped on the market, this is going to affect the image performance, color science - the applications that we all depend upon to create these images… and it will only get worse.

It’s surprising today how little people know, even at very high levels, not only about color space but other important areas. They just don’t know the answers and this is running throughout the industry all over the world right now. “It’s human nature. No one wants to admit they don’t know.
You are supposed to be the expert that non-technical people turn to. One would hope that these players (camera, software, and firmware manufacturers, post-production houses, sales or rental facilities) would know a lot about these cameras or their products, and what the pluses and minuses are in given situations… Unfortunately, many don’t”, Mitch Gross adds.

These days, it’s impossible to be an expert on everything because there are too many cameras out there that are changing all the time. There is no way to know all the debayer processes for all the cameras.
“Here in Europe, we are convinced that we need references for these processes to make sure we are getting the best image quality with specific cameras. This doesn’t exist, and it is a huge problem. We at the Imago Technical Committee are working on a proposal asking all camera manufacturers to provide labs and cinematographers with material containing raw and processed images, to ensure the minimum quality that they themselves recommend for their product”, says Ros.

“I wasn’t aware that different labs had different processes, and that some cameras do demosaicing inside while others do it outside ; when I start a production, I want to understand beforehand what I can and cannot do. On a recent film, I didn’t get to do tests at the beginning to see what it was going to look like. I was told by the producers that we’d do dailies at this lab (actually a portable lab on location), but they hadn’t “done the deal” yet, so I didn’t know until two weeks before I started the DI what facility we would use.
Only then did we find out that company had a completely different system, and everything was different from what we had done and what we had looked at. On top of that, the editor was pushing his whole agenda in the Avid of what things should look like - the look, what the brightness should be - and he even told me I should shoot on a 5K camera because he “would need to reframe, get singles, change the image, make close-ups” - and this was even before we started shooting”, says Schaefer.

Many complain there is a lack of discipline in the digital world. Might it be an idea to return to the days when experts - experienced, statured, knowledgeable, older ? - cinematographers, directors, and producers were employed on projects, and to having finite limitations, like the days when people had only a certain amount of film stock, so they had to plan their shots and know what they were doing ?
Today, digital cinematographers and directors shoot from the hip, saying “Keep rolling, I’ll know it when I see it”, and you end up with endless hours of material on a hard drive (some usable, lots unusable), and it is totally chaotic. There are a lot of people just waiting for their chance to swoop in and take over control.

“Cinematographers want to be creatively involved, with their input and viewpoints”, Philippe Ros adds. “We want to go back to the manufacturers and get them to open up parts of the system so we can get into them. We know we can change the sharpness in the Alexa RAW process, but we don’t have this possibility with the Sony debayer, and we don’t have it in a certain part of the Epic process.
This is a big problem, and we’re not talking about technical problems. We are talking about texture, about the look of the image, and about artistic points of view - things that are still very much in the domain of the cinematographer”.

“There was a proposal to open the formats, which would allow the user to analyze a file in any software they preferred - and there are hundreds of formats - but many vendors decided to go for a private format, which is an extreme example of a totally closed system. Opening up the file formats so that other applications can be used would allow the cinematographer to select a vendor and to select the debayer that looks best to them, but that requires the possibility of mix & match, and that’s not possible for proprietary files unless their specifications are published”, added Borg.

“What cinematographers need to understand is that you’re not talking about one gi-normous process that happens on your image. We are talking about several different processes”, says Stump. “There is the debayer process which is independent of the file format, there is the color science involved which is very dependent on the actual hardware, there is the chemical recipe of the color mask dyes used on the sensor and then there is error correction and noise reduction and black subtraction.
Then there are two different places to do the sharpening pass : you can do it on the Bayer-pattern sensor data, or you can do it on the RGB images. If you want to sharpen in the debayer process, certain debayer algorithms will sharpen as well as de-Bayer at the same time. It is not so much ‘secret sauce’ as that there are a gazillion different ways to do things, and the combination of each of these millions of little variables will give you a different result. There are too many moving parts”, he maintains.
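The two sharpening placements Stump describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual tool : `unsharp` here is a generic unsharp mask, and `some_debayer` stands in for whatever reconstruction the pipeline uses.

```python
import numpy as np

def unsharp(img, amount=1.0):
    """Simple unsharp mask : boost the difference from a 3x3 box blur.
    Works on a 2D Bayer plane or on a 3D RGB image."""
    h, w = img.shape[:2]
    pad = ((1, 1), (1, 1)) + ((0, 0),) * (img.ndim - 2)
    padded = np.pad(img, pad, mode="edge")
    blur = np.zeros_like(img, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            blur += padded[dy:dy + h, dx:dx + w]
    blur /= 9.0
    return img + amount * (img - blur)

# Placement A : sharpen the raw Bayer plane, then debayer.
#   rgb_a = some_debayer(unsharp(mosaic))
# Placement B : debayer first, then sharpen the RGB result.
#   rgb_b = unsharp(some_debayer(mosaic))
```

The two orders generally do not commute : in placement A the filter mixes neighboring photosites that carry different colors before reconstruction, while in placement B it operates on already-interpolated RGB values, so the resulting texture differs, which is part of why the same footage can look different from one pipeline to the next.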

Mitch Gross says : “Certainly there is not a ‘best’ debayer or a ‘best’ camera, otherwise the others would be out of business. We are in a world of many more variables than before and it is now full of recommended practices rather than any control strictures. The “plus” is more options and the “minus” is loss of control and pitfalls. Progress is good but it can also lead to chaos.
Our company has created scene files : we have four base standard looks and we try to create something as close as possible to them on a variety of cameras because that makes a great starting point. We have profiled 20 different camera models over the years. It’s not exactly ACES, but it is a working step towards getting the cameras to speak similar languages.
Of course if you have one camera with a huge range and another with a limited one, they can’t fully match, and you end up limiting the range of the more capable camera. Some of these cameras allow for literally thousands of permutations with their menu settings, but by using a few base standards, a DP need only worry about adjusting a few settings to tweak the image to their preference. I’d love to work with a few post software companies to get these implemented as matching post tools for RAW cameras”, he says.

Because everything seems to change every six months, the cinematographer is lost in a sea of variables. It’s impossible to keep up with all the changes. It’s not only the refresh/reinvent/firmware update issues, the manufacturers keep changing the paradigm, and we’re not sure it’s because we asked for it. It is good old fashioned capitalism and corporate competition.
Everyone is trying to sell their product as the newest, the best, better than the rest (and making false or exaggerated claims about what their cameras can do). It’s not just the big three’s firmware and hardware updates but also the additional players that keep showing up - and that’s just the camera manufacturers.
Add to that the color correctors that change the algorithms and other things, and each lab that also tweaks what is given to them by the manufacturers. It is truly a Wild West of innovation, experimentation, and radically varying results.

“I would guess that 95% of the cinematographers out there today aren’t even aware of any of this”, adds Schaefer. “I, too, don’t want to have to think about 9,500 of those 10,000 adjustments mentioned, but I do want to know what to do if I want to change from look A to look B and bake that into my files, even my RAW files.
I have tried to navigate one manufacturer’s menus but they drive you crazy because they’re so non-intuitive, and with another’s, once you get into the real menu, there are too many unclear items with unknown results. “Knowledge is power” and allows us to be more in control of our final product.
Because so much is being taken away from us in post, we must try to get what we want up front. We need to educate ourselves to understand all the processes, the math, and the many different steps toward what the final look is going to be, so that we go in armed with knowledge and understanding of how it works - and then, for whatever we can affect, go to a lab and do some testing beforehand. We need some guidance, some standards and references, to know where we are and where we’ll end up”, concludes Schaefer.

Tommaso Vergallo, punctuating the end of the conversation, added :
“The best way to predict the cinematographer’s future is to invent it !”

Madelyn G. Most, August 28, 2013