For those of us who image with one-shot color (OSC) CCD, CMOS, or DSLR cameras, every sub-exposure must go through a process called debayering. But what is this process, and why is it so important for the data these cameras produce?
In the mid-1970s, a Kodak scientist named Bryce Bayer developed what is known as the Bayer matrix, patenting it in 1976. The Bayer matrix is an arrangement of red, green, and blue color filters placed over a grid of photosensors (pixels), designed so that the sensor responds to light much as the human eye does. The most common arrangement is the Red-Green-Green-Blue, or RGGB, pattern, in which each 2x2 block of pixels contains one red filter, two green filters, and one blue filter. Because the human retina is more sensitive to green light, this pattern became the standard for color photosensors.
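To make the layout concrete, here is a small sketch that tiles the 2x2 RGGB cell into a mosaic the way a sensor does; the `bayer_pattern` helper is purely illustrative, not part of any camera driver:

```python
import numpy as np

def bayer_pattern(rows, cols):
    """Build an RGGB Bayer mosaic as a character grid.

    Each sensor pixel sits under exactly one color filter; the whole
    sensor is this 2x2 cell repeated. (rows and cols assumed even.)
    """
    cell = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(cell, (rows // 2, cols // 2))

print(bayer_pattern(4, 4))
```

Note that half of all pixels end up green, twice as many as red or blue, which is exactly the bias toward green that mimics the eye's sensitivity.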
The raw output of these cameras is formatted so that each pixel records only one color. The problem is that a single color-filtered pixel, on its own, cannot tell you the full color at that location. You need an algorithm that interpolates the values of the surrounding red, green, and blue pixels to estimate the two missing color values at each pixel, producing a full-color image from the raw sensor data. This is exactly what happens when PixInsight users execute the Debayer process. The process offers several algorithms, also called demosaicing methods; the one that works best most of the time is VNG (Variable Number of Gradients), which is the default setting in PixInsight's Debayer process.
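VNG itself weighs gradients in several directions and is too involved for a short example, but the core idea of demosaicing, estimating each pixel's two missing colors from nearby same-color pixels, can be sketched with the simplest method, bilinear interpolation. This is an illustration of the concept, not PixInsight's implementation, and the function names are mine:

```python
import numpy as np

def _convolve3x3(img, kernel):
    # Zero-padded 3x3 convolution built from shifted slices, so the
    # example needs nothing beyond NumPy.
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def bilinear_demosaic(raw):
    """Bilinear demosaic of a 2-D RGGB mosaic into an (h, w, 3) image.

    Each missing color value is the average of the nearest same-color
    neighbors; measured samples are kept exactly.
    """
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    masks = {
        0: (y % 2 == 0) & (x % 2 == 0),   # red sites
        1: (y % 2) != (x % 2),            # green sites
        2: (y % 2 == 1) & (x % 2 == 1),   # blue sites
    }
    ones = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for c, mask in masks.items():
        plane = np.where(mask, raw, 0.0)
        known = mask.astype(float)
        # Normalized convolution: sum of known same-color neighbors
        # divided by how many of them fall in the 3x3 window.
        rgb[..., c] = _convolve3x3(plane, ones) / np.maximum(
            _convolve3x3(known, ones), 1e-9)
        rgb[..., c][mask] = raw[mask]   # keep measured samples exact
    return rgb
```

The takeaway is that every output value not actually measured is a weighted mix of its neighbors; smarter methods like VNG choose those neighbors adaptively along edges instead of averaging blindly.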
The key to using the Debayer process is to run it after you have calibrated and cosmetically corrected your images. Image calibration (applying bias, dark, and flat frames to the light frames) is a pixel-by-pixel process: each pixel in a light frame is compared with, and corrected by, the corresponding pixel in each calibration master. If we debayered before calibrating, the interpolation performed by the debayering algorithm would mix neighboring pixels, destroying that one-to-one correspondence and ruining the calibration.
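A minimal sketch makes the ordering argument concrete. The arithmetic below is a simplified version of calibration (real tools such as PixInsight's ImageCalibration also handle bias subtraction, dark scaling, and output pedestals), and the function and variable names are assumptions for illustration:

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Simplified per-pixel calibration of one light frame.

    Pixel (i, j) of the light frame is corrected only by pixel (i, j)
    of each master frame. Debayering first would replace each raw pixel
    with an interpolated mix of its neighbors, so this one-to-one
    pixel correspondence would no longer hold.
    """
    flat = master_flat / master_flat.mean()   # normalize the flat field
    return (light - master_dark) / flat
```

Because each Bayer pixel sits under its own color filter and has its own dark current and vignetting response, the masters must be applied to the raw mosaic, and only then is the result debayered.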
After debayering your sub-exposures, you continue processing as usual: register (align) the frames, then stack them with image integration.