Nothing generates more discussion on Cloudy Nights or any other astrophotography forum than someone asking what the best gain and offset settings are for their particular camera. This is not an easy question to answer, and it calls for a more detailed discussion of gain and offset.
We have to turn all the photons we’ve collected in a pixel into information we can use to produce the image. This is done via the camera’s Analog to Digital Converter, or ADC. The captured photons are the analog part of the signal that we want to convert to a digital signal; we then send the digital data to our computer, where PixInsight, Photoshop, or whatever image processing program you prefer can process it and build it into an image. To do this, the photons in the pixel are converted to electrons. The electrons create a voltage difference that we measure to give us a brightness (intensity) value. This happens when the pixels are read out by the ADC.
The ADC is programmed with levels that correspond to its bit depth. The bit depth determines the number of possible gray values that can be read out: an 8 bit ADC provides 2 raised to the power of 8, or 256, possible values, while a 16 bit ADC provides 2 raised to the power of 16, or 65,536. These values are whole numbers - there are no fractional electrons or photons - and this restriction to discrete steps is called quantization. The ADC’s levels are distinct in the same way: it takes multiple whole photons / electrons to fill one, and any well depth level that is only partially filled is rounded down to the nearest ADC level. If you have many more well depth levels than ADC levels, the chance that a partially filled well depth level gets rounded down increases, and when that happens you lose the information it held. You should also be aware that the quantum efficiency of a camera is not 100% but something well below that, so it takes even more light from our target to make a noticeable difference. None of this is good if we’re trying to get all the data we can from an object in the most efficient manner.
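The rounding-down described above can be sketched in a few lines of Python. The full well and bit depth here are illustrative assumptions, not the specs of any particular camera; the point is that when many electrons must accumulate to advance one ADC level, the electrons in a partially filled level are simply discarded.

```python
# Minimal sketch of ADC quantization loss (hypothetical numbers):
# a pixel with an assumed 51,200 e- full well read out by an 8-bit ADC.

FULL_WELL = 51_200                              # assumed full well, electrons
ADC_LEVELS = 2 ** 8                             # 8-bit ADC -> 256 output values
electrons_per_level = FULL_WELL / ADC_LEVELS    # 200 e- per ADC step

def quantize(electrons: int) -> int:
    """Partially filled levels are rounded down to the nearest ADC value."""
    return int(electrons // electrons_per_level)

print(quantize(199))   # 0 -> 199 captured electrons read out as nothing
print(quantize(200))   # 1
print(quantize(399))   # 1 -> the 199 extra electrons are lost
```

With these assumed numbers, anything short of a full 200-electron step contributes nothing to the output value, which is exactly the information loss the paragraph describes.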
So what happens when we add gain? We artificially decrease the well depth - artificially shrinking the size of the pixel bucket. Nothing changes physically, but as we reduce the number of well depth levels, we also stretch them to cover the full range of the ADC. This creates a better match between the number of well depth levels and the number of ADC levels: each captured electron corresponds more closely to its own ADC level, reducing the chance that information is rounded down and lost. In other words, more of the captured photons / electrons actually make it into the readout. This is why the camera may seem more sensitive when you increase the gain, but sensitivity is governed only by the quantum efficiency of the camera, which is set by the manufacturer.
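One common way to express this matching is the e-/ADU ratio: how many electrons it takes to advance one ADC output step. The sketch below uses assumed numbers (a 51,200 e- full well and a 12-bit ADC, common on CMOS cameras) to show how raising gain shrinks the effective well depth and drives the ratio toward one electron per ADC level.

```python
# Sketch (illustrative numbers) of how gain changes the e-/ADU ratio.
# Raising gain divides the effective well depth while the ADC's level
# count stays fixed, so fewer electrons are needed per ADC step.

FULL_WELL = 51_200        # assumed full well, electrons
ADC_LEVELS = 2 ** 12      # assumed 12-bit ADC -> 4096 levels

def electrons_per_adu(gain_factor: float) -> float:
    """Electrons needed to advance one ADC level at a given gain."""
    effective_well = FULL_WELL / gain_factor
    return effective_well / ADC_LEVELS

print(electrons_per_adu(1.0))    # 12.5 e-/ADU at base gain
print(electrons_per_adu(12.5))   # 1.0 e-/ADU -> each electron counts
```

At base gain here, 12.5 electrons must pile up before the output changes; at 12.5x gain, every electron moves the output, which is why the camera appears more sensitive even though no extra photons were captured.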
So what is the bottom-line answer to the question of which gain and offset settings are best? Use settings that give you a number of well depth levels less than or equal to the number of ADC levels. If you know your camera's Unity Gain setting, use it as a reference point to start from and then refine things as you feel you need to.
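The rule of thumb above can be turned into a quick back-of-the-envelope check. The numbers are assumptions for illustration, not any specific camera's specs: unity gain is simply the gain at which one electron equals one ADC step, which is where the well depth levels stop outnumbering the ADC levels.

```python
# Back-of-the-envelope check of the rule above (assumed numbers):
# pick a gain so effective well depth levels <= ADC levels,
# i.e. 1 e-/ADU or less. The boundary case is unity gain.

FULL_WELL = 51_200        # assumed full well, electrons
ADC_LEVELS = 2 ** 12      # assumed 12-bit ADC -> 4096 levels

# Gain factor at which one electron advances exactly one ADC level.
unity_gain_factor = FULL_WELL / ADC_LEVELS
print(unity_gain_factor)  # 12.5 -> gains at or above this satisfy the rule
```

Any gain at or above this factor keeps the effective well depth within the ADC's range, which is why unity gain makes a sensible starting point before you refine for your targets.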
If you want more detail on this topic, go to the Cloud Break Optics website and read through the Astrophotography Pixel by Pixel Series. Much of the info I've communicated here came from that material. It is an excellent series and will provide you a wealth of information to use to your benefit.