WalterT I don’t know what that means so when I get back home I’ll Google it.
Except for the Foveon sensor, whose sensitivity is poor even for terrestrial photography, there is really no such thing as a color sensor.
Virtually all CCD and CMOS sensors begin life as a monochrome sensor.
In the 1970s, Bryce Bayer of Kodak (back when Kodak was just starting to build digital cameras) came up with a bright idea for building color cameras from monochrome sensors -- deposit micro color filters on top of every pixel. His original patent gave an example of depositing a repeated pattern of 2x2 teeny tiny color filters on top of a monochrome sensor. Of the four pixels in the array, he assigned two to green, one to red, and one to blue (the opposite of what you want for astronomy :-) because luma = 0.59G + 0.30R + 0.11B in color TVs, making green the most important color.
This gives us a repeated 2x2 array of
RG
GB
which is abbreviated RGGB nowadays. There are other derivatives of this with different color sequences, and even non-2x2 shapes (the PenTile Bayer pattern, for example).
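To make the sampling concrete, here is a small sketch (my own illustration in Python/NumPy, not from the patent) of what an RGGB filter array does to a full-color scene -- each pixel of the monochrome sensor records only the one channel its filter passes:

```python
import numpy as np

def bayer_mosaic_rggb(rgb):
    """Simulate an RGGB Bayer sensor: keep one color channel per pixel.

    rgb: H x W x 3 array (H and W even). Returns an H x W monochrome mosaic.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd row, odd col
    return mosaic
```

Note that half the pixels end up sampling green, matching Bayer's two-green weighting.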
When you try to reconstruct the original color image, you use filtering and interpolation techniques to create a synthetic image where each pixel has all three color components, even though each pixel started with just one.
In honor of his invention, we call this reconstruction process today debayering or deBayering. (A more formal name is "demosaicing.")
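There are many demosaicing algorithms of varying sophistication. As a crude stand-in for real per-pixel interpolation (my own sketch, assuming the RGGB layout above), you can treat each 2x2 cell as one full-color sample and replicate it over its four pixels -- which also makes the resolution loss obvious:

```python
import numpy as np

def demosaic_blocks_rggb(mosaic):
    """Crude demosaic: treat each RGGB 2x2 cell as a single full-color
    sample and replicate it over the cell's four pixels.  Real algorithms
    interpolate per pixel; this only illustrates the reconstruction idea.
    """
    h, w = mosaic.shape
    rgb = np.empty((h, w, 3), dtype=float)
    r = mosaic[0::2, 0::2]                            # one R per cell
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2 # average the two Gs
    b = mosaic[1::2, 1::2]                            # one B per cell
    for ch, plane in enumerate((r, g, b)):
        rgb[..., ch] = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    return rgb
```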
This is why you cannot approach the resolution of a monochrome camera by using a one-shot digital camera.
If you take a raw "color" image that has not gone through deBayering, you will see this 2x2 pattern (the Sony sensors that ZWO buys use the RGGB pattern). (Notice that if you invert the image, the pattern becomes GBRG.)
In predominantly red regions, the pixels under the green filters and the blue filters will be very dark. You can see some very dark pixels in your image, and since there is only one per 2x2 array, I surmise the dark pixels are either red or blue -- i.e., the region you are imaging has either no red content or no blue content. If it were missing green, you would see two dark pixels in each 2x2 array.
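You can tell which filter sits over any raw pixel just from the parity of its row and column, which is handy for checking whether those dark pixels land on the red or the blue sites. A tiny sketch (assuming row 0, column 0 is the red corner of the RGGB tile; some cameras start the tile at a different corner):

```python
def rggb_channel(row, col):
    """Return the filter color ('R', 'G', or 'B') over pixel (row, col)
    in an RGGB mosaic, assuming (0, 0) is the red corner of the tile."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'
```

So if the lone dark pixel in each 2x2 cell sits at an odd row and odd column, it is under the blue filter, and the region has little blue content.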
See this seminal patent (arguably the most influential patent of the late 20th century -- all the selfie takers would still be shooting black and white pictures if not for Mr. Bayer (RIP) :-)):
https://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=3971065.PN.&OS=PN/3971065&RS=PN/3971065
As usual, Wikipedia is your friend:
https://en.wikipedia.org/wiki/Demosaicing
Chen