Hi,
1st, as we know, in a color image sensor each pixel captures only one color: red, green, or blue. The pixels are usually arranged in the pattern that Bayer invented. That's why we need to know a color camera's Bayer pattern.
2nd, a 14-bit sensor means each pixel's value is represented by 14 bits (14 '0's or '1's).
3rd, a computer system usually stores data in 8-bit, 16-bit, 32-bit, or 64-bit units.
Now we know that for each pixel we get a 14-bit value of one color channel. For an image of resolution W*H, that is W*H*14 bits in total.
If we want to save the data in 8 bits per pixel, the space is W*H*8 bits, which is not enough. So we have to drop 6 bits for each pixel; usually we drop the low 6 bits.
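A minimal sketch of that step, assuming each 14-bit sample is held in a plain Python int (the function name `to_8bit` is just for illustration):

```python
def to_8bit(sample14):
    """Reduce a 14-bit sample to 8 bits by shifting out the low 6 bits."""
    return sample14 >> 6

print(to_8bit(0x3FFF))  # full-scale 14-bit value (16383) -> 255
```

Shifting keeps the 8 most significant bits, so full scale in 14-bit maps to full scale in 8-bit.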
If we want to save the data in 16 bits per pixel, the space is W*H*16 bits. Now the data does not fill the space, so we pad each pixel with 2 '0' bits.
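A sketch of that packing, assuming the 14-bit value sits in the low bits of a little-endian 16-bit word with the 2 spare high bits left at zero (function name and byte order are my assumptions, not from the post):

```python
import struct

def pack_14bit_to_16bit(samples):
    """Store each 14-bit sample in a little-endian uint16;
    the two unused high bits stay at zero."""
    return struct.pack('<%dH' % len(samples), *[s & 0x3FFF for s in samples])

buf = pack_14bit_to_16bit([0, 0x1234, 0x3FFF])
print(len(buf))  # 3 samples * 2 bytes each = 6 bytes
```

Some systems instead shift the value left by 2 so it fills the full 16-bit range; either way, no precision is lost.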
If we want to save the data as 24 bits per pixel, the space is W*H*24 bits. But for RGB24, each color channel only has 8 bits, so it is really W*H*3*8. Now the pixel's own channel only has 8 bits of space, so we again have to drop the low 6 bits. And we still need the other two channels' data. Usually we take them from the nearest pixels of the same color. For example, if this pixel is red, its green value will be taken from its adjacent green pixels, and the same for the blue value.
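The nearest-neighbor idea above can be sketched like this, assuming an RGGB Bayer layout (the layout choice and all function names are illustrative assumptions; real sensors may use GRBG, GBRG, or BGGR):

```python
def bayer_color(y, x):
    """Color of the pixel at (y, x) in an assumed RGGB mosaic:
    even rows are R G R G ..., odd rows are G B G B ..."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic_rggb_to_rgb24(raw, w, h):
    """raw: list of h*w 14-bit samples, row-major.
    Returns a list of (r, g, b) 8-bit tuples: each pixel keeps its own
    channel and borrows the other two from the nearest same-color pixel."""
    offsets = [(0, 0), (0, 1), (1, 0), (0, -1), (-1, 0),
               (1, 1), (1, -1), (-1, 1), (-1, -1)]
    out = []
    for y in range(h):
        for x in range(w):
            px = {}
            for dy, dx in offsets:  # search outward from the pixel itself
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    c = bayer_color(ny, nx)
                    if c not in px:
                        px[c] = raw[ny * w + nx] >> 6  # 14-bit -> 8-bit
            out.append((px['R'], px['G'], px['B']))
    return out
```

This is the crudest possible demosaic; real pipelines interpolate (bilinear or better) rather than copying a single neighbor, but the space math is the same.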
Usually these 3 kinds of data are enough, but sometimes our customers do not want to lose any bits of a pixel. In other words, they want W*H*16*3, just like RGB24 but with 16 bits per channel. I think it is now easy to see how to get the value of each channel of each pixel.
At the end, for the length, we need to divide by 8 to convert bits into bytes.
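Putting the size formulas together, with a hypothetical 1920x1080 sensor (the resolution is just an illustration value, not from the post):

```python
W, H = 1920, 1080  # assumed example resolution

raw8_bytes  = W * H * 8 // 8       # 8-bit raw, low 6 bits dropped
raw16_bytes = W * H * 16 // 8      # 16-bit raw, 2 padding bits per pixel
rgb24_bytes = W * H * 3 * 8 // 8   # RGB24, 8 bits per channel
rgb48_bytes = W * H * 3 * 16 // 8  # 16 bits per channel, nothing lost

print(raw8_bytes, raw16_bytes, rgb24_bytes, rgb48_bytes)
# -> 2073600 4147200 6220800 12441600
```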
Thanks
Chad