Hey,

I've just been sent a test pattern which was encoded to 10 bit with x264. The original 8 bit test pattern has these black and white YCbCr values:

8 bit black: Y: 16, Cb: 128, Cr: 128
8 bit white: Y: 235, Cb: 128, Cr: 128

I've now checked the raw decoding output of the 10 bit video file encoded with x264 (ffmpeg/libav decoder). I'm getting these values:

10 bit black: Y: 64, Cb: 514, Cr: 514
10 bit white: Y: 943, Cb: 514, Cr: 514

The 8 bit -> 10 bit conversion is clearly incorrect. I guess that the conversion is done as "/ 255 * 1023". However, the correct conversion would be a simple << 2 shift.
As a reference, the BT.709 specification says:

Black level, 8 bit: 16, 10 bit: 64
Achromatic, 8 bit: 128, 10 bit: 512
Nominal peak, 8 bit: 235, 10 bit: 940

The incorrect conversion means that peak white is encoded as 943/514/514 instead of the correct 940/512/512. Furthermore, because 1023/255 is not an integer, "/ 255 * 1023" maps consecutive 8 bit codes to unevenly spaced 10 bit values (steps of 4 and 5), so a bit of banding is introduced. Neither problem would occur with a simple << 2 shift.
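
To illustrate, here's a small standalone C test comparing the two conversions on the three reference levels. The rounding in the rescale is just my assumption of what the scaler does internally, but it reproduces the values I'm seeing:

#include <stdio.h>

int main(void)
{
    /* 8 bit BT.709 reference levels: black, achromatic center, nominal peak */
    const int levels[3] = { 16, 128, 235 };

    for (int i = 0; i < 3; i++) {
        int v8 = levels[i];
        /* suspected conversion: rescale by 1023/255, rounded to nearest
         * (the exact rounding used internally is an assumption) */
        int rescaled = (v8 * 1023 + 127) / 255;
        /* correct limited-range conversion: left shift by 2 */
        int shifted = v8 << 2;
        printf("8 bit %3d -> rescaled: %4d, shifted: %4d\n", v8, rescaled, shifted);
    }
    return 0;
}

The rescale lands on 64/514/943, matching the decoded output above, while the shift gives the BT.709 values 64/512/940.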

Best regards, madshi.