<div dir="ltr"><div>I see Xinyue's confusion about profile influencing bit depth. Another issue is output-depth will be confused with recon-depth. How about --high-bit-depth and --no-high-bit-depth to match directly with HIGH_BIT_DEPTH option used during compile time?<br><br>Is it necessary/useful to give x265 CLI the capability to use a different libx265 build for encode? Can we restrict this feature to be C-interface only?<br><br></div><div>Thanks,<br></div><div>Deepthi<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 29, 2015 at 6:58 AM, Xinyue Lu <span dir="ltr"><<a href="mailto:maillist@7086.in" target="_blank">maillist@7086.in</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thanks for the explanation.<br>
>
> Signaling a profile to set the bit depth could be confusing.
>
> As said above, if one wants to encode i422 content, then (s)he has to
> signal main444 to encode to i422 8bit, and signal main422-10 to encode
> to i422 10bit. Doesn't that sound a bit strange? Or is it just me?
>
> Well, 422p8 seems to be the only exception that doesn't come with a
> corresponding profile.
>
> And is Main444-8 actually a higher profile than Main422-10? [1]
>
> For me, it makes more sense to set the bit depth and let x265 choose
> the lowest matching profile, rather than the other way around. YMMV
> though.
>
> [1]: https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding#Profiles
<span class="im HOEnZb"><br>
On Tue, Apr 28, 2015 at 6:06 PM, Steve Borho <<a href="mailto:steve@borho.org">steve@borho.org</a>> wrote:<br>
> On 04/28, Xinyue Lu wrote:<br>
><br>
</span><div class="HOEnZb"><div class="h5">> (apologies for the long mail, but this is a complicated topic)<br>
>>
>> The --profile argument has historically been treated by x265 much the
>> same as --level-idc and --tier, in that they are meant to describe
>> the capabilities of your (least capable) hardware decoder, and the
>> encoder will ensure that the resulting bitstream is decodable by that
>> device.
>>
>> But when the bitstream itself is signaled, x265 always signals the
>> minimum requirements possible for that stream, so that even
>> lower-capability decoders can decode it.
>>
>> For example, you might specify --profile main422-10 --level-idc 5.1
>> --high-tier, telling us that your decoder cannot support anything
>> above those specifications; but if you linked against an 8bit encoder
>> and are encoding i420 720p video, it will be signaled as Main
>> profile, level 4, Main tier. Your user params might lower certain
>> options (like --refs), but they will never increase them.
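>>
>> For illustration, the same capabilities can be described through the
>> C interface; a minimal, untested sketch using only existing public
>> API calls:
>>
>>     #include <stdio.h>
>>     #include <x265.h>
>>
>>     int main(void)
>>     {
>>         x265_param *param = x265_param_alloc();
>>         if (!param)
>>             return 1;
>>         x265_param_default(param);
>>
>>         /* describe the least capable target decoder; if this build
>>          * of libx265 cannot produce such a stream (e.g. an 8bit
>>          * build asked for a 10bit profile), this call fails */
>>         if (x265_param_apply_profile(param, "main422-10") < 0)
>>             fprintf(stderr, "profile unsupported by this build\n");
>>         x265_param_parse(param, "level-idc", "5.1");
>>         x265_param_parse(param, "high-tier", "1");
>>
>>         /* the encoder still signals only the minimum profile, level
>>          * and tier that the actual stream requires */
>>         x265_param_free(param);
>>         return 0;
>>     }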
>>
>> Our CLI does not have the ability to convert color spaces, so the
>> output color space is the input color space; I don't see that ever
>> changing for us. But the CLI has had the ability to convert any input
>> pixel bit depth to the internal bit depth as needed (even dithering
>> when reducing bit depth), which I've found very helpful.
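>>
>> The widening direction is essentially a per-sample shift; a rough
>> sketch of the idea (not our actual input filter code), with dithering
>> only needed in the narrowing direction:
>>
>>     #include <stddef.h>
>>     #include <stdint.h>
>>
>>     /* widen 8bit input samples to the library's internal bit depth */
>>     static void widen_depth(const uint8_t *src, uint16_t *dst,
>>                             size_t count, int internalDepth)
>>     {
>>         const int shift = internalDepth - 8;
>>         for (size_t i = 0; i < count; i++)
>>             dst[i] = (uint16_t)(src[i] << shift);
>>     }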
>>
>> We already have 'param->internalBitDepth', which is essentially the
>> same as your param->output_depth, but previously it was not
>> configurable via x265_param_parse() or by getopt(). It was only set
>> by x265_param_default() and friends, based on which build of libx265
>> you linked against. I don't think we want that to change: the
>> internal bit depth is a compile-time decision, so it is solely
>> determined by the library whose API you are using.
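>>
>> So the only way to discover it at runtime is to ask the library you
>> actually linked against; an untested sketch:
>>
>>     #include <stdio.h>
>>     #include <x265.h>
>>
>>     int main(void)
>>     {
>>         x265_param *param = x265_param_alloc();
>>         if (!param)
>>             return 1;
>>         x265_param_default(param);
>>         /* reflects the HIGH_BIT_DEPTH compile-time choice of the
>>          * linked libx265; not settable via x265_param_parse() */
>>         printf("internal bit depth: %d\n", param->internalBitDepth);
>>         x265_param_free(param);
>>         return 0;
>>     }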
>>
>> All that said, we could add --output-depth N to the CLI getopt()
>> options and use that instead of --profile to select the output bit
>> depth (and, by proxy, the libx265 build used), if people think this
>> is better than treating --profile as a request for a bit depth. I'm
>> ambivalent about it.
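>>
>> The option itself would be trivial; a hypothetical sketch (the flag
>> does not exist yet, and the short option letter is arbitrary):
>>
>>     #include <getopt.h>
>>     #include <stdio.h>
>>     #include <stdlib.h>
>>
>>     static const struct option long_options[] = {
>>         { "output-depth", required_argument, NULL, 'D' },
>>         { NULL, 0, NULL, 0 }
>>     };
>>
>>     int main(int argc, char **argv)
>>     {
>>         int outputDepth = 8; /* assume the 8bit build by default */
>>         int c;
>>         while ((c = getopt_long(argc, argv, "D:", long_options,
>>                                 NULL)) != -1)
>>             if (c == 'D')
>>                 outputDepth = atoi(optarg);
>>
>>         /* the CLI would use this to choose which libx265 build to
>>          * load (and, by proxy, the output bit depth), instead of
>>          * inferring it from --profile */
>>         printf("requested output depth: %d\n", outputDepth);
>>         return 0;
>>     }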
>>
>> No one has tried to convince me yet that --profile main444-10 should
>> mean that we signal main444-10 even if the bitstream will be 8bit
>> i420.
>>
>> FWIW: apps like ffmpeg use the encoder's input pixel format to select
>> the output bit depth (via -pix_fmt; e.g. -pix_fmt yuv420p10le implies
>> a 10bit encode). So it is unlikely that ffmpeg and our CLI will ever
>> configure the output bit depth in the same way.
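>>
>> Through ffmpeg's own API that choice looks roughly like this
>> (untested sketch, omitting all other required context setup):
>>
>>     #include <stdio.h>
>>     #include <libavcodec/avcodec.h>
>>
>>     int main(void)
>>     {
>>         const AVCodec *codec =
>>             avcodec_find_encoder_by_name("libx265");
>>         if (!codec)
>>             return 1;
>>         AVCodecContext *ctx = avcodec_alloc_context3(codec);
>>         if (!ctx)
>>             return 1;
>>         /* the pixel format, not a profile string, determines the
>>          * encode bit depth */
>>         ctx->pix_fmt = AV_PIX_FMT_YUV420P10; /* implies 10bit */
>>         printf("pix_fmt %d selects the bit depth\n", ctx->pix_fmt);
>>         avcodec_free_context(&ctx);
>>         return 0;
>>     }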
>>
>> --
>> Steve Borho