[x265] [PATCH 3 of 4] cli: allow -P/--profile to influence requested API bit-depth
Xinyue Lu
maillist at 7086.in
Wed Apr 29 03:28:45 CEST 2015
Thanks for the explanation.
Using the profile to set the bit depth could be confusing.
As said above, if one wants to encode i422 content, then (s)he has to
signal main444 to encode to i422 8-bit, and signal main422-10 to encode
to i422 10-bit. Does this sound a bit strange? Or is it just me?
Well, 422p8 seems to be the only exception that doesn't come with a
corresponding profile.
And, is Main444-8 actually a higher profile than Main422-10? [1]
For me, it makes more sense to set the bit depth and let x265 choose the
lowest profile, rather than the other way around. YMMV though.
[1]: https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding#Profiles
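To make the suggestion concrete, here is a rough sketch of "pick the
lowest profile for a given chroma format and bit depth". The table is my
simplified reading of the profile list in [1] (names and ordering are
illustrative, not x265's actual logic), and the ordering itself is
exactly the debatable part:

```python
# Each entry: (profile name, max bit depth, allowed chroma formats),
# listed in a chosen ascending order. Simplified and illustrative only.
PROFILES = [
    ("main",       8,  {"i420"}),
    ("main10",     10, {"i420"}),
    ("main422-10", 10, {"i420", "i422"}),
    ("main444",    8,  {"i420", "i422", "i444"}),
    ("main444-10", 10, {"i420", "i422", "i444"}),
]

def lowest_profile(chroma, depth):
    """Return the first (lowest) profile covering this chroma/depth combo."""
    for name, max_depth, formats in PROFILES:
        if depth <= max_depth and chroma in formats:
            return name
    raise ValueError("no profile covers %s %d-bit" % (chroma, depth))
```

With this ordering, i422 8-bit lands on main422-10 even though main444
also covers it; flip the order of those two rows and the answer flips,
which is the ambiguity I am asking about.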
On Tue, Apr 28, 2015 at 6:06 PM, Steve Borho <steve at borho.org> wrote:
> On 04/28, Xinyue Lu wrote:
>
> (apologies for the long mail, but this is a complicated topic)
>
> The --profile argument has historically been treated by x265 much the
> same as --level-idc and --tier in that they are meant to describe the
> capabilities of your (least capable) hardware decoder and the encoder
> will ensure that the resulting bitstream will be decodable by that
> device.
>
> But when the bitstream itself is signaled, x265 always signals the minimal
> requirements possible for that stream, so even lower-capability decoders
> can decode it.
>
> For example you might specify --profile main422-10 --level-idc 5.1
> --high-tier, telling us that your decoder cannot support anything above
> those specifications, but if you linked against an 8bit encoder and are
> encoding i420 720p video, it will be signaled as Main profile, level 4,
> Main tier. Your user params might lower certain options (like --refs)
> but they will never increase them.
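(A toy sketch of my reading of the rule above, with made-up names: the
declared capability only bounds the encode, while the signaled values are
the stream's own minimums. Not x265's real logic or data structures.)

```python
def choose_signaling(declared_caps, stream_needs):
    """declared_caps: what the user's decoder can handle (--profile/--level/--tier).
    stream_needs: the minimum the actual bitstream requires.
    The encoder keeps the stream within the caps, then signals the minimum."""
    # the encoder guarantees the stream fits under the declared ceiling
    assert stream_needs["level"] <= declared_caps["level"]
    return dict(stream_needs)  # signal minimal requirements, not the ceiling

# user declares: --profile main422-10 --level-idc 5.1 --high-tier
caps   = {"profile": "main422-10", "level": 5.1, "tier": "high"}
# actual stream: 8-bit i420 720p
stream = {"profile": "main", "level": 4.0, "tier": "main"}
print(choose_signaling(caps, stream))
```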
>
> Our CLI does not have the ability to convert color spaces, so the output
> color space is the input color space. I don't see that ever changing for
> us. But the CLI has had the ability to convert from any input pixel
> bitdepth to the internal bitdepth as needed (even dithering when
> reducing bit depth), which I've found very helpful.
>
> We already have 'param->internalBitDepth', which is essentially the same
> as your param->output_depth, but previously it was not configurable via
> x265_param_parse() or by getopt(). It was only configured by
> x265_param_default() and friends, based on which build of libx265 you
> linked against. I don't think we want that to change; the internal bit
> depth is a compile-time decision, and so it is solely determined by the
> library whose API you are using.
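(Paraphrasing the above as a toy model, with illustrative names rather
than the real implementation: the depth is baked in when libx265 is
built, x265_param_default() just copies the build's value, and
x265_param_parse() never touches it.)

```python
BUILD_BIT_DEPTH = 8  # decided when this libx265 build was compiled

class Param:
    def __init__(self):
        self.internalBitDepth = None

def param_default(param):
    """Like x265_param_default(): the only place the depth gets set,
    always from the linked build."""
    param.internalBitDepth = BUILD_BIT_DEPTH

def param_parse(param, name, value):
    """Like x265_param_parse(): the internal bit depth is deliberately
    not a runtime-settable option."""
    if name == "internal-bit-depth":
        raise ValueError("internal bit depth is fixed by the build")
    # ... handle actual runtime options here ...

p = Param()
param_default(p)
print(p.internalBitDepth)  # 8 for an 8-bit build
```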
>
> All that said.. we could add --output-depth N to the CLI getopt()
> options and use that instead of --profile to select the output bit depth
> (and by proxy, select the libx265 used), if people think this is better
> than treating --profile like a request for bitdepth. I'm ambivalent
> about it.
>
> No one has tried to convince me yet that --profile main444-10 should
> mean that we signal main444-10 even if the bitstream will be 8-bit i420.
>
> FWIW: apps like ffmpeg use the encoder's input pixel format to select
> the output bit depth (via -pix_fmt), so it is unlikely that ffmpeg and
> our CLI will ever configure output bit depth in the same way.
>
> --
> Steve Borho
> _______________________________________________
> x265-devel mailing list
> x265-devel at videolan.org
> https://mailman.videolan.org/listinfo/x265-devel