[x264-devel] Bug#667573: x264: 10 bit builds
Jason Garrett-Glaser
jason at x264.com
Thu Jan 17 20:27:04 CET 2013
On Thu, Jan 17, 2013 at 10:48 AM, Sebastian Dröge
<slomo at circular-chaos.org> wrote:
> On Do, 2013-01-17 at 10:29 -0800, Jason Garrett-Glaser wrote:
>> On Wed, Jan 16, 2013 at 7:59 AM, Sebastian Dröge
>> <slomo at circular-chaos.org> wrote:
>> > On Mi, 2013-01-16 at 16:45 +0100, Nicolas George wrote:
>> >> On septidi, 27 Nivôse of the year CCXXI, Sebastian Dröge wrote:
>> >> > Right, but the calling application has no way to know what the library
>> >> > will accept other than looking at x264_config.h.
>> >>
>> >> That is not true:
>> >>
>> >> /* x264_bit_depth:
>> >> * Specifies the number of bits per pixel that x264 uses. This is also the
>> >> * bit depth that x264 encodes in. If this value is > 8, x264 will read
>> >> * two bytes of input data for each pixel sample, and expect the upper
>> >> * (16-x264_bit_depth) bits to be zero.
>> >> * Note: The flag X264_CSP_HIGH_DEPTH must be used to specify the
>> >> * colorspace depth as well. */
>> >> X264_API extern const int x264_bit_depth;
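A minimal sketch of the runtime check this comment describes
(x264_bit_depth, X264_CSP_I420 and X264_CSP_HIGH_DEPTH are real
x264.h symbols; the helper function itself is made up):

    #include <x264.h>

    /* Hypothetical helper: pick an input colorspace matching the
     * bit depth the linked libx264 was built for. */
    static int choose_input_csp( void )
    {
        if( x264_bit_depth > 8 )
            /* High-depth builds read 16-bit samples and expect the
             * upper (16 - x264_bit_depth) bits to be zero. */
            return X264_CSP_I420 | X264_CSP_HIGH_DEPTH;
        return X264_CSP_I420;
    }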
>> >
>> > Thanks, I missed these two in the documentation. FWIW, what's the point
>> > of defining them in x264_config.h too then?
>>
>> People seemed to like the idea of having both. If I had to guess,
>> accessing x264_bit_depth would require running a test program, which
>> isn't possible if you're cross-compiling.
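For completeness, the compile-time variant that still works when
cross-compiling looks something like this (X264_BIT_DEPTH is the
macro from x264_config.h; input_sample is a hypothetical
caller-side typedef):

    #include <stdint.h>
    #include <x264_config.h>

    /* Fixed at build time, so a "replace-library" swap between
     * 8-bit and 10-bit builds won't be picked up. */
    #if X264_BIT_DEPTH > 8
    typedef uint16_t input_sample;
    #else
    typedef uint8_t  input_sample;
    #endif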
>
> Yeah, but instead of checking this at compile time it would make more
> sense to do it at runtime. Otherwise the "replace-library" hack won't
> work anymore for switching between 8-bit and 10-bit builds.
>
> Btw, what's the reason for making it a compile-time parameter rather
> than letting the same library build handle 8/9/10 bit?
The main reason is that going from 8-bit to >8-bit changes the pixel
storage format from 8-bit to 16-bit samples, which in turn requires
changing essentially every pointer calculation in x264.
It's possible to handle this at runtime; libav does. Here's what you
have to do:
1. Rename all bit-depth-specific asm functions and the like to avoid
collisions, then runtime-load the correct ones.
2. Make all pixel pointers uint8_t* and manually add a shift factor
to every single address calculation in all of x264 (see the sketch
after this list).
3. Template all functions where 2) introduces a significant
performance impact (probably most of the core analysis loop).
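A hedged sketch of what 2) means in practice, mirroring the
pixel_shift idiom libav uses (the helper name and the
pixels-per-stride convention are assumptions, not actual x264 code):

    #include <stdint.h>

    /* Assumed convention: pixel_shift is 0 for 8-bit pixels, 1 for
     * 16-bit pixels; stride is counted in pixels, not bytes. */
    static inline uint8_t *pel_addr( uint8_t *plane, int stride,
                                     int x, int y, int pixel_shift )
    {
        return plane + ((y * stride + x) << pixel_shift);
    }

Every hot loop that used to compute "plane + y*stride + x" on a
typed pointer now pays for that shift, which is why 3) templates
the worst offenders.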
It's icky, a lot of work, and most importantly, nobody's done it. The
original patch simply changed variable types from uint8_t to "pixel"
(and lots of similar compile-time templating bits), which was already
enough work to be an entire GSoC project. libav does things at
runtime, but it also does 1-2 orders of magnitude less address
calculation and is generally way simpler (after all, it's a decoder,
not an encoder). It needs almost zero addressing outside of DSP
functions; see how relatively little pixel_shift shows up in h264.c.
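For reference, the compile-time templating the original patch used
boils down to something like this (paraphrased from memory, not
quoted from x264's common.h):

    #include <stdint.h>

    #ifndef HIGH_BIT_DEPTH
    #define HIGH_BIT_DEPTH 0   /* x264's configure sets this to 0 or 1,
                                * e.g. via ./configure --bit-depth=10 */
    #endif

    #if HIGH_BIT_DEPTH
    typedef uint16_t pixel;
    #else
    typedef uint8_t  pixel;
    #endif

    /* Address math stays shift-free; the compiler scales by
     * sizeof(pixel) automatically. */
    static inline pixel *pel_addr( pixel *plane, int stride, int x, int y )
    {
        return plane + y * stride + x;
    }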
Jason