[vlc-devel] [PATCH] DxVA2: allow NV12 pixel format all the way to the D3D texture

Steve Lhomme robux4 at videolabs.io
Wed Apr 1 15:10:43 CEST 2015


On Wed, Apr 1, 2015 at 2:28 PM, Rémi Denis-Courmont <remi at remlab.net> wrote:
> On 2015-04-01 14:19, Steve Lhomme wrote:
>>
>> Do not force YV12 out of DxVA using the costly SplitPlanes, as NV12 is
>> the preferred format and will always come with DxVA and the renderers.
>>
>>
>> https://msdn.microsoft.com/en-us/library/windows/desktop/dd206750%28v=vs.85%29.aspx#nv12
>
>
> With the current architecture, the decoder cannot know what is downstream.
> You cannot assume that it is the Direct3D 11 video output. It could be a
> filter (as JB already noted), it could be another video output, it could be
> a video splitter, or it could even be an encoder (in principle).
>
>> "NV12 is the preferred 4:2:0 pixel format for DirectX VA. It is
>> expected to be an intermediate-term requirement for DirectX VA
>> accelerators supporting 4:2:0 video."
>
>
> I think the preferred texture format for any reasonable video acceleration
> is "opaque". If you want to optimize the data path from the DX VA decoder to
> the DX 3D video output, you should not copy the data back to CPU memory at
> all. The issue of NV12 vs YV12 is thus moot.

Keeping the decoded frames in GPU memory could be done when I add
support for ID3D11VideoDevice::CreateVideoDecoder:
https://msdn.microsoft.com/en-us/library/windows/desktop/hh447786%28v=vs.85%29.aspx
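
For the record, a minimal sketch of what that could look like, assuming
an ID3D11VideoDevice has already been created (the CreateNV12Decoder
helper is hypothetical, and error handling is trimmed): the driver is
asked for an H.264 decoder whose output is NV12 in GPU memory, so
nothing is read back to system memory.

#define COBJMACROS
#include <d3d11.h>

static ID3D11VideoDecoder *CreateNV12Decoder(ID3D11VideoDevice *vdev,
                                             UINT width, UINT height)
{
    D3D11_VIDEO_DECODER_DESC desc = {
        .Guid         = D3D11_DECODER_PROFILE_H264_VLD_NOFGT,
        .SampleWidth  = width,
        .SampleHeight = height,
        .OutputFormat = DXGI_FORMAT_NV12, /* frames stay NV12 on the GPU */
    };

    /* Check the driver supports this profile/format combination. */
    UINT count = 0;
    if (FAILED(ID3D11VideoDevice_GetVideoDecoderConfigCount(vdev, &desc,
                                                            &count))
     || count == 0)
        return NULL;

    /* Take the first decoder configuration the driver advertises. */
    D3D11_VIDEO_DECODER_CONFIG cfg;
    if (FAILED(ID3D11VideoDevice_GetVideoDecoderConfig(vdev, &desc, 0,
                                                       &cfg)))
        return NULL;

    ID3D11VideoDecoder *decoder = NULL;
    if (FAILED(ID3D11VideoDevice_CreateVideoDecoder(vdev, &desc, &cfg,
                                                    &decoder)))
        return NULL;
    return decoder;
}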

j-b mentioned that a custom VLC_CODEC_DXVA pixel format could be used
for hardware acceleration, as is done on Android. But if we want to use
filters, that means the DXVA format cannot be opaque.
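
A minimal sketch of that idea (the fourcc value and the picture_sys
layout below are illustrative assumptions, not an actual VLC API):

#include <vlc_common.h>
#include <d3d9.h>

/* Hypothetical opaque DXVA chroma, modelled on the Android opaque
 * format. */
#define VLC_CODEC_DXVA VLC_FOURCC('D','X','V','A')

/* An opaque picture carries a GPU surface handle instead of CPU-mapped
 * planes. */
struct picture_sys_t
{
    LPDIRECT3DSURFACE9 surface; /* decoded frame stays in video memory */
};

Since the surface lives in video memory, a CPU filter would first need
the frame copied back to system memory, which defeats the point of the
opaque format.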


