[vlc-devel] [PATCH v2 5/5] dav1d: add DXVA 4:2:0 decoding support

Steve Lhomme robux4 at ycbcr.xyz
Fri Sep 11 17:49:42 CEST 2020


On 2020-09-11 17:21, Rémi Denis-Courmont wrote:
> On Friday, 11 September 2020 at 12:46:09 EEST, Steve Lhomme wrote:
>> Tested on NVIDIA 3090 GPU and Intel Iris Xe Graphics on 8-bit sources.
>>
>> The DXVA decoding is only enabled if the decoder device is set to D3D11VA or
>> DXVA2. If the hardware decoder is not found, we fall back to software
>> decoding. The profile needs to be known at open time to use hardware
>> decoding, as it requires using a single frame thread, so falling back to
>> software after the open would have an impact on performance.
>>
>> It uses an "nvdec_pool" for hardware buffer pools, taken directly from the
>> nvdec folder.
>>
>> Some code could be shared (in a library) with the other DXVA modules.
>> ---
>>   modules/codec/Makefile.am |   11 +
>>   modules/codec/dav1d.c     | 1250 +++++++++++++++++++++++++++++++++++++
> 
> It's very disturbing that this takes twice as much code as the dav1d patch.
> You'd think a lot of the code should actually be in dav1d, rather than
> duplicated in every dav1d reverse dependency.

That's a design decision from the original author. At first I also 
thought it made more sense to factor as much code as possible into 
dav1d. But in the end this proposal keeps the dav1d layer tiny. It 
doesn't enforce anything on the user side. It makes D3D11 and D3D9 (and 
D3D12) possible without any change to the dav1d API. It also allows 
testing different threading scenarios. So I think it's the right layer.

> And it's only DX. I don't know about VDPAU and NVDEC status, but there's also
> VA with AV-1 support.

That's more a concern for the dav1d design: how should the hardware 
acceleration abstraction be done so that it works in all cases? That is 
something lavc took many iterations to get common enough. The result 
may well end up quite different from the current code.
