[vlc-devel] [PATCH 04/31] display: don't store the dummy context in the display anymore
robux4 at ycbcr.xyz
Tue Sep 24 15:28:00 CEST 2019
On 2019-09-24 14:41, Rémi Denis-Courmont wrote:
> Once you go back to CPU, there should be no decoder device anymore.
No "decoder device" for the CPU filters (nor a video context) we agree
> More generally, if you change regime, to CPU or to another GPU API family, you can't expect the decoder device to work. It won't match anyway.
Yes it can, exactly by caching like I explained.
> Of course, you should anyway avoid that situation because performance will fundamentally suck, no matter the VLC core design.
Unlike in VLC 3.0, users will finally be able to mix CPU and GPU filters as
much as they want. This restriction is one of the design flaws we're trying
to get rid of. I don't see why we would want to limit them to either CPU or
GPU filters, not both, and not mixed. And we don't have to.
> On 24 September 2019 at 14:55:04 GMT+03:00, Steve Lhomme <robux4 at ycbcr.xyz> wrote:
>> On 2019-09-24 11:41, Steve Lhomme wrote:
>>> On 2019-09-23 19:11, Rémi Denis-Courmont wrote:
>>>> On Monday, 23 September 2019 at 18:01:09 EEST, Steve Lhomme wrote:
>>>>> Use the one from the vout thread if it exists.
>>>>> Later the video context will come from the decoder (if any).
>>>> I still don't get why the VD should care/know about the decoder
>>>> at all.
>>>> If even the VD knows about the decoder device, then what's the point of
>>>> a distinct video context?
>>> The hint has to work both ways. If a VAAPI decoder uses a VADisplay
>>> and the VD uses another one, it may not work at all (I'm not sure the
>>> same value would be used if the default display is used in both cases).
>>> With D3D it's the same thing: with external rendering we want the VD to
>>> use the D3D device provided by the host (otherwise it just cannot work).
>>> That means the decoder should also use this device. And it knows
>>> this device/hint through the "decoder device". I agree that the VD could
>>> read that external D3D device by itself without using the "decoder
>>> device" at all. Given the "decoder device" may not be created at all
>>> (it's created only on demand, which won't happen for AV1 playback for
>>> example) maybe I should not rely on it at all and get the host D3D
>>> device by other means.
>>> (there are also possibilities to use a different D3D device for decoding
>>> and rendering, which I have local support for, but that's beside the point.)
>>> So IMO it all comes down to VAAPI and whether the VADisplay value can
>>> be created once and used in both the decoder and the VD.
>> I forgot an important case: a GPU decoder, then a CPU filter, then a GPU
>> filter, and then the display. If we don't store the "decoder device" (in
>> the filter (chain) owner), the CPU->GPU filter will need to create a new
>> "decoder device" when we could use a cached one, matching the one used to
>> initialize the VD (via the video context). We would then likely need to
>> recreate a VD when we shouldn't have to.
>>> I think in all cases the VADisplay created for the decoder will be
>>> pushed into the video context when creating the display. So in all cases
>>> it should match on both sides.
>>> I'll modify my patch (and working branches) accordingly.
>>> vlc-devel mailing list
>>> To unsubscribe or modify your subscription options:
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
More information about the vlc-devel mailing list