[vlc-devel] [PATCH 12/14] filter_chain: add a default implementation for the video_allocator

Steve Lhomme robux4 at ycbcr.xyz
Tue Jul 30 08:57:24 CEST 2019


Hi,

On 2019-07-29 21:08, Alexandre Janniaux wrote:
> Hi,
> 
> Thank you for tackling these issues, which are probably relatively complex, and
> I'm sorry if my inlined questions were out of scope.
> 
> On Fri, Jul 26, 2019 at 08:18:37AM +0200, Steve Lhomme wrote:
>> On 2019-07-25 19:08, Rémi Denis-Courmont wrote:
>>> Le torstaina 25. heinäkuuta 2019, 13.34.11 EEST Steve Lhomme a écrit :
>>>> At some point NULL should not be allowed there.
>>>
>>> I don't think we should have allocation callbacks in filters at all anymore. We
>>> got rid of them for audio years ago.
>>
>> It cannot work for filters outputting GPU surfaces. You cannot use a generic
>> picture_NewFromFormat here. You have to use at least the (output) video
>> context and some video context specific callback. This callback may be
>> attached to the video context but there might be cases where the filter
>> needs to do its own tweaks (for example in D3D11 you may decide to allocate
>> pictures that are mappable to the CPU or not, in D3D9 decoder and filter
>> surfaces have a different type than display surfaces).
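
To make the D3D11 point concrete, here is a rough sketch of the texture setup 
such a callback could do. It is plain D3D11 only; wrapping the texture into a 
picture_t and the actual VLC glue are omitted, so take it as an illustration, 
not as part of the patch:

#define COBJMACROS
#include <stdbool.h>
#include <d3d11.h>

/* Sketch only: decide between a CPU-mappable (staging) texture and a
 * GPU-only one. Wrapping the result into a picture_t is left out. */
static ID3D11Texture2D *AllocFilterTexture( ID3D11Device *dev,
                                            UINT width, UINT height,
                                            DXGI_FORMAT fmt, bool mappable )
{
    D3D11_TEXTURE2D_DESC desc = {
        .Width      = width,
        .Height     = height,
        .MipLevels  = 1,
        .ArraySize  = 1,
        .Format     = fmt,
        .SampleDesc = { .Count = 1 },
    };
    if ( mappable )
    {   /* readable/writable from the CPU, not usable by the GPU pipeline */
        desc.Usage          = D3D11_USAGE_STAGING;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
    }
    else
    {   /* GPU-only, bindable as a shader input by the next filter/display */
        desc.Usage     = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    }

    ID3D11Texture2D *tex = NULL;
    if ( FAILED( ID3D11Device_CreateTexture2D( dev, &desc, NULL, &tex ) ) )
        return NULL;
    return tex;
}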
> 
> If all GPU filters are using the video context, it means that you can't have

All filters have a video context, even if it's a "CPU" video context. A 
NULL video context will not be allowed. It wouldn't make sense, just 
like an empty video format doesn't make sense. It's also the best 
candidate to carry the decoder device along the whole filter chain (see 
previous answer).

> opengl filters in a d3d11 video output pipeline for example.
> 
> If I understand this pipeline correctly, the most valid use case for video
> context is for conversion filters, and especially conversion into the format and
> context required by the display.
> 
> Potentially, you could have a d3d11 filter followed by an opengl filter, then
> followed by the d3d11 video output. In that case, you need to introduce a
> converter:
> + the d3d11 filter will probably output a DXGI image.
> + the opengl filter would need an imported opengl texture [1] and export an
> opengl texture, or ideally directly another native handle like a DXGI image. I
> don't really know if there is something else on Windows, but it seems to be the
> equivalent of GBM on Linux.
> + the display will need a DXGI image.

No, D3D11 doesn't deal with DXGI resources, only D3D11 resources (which 
are a higher level and tied to a device).

> [1] https://www.khronos.org/registry/OpenGL/extensions/NV/WGL_NV_DX_interop2.txt

As seen here, there's no DXGI (only a swapchain).

> Writing this, it seems weird on windows, but on Linux you would export the GBM
> buffer as an EGLimage, the EGLimage as a GBM buffer, the GBM buffer as a Vulkan
> memory object, then back as a GBM buffer, and finally import it as a wayland
> buffer.
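
For reference, the OpenGL leg of such a chain (a dmabuf exported from a GBM 
buffer, imported as an EGLimage and bound to a GL texture) roughly looks like 
the sketch below. It is standard EGL_EXT_image_dma_buf_import / GL_OES_EGL_image 
usage with no VLC specifics, and error handling and multi-planar formats are 
omitted:

#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Sketch: import a dmabuf (e.g. exported from a GBM buffer) as a GL texture. */
static GLuint ImportDmabufAsTexture( EGLDisplay dpy, int fd, int width,
                                     int height, int stride, uint32_t drm_fourcc )
{
    PFNEGLCREATEIMAGEKHRPROC createImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress( "eglCreateImageKHR" );
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC imageTargetTexture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress( "glEGLImageTargetTexture2DOES" );

    const EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)drm_fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
        EGL_NONE,
    };
    EGLImageKHR image = createImage( dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL, attribs );

    GLuint tex;
    glGenTextures( 1, &tex );
    glBindTexture( GL_TEXTURE_2D, tex );
    imageTargetTexture( GL_TEXTURE_2D, image );
    return tex;
}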
> 
> With this in mind, you don't need the video context of the display in the first
> and second filter, only in the final converter. And intermediary filters will
> probably need the video context from the next filter to correctly initialize the
> converter.

For converters, yes: at some point we have to tell them what output we 
want so that we get the format+context we want.

But since this is a push pipeline, the first video context constraint 
comes from the input, not the output (see previous post).

> When you have written the converter part, which is a pure push model issue,
> you're left with the intermediary video context allocation, which is the issue
> you're highlighting. But it seems to me that it's a totally different issue
> from allocating textures from the next filter/next video context, which would
> be the pull model.

Yes, at the very least the last filter in a chain must get its pictures 
from the outside, not create its own. As for who creates the output 
pictures of intermediate filters, I don't have a definitive answer. But 
since we are pushing the input format+context, it would seem logical to 
push the corresponding pictures as well.
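
Something along these lines, perhaps. GetPictureFromNext() is purely 
hypothetical glue and the owner.sys usage is an assumption; only 
filter_allocator.buffer_new comes from this patch series:

/* Hypothetical sketch of "pushing the corresponding pictures": the chain owner
 * installs a buffer_new callback that asks the next element (next filter, or
 * the display for the last filter) for an output picture. */
static picture_t *GetPictureFromNext( filter_chain_t *chain, filter_t *filter );
                                      /* hypothetical helper, does not exist */

static picture_t *ChainNewPicture( filter_t *filter )
{
    filter_chain_t *chain = filter->owner.sys; /* assumes the chain is stored here */
    return GetPictureFromNext( chain, filter );
}

/* ...and in filter_chain_AppendInner(), instead of DefaultNewPicture: */
/*     filter->filter_allocator.buffer_new = ChainNewPicture;          */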

>>> First, and it's always been a problem, the last filter in the chain may well
>>> just pass on picture buffers from upstream, without allocating anything. So we
>>> cannot rely on the allocation callback.
>>
>> Not all display modules can deal with pictures they did not allocate (the
>> majority of cases). So either we force a copy there or we allow the last
>> filter to use pictures from the display pool. This is what we do now and I
>> don't think we should change this.
>>
>> There is indeed a problem for filters that pass pictures from the input
>> directly to the output. They don't call filter_NewPicture() and thus bypass
>> the whole "filter chain last filter" trick. For that we may have to keep the
>> ugly picture_pool_OwnPic() function.

This issue would be solved if filters got their output pictures from the 
next filter / the outside. So maybe that's the better way to do it.

But in the end we still need an allocator in filters, only this time to 
provide input pictures. Then the problem is reversed: we can't do it for 
the first filter, as its pictures are allocated by the decoder.

> Maybe the converter part above would help in dealing with pictures that were
> not allocated by the display. I don't know DXGI well enough, but all vout
> displays on Linux are able to deal with resource import, and I think we found
> a way for DXGI at the last workshop.

We don't deal with DXGI; not even OpenGL does.

> Also, with converters, you don't need a «filter chain last filter» either, as
> it is really the converter's role to be at the end, and you don't need
> picture_pool_OwnPic. If a filter wants to pass a picture buffer with an invalid
> context, it is a programming error. If it passes the previous texture, it
> should probably make sure to either import/export it or stay in sync with the
> previous video context and keep it alive. Having DXGI textures/GBM textures
> makes sure they will still be available later and it's only file
> descriptor/HWND passing.
> 
>>> And, unlike for decoders, we don't need flow control either. None of the filters
>>> return more than O(n) output pictures per input picture. In fact, I think the
>>> "worst" case is just 2 outputs per input (deinterlace).
>>
>> It's true for now, as in-flight pictures from filters are always handled late,
>> just before displaying. If we ever want to do some filtering earlier and/or
>> in threads this may change. Plus there's nothing stopping a filter from
>> holding 8 pictures from upstream to display all of them at once.
> 
> While I agree that we probably don't need flow control and that the number of
> pictures is probably bounded, starvation seems like a real issue to tackle.
> What are the solutions against starvation on Windows? Should we just fail if
> we get starved?

Right now we don't have this case, as the possibilities for GPU filters 
are very limited. For CPU filters we allocate with picture_NewFromFormat 
if we can't pick from the display (hoping the display can handle it). 
But that can also starve at some point. I don't know what the best thing 
to do here is.
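
Roughly, today's CPU path amounts to the sketch below. GetFromDisplayPool() is 
a hypothetical stand-in for "pick from the display"; picture_NewFromFormat() is 
the real fallback allocator:

static picture_t *GetFromDisplayPool( filter_t *filter ); /* hypothetical stand-in */

/* Sketch of the current CPU behaviour: prefer a picture from the display,
 * otherwise allocate a plain CPU picture matching the output format. Both
 * paths can starve or fail, and there is no good answer yet for that case. */
static picture_t *CpuNewPicture( filter_t *filter )
{
    picture_t *pic = GetFromDisplayPool( filter );
    if ( pic == NULL )
        pic = picture_NewFromFormat( &filter->fmt_out.video );
    return pic; /* may still be NULL on starvation */
}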

> Regards,
> --
> Alexandre Janniaux
> VideoLabs
> 
>>
>>>> ---
>>>>    src/misc/filter_chain.c | 11 +++++++++++
>>>>    1 file changed, 11 insertions(+)
>>>>
>>>> diff --git a/src/misc/filter_chain.c b/src/misc/filter_chain.c
>>>> index 6a7b8d7a44..d4652ebf93 100644
>>>> --- a/src/misc/filter_chain.c
>>>> +++ b/src/misc/filter_chain.c
>>>> @@ -180,6 +180,11 @@ void filter_chain_Reset( filter_chain_t *p_chain, const es_format_t *p_fmt_in,
>>>>        }
>>>>    }
>>>>
>>>> +static picture_t *DefaultNewPicture( filter_t *filter )
>>>> +{
>>>> +    return picture_NewFromFormat( &filter->fmt_out.video );
>>>> +}
>>>> +
>>>>    static filter_t *filter_chain_AppendInner( filter_chain_t *chain,
>>>>        const char *name, const char *capability, config_chain_t *cfg,
>>>>        const es_format_t *fmt_in, const es_format_t *fmt_out )
>>>> @@ -232,6 +237,12 @@ static filter_t *filter_chain_AppendInner( filter_chain_t *chain,
>>>>        if( filter->p_module == NULL )
>>>>            goto error;
>>>>
>>>> +    if ( filter->filter_allocator.buffer_new == NULL )
>>>> +    {
>>>> +        // legacy filters not setting a callback to create their output pictures
>>>> +        filter->filter_allocator.buffer_new = DefaultNewPicture;
>>>> +    }
>>>> +
>>>>        if( filter->b_allow_fmt_out_change )
>>>>        {
>>>>            es_format_Clean( &chain->fmt_out );
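
For a GPU filter the idea would then be to not rely on this default but to 
install its own video-context-aware callback from its Open() function, something 
like the sketch below (the D3D11NewPicture() allocator is hypothetical):

/* Sketch: a module providing its own allocator so the default CPU
 * DefaultNewPicture() above is never used for it. */
static picture_t *D3D11NewPicture( filter_t *filter ); /* hypothetical */

static int OpenD3D11Filter( vlc_object_t *obj )
{
    filter_t *filter = (filter_t *)obj;
    /* ...usual module setup... */
    filter->filter_allocator.buffer_new = D3D11NewPicture;
    return VLC_SUCCESS;
}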
>>>
>>>
>>> --
>>> Реми Дёни-Курмон
>>> http://www.remlab.net/