[vlc-devel] [PATCH 12/14] filter_chain: add a default implementation for the video_allocator
Steve Lhomme
robux4 at ycbcr.xyz
Wed Jul 31 10:16:45 CEST 2019
On 2019-07-31 9:52, Thomas Guillem wrote:
>
>
> On Wed, Jul 31, 2019, at 09:50, Steve Lhomme wrote:
>> On 2019-07-30 22:40, Alexandre Janniaux wrote:
>>> Hi,
>>>
>>> On Tue, Jul 30, 2019 at 08:12:47PM +0300, Rémi Denis-Courmont wrote:
>>>>> On Tuesday, 30 July 2019 at 10:14:17 EEST, Steve Lhomme wrote:
>>>>> On 2019-07-29 21:18, Rémi Denis-Courmont wrote:
>>>>>>> On Monday, 29 July 2019 at 22:08:58 EEST, Alexandre Janniaux wrote:
>>>>>>> While I agree that we probably don't need flow control and that the
>>>>>>> number of images is probably bounded, starvation still seems like a real
>>>>>>> issue to tackle. What are the solutions against starvation on Windows?
>>>>>>> Should we just fail if we get starved?
>>>>>>
>>>>>> In general, it is the halting problem, which is to say that the general
>>>>>> case cannot be solved and assumptions have to be made.
>>>>>>
>>>>>> If we need to make assumptions, we might just as well assume that filters
>>>>>> are well-behaved, as indeed they have so far been. Ideally, a filter that
>>>>>> needs to allocate surfaces (rather than modify input surface in-place)
>>>>>> should allocate a suitably large pool for itself. And that turns out to be
>>>>>> incompatible with filter_NewPicture().
>>>>>
>>>>> I don't see how. This particular patch adds the possibility for filters
>>>>> to provide their own (output) allocator. A filter could very well create
>>>>> a pool and have its allocator pick pictures from that pool. That still
>>>>> goes through filter_NewPicture, because the filter doesn't know whether it
>>>>> has to use pictures from the outside or can use its own.
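
A rough sketch of that idea, assuming the filter keeps its own picture_pool_t
created at Open() time; the exact hook name/signature added by the patch may
differ, this is only illustrative:

    #include <vlc_common.h>
    #include <vlc_filter.h>
    #include <vlc_picture_pool.h>

    struct filter_sys_t
    {
        picture_pool_t *pool; /* private output pool owned by the filter */
    };

    static picture_t *AllocateFromLocalPool(filter_t *filter)
    {
        struct filter_sys_t *sys = filter->p_sys;
        /* returns NULL when the pool is starved */
        return picture_pool_Get(sys->pool);
    }

    /* hypothetical registration, done in the filter's Open():
     *     filter->owner.video.buffer_new = AllocateFromLocalPool;
     * so that filter_NewPicture() ends up calling the filter's own allocator */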
>>>
>>> I don't really understand how you can use a picture from the outside in a GPU
>>> filter? It doesn't seem to fit the push model either.
>>
>> This is what currently happens for the first filter: it gets pictures
>> from the display pool (the private pool that comes from it). Right now
>> this is how it works: the filters ask for a picture and, depending on
>> where they are in the filter chain, get it from the display or create it
>> from scratch.
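
For reference, the usual pattern in a filter callback today looks roughly like
this; the filter has no idea where the picture actually comes from:

    static picture_t *Filter(filter_t *filter, picture_t *src)
    {
        /* may come from the display pool or be allocated from scratch */
        picture_t *dst = filter_NewPicture(filter);
        if (dst == NULL)
        {
            picture_Release(src);
            return NULL;
        }
        /* ... process src into dst ... */
        picture_CopyProperties(dst, src);
        picture_Release(src);
        return dst;
    }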
>>
>> In push, the last filter (likely a converter) may allocate its pictures
>> and push them. I know it can work in D3D (provided they share the same
>> decoder device), and maybe OpenGL as well, since the OpenGL "converters"
>> (interop) mostly correspond to decoder device types, but I'm less sure
>> about Android or MMAL.
>
> Android HW buffers are fully opaque; you can't do anything with them except display them, no filtering at all.
You can probably "render" them into a surface you have CPU access to? Via
OpenGL, for example.
>>
>>>>
>>>> That makes no sense. The "outside" cannot know if the filter allocates or not.
>>>> Already now, both cases have to work - it has more or less always been that
>>>> way with filters.
>>>>
>>>> So far only converters could be assumed to allocate pictures - and in the push
>>>> model, that would be from the video context, not from the downstream/
>>>> filter_NewPicture().
>>>>
>>>
>>> Just to clarify, do we agree that it has to allocate pictures by means of the
>>> video context,
>>
>> Yes.
>>
>>> but not from it?
>>
>> That's up for discussion. In an old push branch I had each filter create
>> its pictures via local code (with a lot of code duplication). In the end
>> it might still be needed. For example, if you look at the DXVA2 OpenGL
>> interop, the decoder allocates pictures with
>> IDirectXVideoDecoderService_CreateSurface() while the OpenGL side uses
>> IDirect3DDevice9_CreateOffscreenPlainSurface(). They share the same
>> decoder device and the same video context, but the decoder needs a local
>> object to allocate its pictures; it couldn't get them from a generic
>> allocator in the video context. I think that's the worst-case scenario
>> we have; the other systems are more flexible (D3D11 doesn't have this
>> problem, for example).
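
Roughly, the two allocation paths look like this (error handling dropped;
vdserv, d3ddev, width, height and format stand for whatever the decoder and
interop already hold):

    /* decoder side: surfaces must come from the DXVA2 decoder service */
    IDirect3DSurface9 *dec_surfaces[4];
    HRESULT hr = IDirectXVideoDecoderService_CreateSurface(vdserv,
            width, height, 3 /* creates 3 + 1 surfaces */, format,
            D3DPOOL_DEFAULT, 0, DXVA2_VideoDecoderRenderTarget,
            dec_surfaces, NULL);

    /* interop side: a plain offscreen surface from the same IDirect3DDevice9 */
    IDirect3DSurface9 *gl_surface;
    hr = IDirect3DDevice9_CreateOffscreenPlainSurface(d3ddev,
            width, height, format, D3DPOOL_DEFAULT, &gl_surface, NULL);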
>>
>>> Meaning the video context can only provide a
>>> rendering-like context for a rendering-like API in the case of filters (be it GPU
>>> or CPU, the CPU case being "allocate your memory this way (shm) or that way
>>> (dynamic memory chunk)"), and the memory is allocated by the filter through this
>>> context with the rendering API, which should match the context?
>>>
>>> For instance, even if I'm not really fond of this model, would it be an OpenGL
>>> context as the video context, with the filter using glGenTextures? Or would
>>> the video context provide a `picture_t pf_AllocatePicture(some args)`?
>>
>> I don't really have a preference, but we need to decide.
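
If we went the pf_AllocatePicture route, it could look something like this;
everything here is hypothetical, the names are only illustrative:

    /* hypothetical operations table attached to the video context */
    struct vlc_video_context_ops
    {
        picture_t *(*pf_AllocatePicture)(vlc_video_context *vctx,
                                         const video_format_t *fmt);
    };

    /* a filter would then simply do: */
    picture_t *pic = vctx->ops->pf_AllocatePicture(vctx, &filter->fmt_out.video);

The other option is the filter calling the rendering API directly (glGenTextures
and co), with only the rendering context coming from the video context.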
>>
>>>>>
>>>>>> Since the filter chain is (partially) dynamic, we cannot even rely on
>>>>>> filters telling how many "extra" surfaces they need - the total value
>>>>>> could change in the middle of the stream. Instead, the filter ought to
>>>>>> allocate its surfaces during initialization, and potentially refuse to
>>>>>> start if it cannot succeed - but not break the whole pipeline.
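
In code that would mean something like the following in a filter's Open(),
reusing the filter_sys_t/pool from the sketch earlier in the thread;
NEEDED_PICTURES is an illustrative constant that only the filter itself can know:

    static int Open(filter_t *filter)
    {
        struct filter_sys_t *sys = malloc(sizeof (*sys));
        if (sys == NULL)
            return VLC_ENOMEM;

        /* allocate the private surfaces up front */
        sys->pool = picture_pool_NewFromFormat(&filter->fmt_out.video,
                                               NEEDED_PICTURES);
        if (sys->pool == NULL)
        {
            free(sys);
            /* refuse to start; the chain drops this filter, the pipeline keeps working */
            return VLC_EGENERIC;
        }
        filter->p_sys = sys;
        return VLC_SUCCESS;
    }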
>>>>>
>>>>> I'm not sure we can estimate this amount easily. For example, what would
>>>>> be the amount an adjust filter has to allocate?
>>>>
>>>> The adjust filter needs 0 surfaces since it operates in place.
>>>>
>>>> Point being anyway that only the filter can know that, if anything. Since the
>>>> filter chain is dynamic, it cannot be taken into account when the decoder and
>>>> display are set up. It may be that even the filter does not know how many
>>>> pictures it needs, but then there is nothing to fix; it will just maybe or
>>>> maybe not work by allocating on the fly.
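
For the record, "in place" just means something along these lines, assuming the
input picture is writable by the filter:

    static picture_t *FilterInPlace(filter_t *filter, picture_t *pic)
    {
        VLC_UNUSED(filter);
        for (int i = 0; i < pic->i_planes; i++)
        {
            plane_t *plane = &pic->p[i];
            for (int y = 0; y < plane->i_visible_lines; y++)
            {
                uint8_t *row = &plane->p_pixels[y * plane->i_pitch];
                /* ... adjust the pixels of this row ... */
                (void) row;
            }
        }
        return pic; /* same picture out: 0 extra surfaces needed */
    }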
>>>
>>> I don't really agree; doing things in place seems to go against the usual
>>> practice with filters. Or do we change this model to add copy filters?
>>>
>>> In the meantime, if filters are still processed synchronously, every filter
>>> would need only a fixed number of pictures, or at least a number of pictures
>>> known by the filter itself, except maybe the last one, which has to take into
>>> account the pictures allocated by the display. Copies can be made by the filter
>>> itself if it absolutely needs to keep a picture, and an optimization could be
>>> to do late copies only when the image is actually reused.
>>
>> A while ago I proposed adding a flag in pictures telling whether it's OK
>> to hold them for a while or not (if not, you should do a copy instead).
>> Basically the creator of the pictures knows whether they come from a
>> fixed-size pool or not, and thus whether it can allocate a lot more if it
>> needs to. We probably don't need it but we can still add it in the future.
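
With such a flag, a filter that wants to keep a picture around would do
something like this (b_can_hold and sys->kept are purely illustrative, nothing
like them exists yet):

    if (pic->b_can_hold) /* hypothetical flag set by the picture's creator */
    {
        /* holding a reference is allowed */
        sys->kept = picture_Hold(pic);
    }
    else
    {
        /* the picture comes from a fixed-size pool: make a real copy */
        picture_t *copy = picture_NewFromFormat(&pic->format);
        if (copy != NULL)
            picture_Copy(copy, pic);
        sys->kept = copy;
    }
    /* pic itself is then output/released as usual */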
>>
>>> Rémi, is the last-filter special case what you mean by the fact that only
>>> converters are allocating pictures, or is it more complex than that?
>>>
>>> Regards,
>>> --
>>> Alexandre Janniaux
>>> VideoLabs
>>>
>>>
>>>>
>>>> --
>>>> レミ・デニ-クールモン
>>>> http://www.remlab.net/