[vlc-devel] [PATCH v2 09/20] deinterlace: implement draining of extra pictures

Steve Lhomme robux4 at ycbcr.xyz
Fri Oct 16 09:21:01 CEST 2020


On 2020-10-15 21:43, Rémi Denis-Courmont wrote:
> On Thursday, 15 October 2020 at 22:16:42 EEST, Alexandre Janniaux wrote:
>> Hi,
>>> You're splitting hairs. GPU filtering is already feasible in some cases.
>>> This patchset makes it feasible in a few more cases. But it comes at the
>>> cost of increased complexity and asymmetry within the filters - unlike
>>> an owner callback for output pictures (as we already have for decoders).
>>
>> I don't think so; I'm not the one bringing this up, tbh, and I'm
>> actually using the filter_t pipeline as explained in the RFC, so I
>> really don't follow what you're talking about.
> 
> AFAIU, you brought two complaints against the current video filter interface:
> buffer requirements and processing time. Both of those problems boil down to
> processing breadth first, when you presumably want depth first instead.
> 
> There are three approaches to doing depth first:
> a) This patch adds a new function for further pictures. This is probably the
> most convoluted way.
> b) Separate input and output functions are less convoluted conceptually and
> for the caller. But they needlessly impact all filters, even those outputting
> 1:1 to their input.
> c) A(n owner) callback takes output pictures, just like with decoders already.
> This is actually naturally suited for depth first processing, since the
> callback can directly invoke the next filter chain element.
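To make the three options concrete, here is a rough sketch of what the
entry points might look like (illustrative names only, not the actual
VLC API; filter_t and picture_t stand in for the real types):

typedef struct filter_t filter_t;
typedef struct picture_t picture_t;

/* (a) this patchset: an extra entry point pulls the further pictures */
picture_t *Filter(filter_t *, picture_t *);
picture_t *Drain(filter_t *);            /* NULL when nothing is left */

/* (b) split input/output: two calls for every filter, even 1:1 ones */
int        Send(filter_t *, picture_t *);
picture_t *Receive(filter_t *);

/* (c) owner callback, as decoders already have: the filter pushes each
 * output picture, and the callback can feed the next chain element
 * directly, which is naturally depth first */
struct filter_owner_cbs {
    void (*queue_video)(filter_t *, picture_t *);
};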

It seems to me that the async way and the draining way are incompatible. 
In the former, the receiver is passive; in the latter, it is active. That 
is not an issue in the transcoder, as time is not (that) critical there. 
It may be in low-latency scenarios.

It's an issue if the receiver is passive in the vout. The timing is 
critical there, and that's why Alexandre had to find ways to avoid 
processing too many pictures at once when the vout wants to display them 
in time. I don't think this is possible with a purely async system where 
each "source" can push many pictures whenever it wants, flooding the output.

IMO that's too big a change for 4.0 anyway.

I think the "draining" approach offers a good alternative. It has already 
proved successful in the case Alexandre worked on. The receiver can decide 
when it's ready to receive more. It can also handle the frame dropping, 
theoretically at different stages.
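As a sketch of what that gives the receiver, assuming the hypothetical
Filter()/Drain() entry points from above plus made-up helpers:

#include <stddef.h>
#include <stdint.h>

typedef struct filter_t filter_t;
typedef struct picture_t picture_t;
typedef int64_t vlc_tick_t;

picture_t *Filter(filter_t *, picture_t *);
picture_t *Drain(filter_t *);                /* NULL when empty */
vlc_tick_t PictureDate(const picture_t *);   /* hypothetical */
void DisplayPicture(picture_t *);            /* hypothetical sink */
void DropPicture(picture_t *);               /* hypothetical */

/* The receiver stays active: it pulls the next drained picture only
 * when it is ready, and it can do the frame dropping itself. */
static void ConsumeOutput(filter_t *f, picture_t *in, vlc_tick_t now)
{
    for (picture_t *pic = Filter(f, in); pic != NULL; pic = Drain(f))
    {
        if (PictureDate(pic) >= now)
            DisplayPicture(pic);     /* still on time */
        else
            DropPicture(pic);        /* too late: drop at this stage */
    }
}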

> (...)
>> It also seems completely unrelated to why it cannot work with those
>> two differences handled by one or two mechanisms, without switching
>> to a dedicated thread or dedicated FIFO for each filter.
> 
> Nobody said to introduce threads or FIFOs for each filter. It would be
> ridiculous for some of the most trivial filters (e.g. VDPAU's). If a filter
> wants to run in a thread, because it is slow, asynchronous and/or prone to
> parallel computing, it's an implementation detail of that filter.
> 
> Again, same as with threaded and asynchronous decoders. The picture queuing
> callback gives the option. It does not require any change. Existing filters can
> still return their output picture(s), and let the filter chain invoke the
> queuing callback on their behalf.

That sounds like fake async.
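i.e. the filter chain would just do something like this on behalf of a
synchronous filter (a sketch with illustrative names; the outputs
chained through p_next are how filters can already return more than one
picture today):

#include <stddef.h>

typedef struct picture_t picture_t;
struct picture_t { picture_t *p_next; /* ... */ };
typedef struct filter_t filter_t;

picture_t *Filter(filter_t *, picture_t *);
void QueueVideo(filter_t *, picture_t *);   /* hypothetical owner cb */

static void ChainRun(filter_t *f, picture_t *in)
{
    picture_t *out = Filter(f, in);
    while (out != NULL) {
        picture_t *next = out->p_next;
        out->p_next = NULL;
        QueueVideo(f, out);   /* "async" in name only: same call stack */
        out = next;
    }
}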

But speaking of that, the "drain" I did could just as well be handled by 
passing a NULL input picture to check whether there are more pictures to 
handle/push. Except it means changing *all* video filters to handle a 
case that wasn't really supposed to happen.
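Something like this would be needed in every single filter (a sketch;
filter_sys_t, GetSys and DequeuePending stand for whatever state the
filter keeps and however it holds back pictures):

#include <stddef.h>

typedef struct filter_t filter_t;
typedef struct picture_t picture_t;
typedef struct filter_sys_t filter_sys_t;

filter_sys_t *GetSys(filter_t *);            /* hypothetical accessor */
picture_t *DequeuePending(filter_sys_t *);   /* NULL if nothing kept */
picture_t *Process(filter_t *, picture_t *);

static picture_t *Filter(filter_t *f, picture_t *in)
{
    if (in == NULL)                      /* new meaning: drain request */
        return DequeuePending(GetSys(f));

    /* normal processing path, unchanged */
    return Process(f, in);
}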

And yes, this is not the draining you had in mind, the one that 
"finishes" outputting all the pictures that can be output, because there 
will not be any more input.

What Alexandre proposed was to add a "discontinuity" flag to the drain 
call to tell it to output *really* everything it can. If we pass NULL to 
the existing Filter callback, we will also need a similar flag in 
addition. At this point the debate is between adding an optional callback 
and extending the existing callback for all filters. IMO the less 
intrusive option is the extra callback. It's also the more readable one, 
as it tells you which filters really produce extra pictures (and you can 
easily deduce that converters do not and should not).
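Side by side, the two variants would roughly be (illustrative
prototypes only):

#include <stdbool.h>

typedef struct filter_t filter_t;
typedef struct picture_t picture_t;

/* extending the existing callback: every filter sees the new cases */
picture_t *Filter(filter_t *, picture_t *in /* NULL means drain */,
                  bool discontinuity);

/* or one optional extra callback: only filters that can hold extra
 * pictures implement it, converters simply leave it unset */
picture_t *Drain(filter_t *, bool discontinuity);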

In both cases, moving to the "fake" async is just a matter of calling one 
way or the other. It makes zero difference. A real async system would be 
quite different.

