[vlc-devel] [PATCH v2 09/20] deinterlace: implement draining of extra pictures
ajanni at videolabs.io
Thu Oct 15 21:16:42 CEST 2020
On Thu, Oct 15, 2020 at 06:53:55PM +0300, Rémi Denis-Courmont wrote:
> On Thursday, 15 October 2020, at 18.36.23 EEST, Alexandre Janniaux wrote:
> > I'm not sure I understand your point. I never claimed that
> > this patchset implements GPU filtering, I've only mentioned
> > that moving the backpressure back in the pipeline was good
> > for GPU buffers and my work is not about GPU filtering, but
> > more precisely GPGPU filtering with OpenGL.
> You're splitting hairs. GPU filtering is already feasible in some cases. This
> patchset makes it feasible in a few more cases. But it comes at the cost of an
> increased complexity and asymmetry within the filters - unlike an owner
> callback for output pictures (as we already have for decoders).
I don't think so; I'm not the one introducing that asymmetry, to be
honest. I'm actually using the filter_t pipeline as explained in the
RFC, so I really don't follow what you're referring to.
> > Since I actually did use the filter pipeline like I described,
> > I'm quite curious why you are saying that it cannot work in
> > the following sentence.
> > > That's not draining and it cannot be treated the same way.
> I have already explained that multiple times, and I have no alternative
> explanations to give here. I think it is obvious that processing the last
> input and output picture(s) in a stream is not the same as processing further
> output picture(s) with no new input pictures mid-stream.
I never disagreed with this; I actually mentioned that it was a
different case and could be handled differently.
That still seems unrelated to why it cannot work with those two cases
handled by one or two mechanisms, without switching to a dedicated
thread or a dedicated FIFO for each filter.
> > > For that, you need asynchronous filtering, in other words with owner
> > > callbacks.
> > I also don't understand what kind of backpressure you want to
> > have and which execution model you want with asynchronous
> > filtering.
> The same model and the same motivation as with decoders.
So each filter in its own thread? Why switch the model for the
filters? Maybe Thomas can detail the motivation behind the decoder
rework here?