[vlc-devel] [PATCH v2 09/20] deinterlace: implement draining of extra pictures
Alexandre Janniaux
ajanni at videolabs.io
Sat Oct 17 14:35:29 CEST 2020
(Please note again that I neither approve of this patchset nor disagree
with it yet; I haven't even read it yet. I'm only reacting to the model
change.)
On Sat, Oct 17, 2020 at 02:33:17PM +0200, Alexandre Janniaux wrote:
> On Fri, Oct 16, 2020 at 06:18:55PM +0300, Rémi Denis-Courmont wrote:
> > On Friday, 16 October 2020 at 10:21:01 EEST, Steve Lhomme wrote:
> > > It seems to me that the async way and the draining way are incompatible.
> >
> > Of course not. They're completely orthogonal at the interface level. Indeed,
> > we have had asynchronous elements (at least audio outputs) with support for
> > draining for years. The libavcodec plugin is also threaded and capable of
> > draining.
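
As a quick illustration that asynchronicity and draining really are
orthogonal: with libavcodec, a threaded decoder is drained by sending a
NULL packet and then pulling frames until AVERROR_EOF. The wrapper below
is only a sketch with made-up names, not VLC code:

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Drain whatever is still buffered inside the (possibly threaded)
     * decoder: a NULL packet switches it to draining mode, then frames
     * are pulled until AVERROR_EOF. */
    static int drain_decoder(AVCodecContext *ctx, AVFrame *frame)
    {
        int ret = avcodec_send_packet(ctx, NULL);   /* enter draining mode */
        if (ret < 0)
            return ret;

        for (;;) {
            ret = avcodec_receive_frame(ctx, frame);
            if (ret == AVERROR_EOF)
                return 0;                           /* fully drained */
            if (ret < 0)
                return ret;
            /* ... hand the frame to the next stage ... */
            av_frame_unref(frame);
        }
    }
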
>
> I agree, they are equivalent solutions and the only change is that the
> exterior pull from the filter (i.e. a call to filter() to request an output)
> becomes an interior push from the filter.
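
To make the comparison concrete, here is a minimal sketch of the two call
shapes; the types and names below are invented for illustration and are not
the actual VLC filter API:

    typedef struct picture picture;
    typedef struct filter  filter;

    /* Pull model: the owner calls into the filter and collects the output.
     * A NULL return means "no picture available for this input (yet)". */
    picture *filter_pull(filter *f, picture *in);

    /* Push model: the filter hands pictures back through an owner callback,
     * possibly zero, one or several times per input, possibly later. */
    typedef void (*emit_cb)(void *owner, picture *out);
    void filter_push(filter *f, picture *in, emit_cb emit, void *owner);
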
>
> I would not call this asynchronous though, and I fear introducing
> recursion here, as it complicates the stack trace for nothing in C. I had
> similar thoughts back when I tried alternative loading models for filters
> so as to handle incompatible video contexts right before display, though
> I completely abandoned that track.
>
> I don't mind switching the paradigm for filters, though it would feel
> frustrating to change it just as additional features are being suggested.
> In any case, it probably needs a workshop and additional discussion rather
> than being done blindly.
>
> In addition, and like the previous models in particular, it is especially
> suitable for CPU filters but falls short when it comes to OpenGL
> filters, where the context would be switched much more often and restoring
> the previous context would become necessary -- not that this would be a
> problem in itself, since it can be handled in a single place with what has
> been submitted so far, but it's not just «magic». In the current model,
> filters can expect to control most of the execution path until the
> picture is returned, which gives them much more control and
> simplifies their execution.
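
For what it's worth, the «single place» mentioned above could be as small as
a wrapper that saves and restores the caller's EGL context around the filter
callback. The EGL calls are real, but the wrapper itself is only a
hypothetical sketch, not something from the submitted patches:

    #include <EGL/egl.h>

    /* Hypothetical helper: make the filter's own context current, run its
     * callback, then restore whatever context the caller had. */
    static void run_with_filter_context(EGLDisplay dpy, EGLSurface surf,
                                        EGLContext filter_ctx,
                                        void (*cb)(void *), void *opaque)
    {
        /* save the caller's current context */
        EGLDisplay prev_dpy  = eglGetCurrentDisplay();
        EGLContext prev_ctx  = eglGetCurrentContext();
        EGLSurface prev_draw = eglGetCurrentSurface(EGL_DRAW);
        EGLSurface prev_read = eglGetCurrentSurface(EGL_READ);

        eglMakeCurrent(dpy, surf, surf, filter_ctx);
        cb(opaque);                      /* the filter does its GL work here */

        /* restore the previous context, or release ours if there was none */
        if (prev_ctx != EGL_NO_CONTEXT)
            eglMakeCurrent(prev_dpy, prev_draw, prev_read, prev_ctx);
        else
            eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    }
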
>
> I've been experimenting with theoretical two-pass designs too, so as to
> provide the asynchronous foundation that a pbuffer-based OpenGL
> implementation of the filters would benefit from by enabling pipelining,
> but it's currently too sketchy and too complex to be of any use.
>
> With that added to the discussion, I think, like Steve, that a rework of
> the filter design would benefit from not being part of the 4.0 release
> and should be moved to a 5.0 concern, in parallel with the buffering rework
> and maybe the start of the work on security sandboxing.
>
> Regards,
> --
> Alexandre Janniaux
> Videolabs