[vlc-devel] [PATCH] video_output: wait half the extra time we have without getting the display_lock

Alexandre Janniaux ajanni at videolabs.io
Sun Feb 21 12:16:36 UTC 2021


Hi,

On Sat, Feb 20, 2021 at 08:06:06PM +0200, Rémi Denis-Courmont wrote:
> On Saturday, 20 February 2021 at 16.54.42 EET, Steve Lhomme wrote:
> > On 2/20/2021 10:22 AM, Rémi Denis-Courmont wrote:
> > > On Saturday, 20 February 2021 at 11.03.18 EET, Steve Lhomme wrote:
> > >> On 2/19/2021 4:06 PM, Rémi Denis-Courmont wrote:
> > >>> On Friday, 19 February 2021 at 15.12.49 EET, Steve Lhomme wrote:
> > >>>> In an ideal world we would never wait between prepare and display, they
> > >>>> would have predictable time and we could estimate exactly when we must
> > >>>> do the prepare to be on-time for the display.
> > >>>
> > >>> That's not ideal either. prepare() should rather be called at the
> > >>> earliest opportunity, which is to say once the picture has been
> > >>> filtered and converted. Estimations should only be used for
> > >>> display(), if at all.
> > >>
> > >> I disagree. First the picture is not "filtered" at this stage, only
> > >> deinterlaced and converted.
> > >
> > > That's plain nonsense. At the stage "whence the picture has been filtered
> > > and converted", the picture has, by definition, been filtered.
> > >
> > >> The user filters are applied just before rendering.
> > >
> > > That's a bug. It prevents evening out the resource usage (both internally
> > > and externally with other concurrent processes), and generally being more
> > > robust against scheduling glitches. It's also incompatible with modern
> > > video filters and outputs that support queueing.
> > >
> > > There is no basis to assume that only deinterlacing requires ahead-of-time
> > > processing for optimal results. Motion blur is an obvious counter example
> > > here. Besides, in practice deinterlacing is the only filter (if any) in
> > > most cases.
> >
> > There's a very simple scenario to support this behaviour. Pause the
> > video and change the user filters from the UI. The render changes. It's
> > done on the same "pre-rendered" (deinterlace+convert) picture.
>
> That's still an exception within an exception. It's very seldom that somebody
> changes filter settings while paused. In the first place, most people don't use
> filters, and most filters mostly don't work anymore due to hardware
> acceleration.
>
> And besides, then what? How is this any different from audio, where we do
> process samples ahead of time (subject to low latency setting) rather than at
> the last minute? You're just confusing latency and frame period here.

I agree with Rémi, though we should probably lay out both the
potential issues raised here and the current design in a more
structured form.

> In most cases, latency can be larger than a period. In fact, some filters do
> require it (if they need to look ahead) already now. The general idea that we
> need to support multiple prepared pictures was already agreed at the workshop
> anyway, AFAIR.
>
> The current simplistic prepare-display API is just a historical artefact from
> the penultimate vout rework by Laurent and myself, for the sole sake of
> limiting the scope of changes back then.

I don't think we agreed to support multiple prepared pictures.
We did discuss the display prepare/display lifecycle and even
mentioned a prepare/display/unprepare construction, but I'm not
sure what else we discussed.
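
For reference, here is a rough sketch of what that lifecycle could
look like. All names are hypothetical and this is not an agreed
API; vout_display_t and picture_t are the existing core types:

#include <vlc_common.h>
#include <vlc_picture.h>
#include <vlc_vout_display.h>

/* Hypothetical prepare/display/unprepare lifecycle, for
 * illustration only. */
struct display_pic_ops
{
    /* Upload/convert the picture ahead of time, possibly well
     * before the presentation deadline. */
    int  (*prepare)(vout_display_t *vd, picture_t *pic);

    /* Queue the prepared picture for presentation. */
    void (*display)(vout_display_t *vd, picture_t *pic);

    /* Release the display-side resources bound to the picture, so
     * that the core can recycle or drop it without display(). */
    void (*unprepare)(vout_display_t *vd, picture_t *pic);
};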

However, I'm all for having this kind of support, which would
amount to moving some parts of the vout_thread into the displays.
Indeed, among other simplifications and features, many displays
could benefit from using the WSI frame throttling mechanism
instead of relying on an arbitrary refresh value and archaic
synchronous time measurements.
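
For instance, Wayland already exposes exactly that kind of
throttling through its frame callback. A minimal sketch, outside
of any actual VLC display code (struct state and submit_frame are
made up for the example):

#include <stdbool.h>
#include <wayland-client.h>

struct state
{
    struct wl_buffer *buffer;
    bool can_draw;
};

static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
{
    struct state *st = data;
    (void) time;
    wl_callback_destroy(cb);
    st->can_draw = true; /* the compositor hints that a new frame is useful */
}

static const struct wl_callback_listener frame_listener = { .done = frame_done };

static void submit_frame(struct state *st, struct wl_surface *surface)
{
    /* Ask to be notified when drawing the next frame makes sense,
     * then commit the current buffer. */
    struct wl_callback *cb = wl_surface_frame(surface);
    wl_callback_add_listener(cb, &frame_listener, st);
    wl_surface_attach(surface, st->buffer, 0, 0);
    wl_surface_commit(surface);
    st->can_draw = false;
}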

This probably needs a bit more thought than «we must do that»
though, since the different displays have different requirements
relative to their associated WSI, and the way I describe it here
is quite broken since it would basically bypass the display_lock.

Ideally, we could even prevent the core from calling prepare
multiple times on the same picture, which in the case of avcodec
SW decoding would avoid uploading the same picture multiple times
(it's already doable by comparing the pictures, as long as the
core doesn't hand over a different clone each time), and we could
pipeline the uploads asynchronously with PBO, like interop_sw
already does, except it would actually be useful since we don't
pipeline right now. Also, most of the rendering can probably be
deferred to filters instead of the display in most cases, which
means that in the most common situations the prepare/display
cycle itself wouldn't even be an issue.
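
As a sketch of the comparison trick mentioned above (maybe_prepare
and upload_picture are hypothetical helpers, not existing code):

#include <vlc_common.h>
#include <vlc_picture.h>
#include <vlc_vout_display.h>

/* Hypothetical upload, e.g. an asynchronous PBO transfer; the
 * actual work is left out for the sketch. */
static void upload_picture(vout_display_t *vd, picture_t *pic)
{
    (void) vd; (void) pic;
}

/* Skip redundant uploads by remembering the last prepared picture.
 * Pointer comparison only holds as long as the core doesn't hand
 * over a different clone of the same picture each time. */
static void maybe_prepare(vout_display_t *vd, picture_t *pic,
                          picture_t **last_prepared)
{
    if (pic == *last_prepared)
        return; /* same picture, e.g. repeated prepare while paused */

    upload_picture(vd, pic);
    *last_prepared = pic;
}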

Regards,
--
Alexandre Janniaux
Videolabs

