[vlc-devel] [PATCH] video_output: wait half the extra time we have without getting the display_lock
ajanni at videolabs.io
Mon Feb 22 08:33:35 UTC 2021
On Mon, Feb 22, 2021 at 08:46:46AM +0100, Steve Lhomme wrote:
> On 2021-02-20 19:06, Rémi Denis-Courmont wrote:
> > Le lauantaina 20. helmikuuta 2021, 16.54.42 EET Steve Lhomme a écrit :
> > > On 2/20/2021 10:22 AM, Rémi Denis-Courmont wrote:
> > > > Le lauantaina 20. helmikuuta 2021, 11.03.18 EET Steve Lhomme a écrit :
> > > > > On 2/19/2021 4:06 PM, Rémi Denis-Courmont wrote:
> > > > > > Le perjantaina 19. helmikuuta 2021, 15.12.49 EET Steve Lhomme a écrit :
> > > > > > > In an ideal world we would never wait between prepare and display, they
> > > > > > > would have predictable time and we could estimate exactly when we must
> > > > > > > do the prepare to be on-time for the display.
> > > > > >
> > > > > > That's not ideal either. prepare() should rather be called at the
> > > > > > earliest
> > > > > > option, which is to say whence the picture has been filtered and
> > > > > > converted.
> > > > > > Estimations should only be used for display() if at all.
> > > > >
> > > > > I disagree. First the picture is not "filtered" at this stage, only
> > > > > deinterlaced and converted.
> > > >
> > > > That's plain nonsense. At the stage "whence the picture has been filtered
> > > > and converted", the picture has, by definition, been filtered.
> > > >
> > > > > The user filters are applied just before rendering.
> > > >
> > > > That's a bug. It prevents evening out the resources usage (both internally
> > > > and externally with other concurrent processes), and generally being more
> > > > robust against scheduling glitches. It's also incompatible with modern
> > > > video filters and outputs that support queueing.
> > > >
> > > > There is no basis to assume that only deinterlacing requires ahead-of-time
> > > > processing for optimal results. Motion blur is an obvious counter example
> > > > here. Besides, in practice deinterlacing is the only filter (if any) in
> > > > most cases.
> > >
> > > There's a very simple scenario to support this behaviour. Pause the
> > > video and change the user filters from the UI. The render changes. It's
> > > done on the same "pre-rendered" (deinterlace+convert) picture.
> > That's still an exception within an exception. It's very seldom that somebody
> > changes filter settings while paused. In the first place, most people don't use
> > filters, and most filters mostly don't work anymore due to hardware
> > acceleration.
> > And besides, then what? How is this any different from audio, where we do
> You can pause video and still see the rendered result. You can't in audio
> (or you render a short loop of audio buffer and it's just horrible). While
> paused you can resize, crop, change the aspect ratio, change user filters
> and see the result live, in a smooth manner. None of that is possible in
> audio. So let's not compare apples and pears.
To be fair, it's also horrible if the video continues to play
for a little while after you press pause. There have been a
lot of reports about that: users don't expect perceptible
latency for those actions.
The vout thread is essentially a resampler, because we usually
don't control the screen refresh rate, so it's just like the
audio world in that respect. And once we make that comparison:
the core pushes audio buffers to the audio_output, and the
backend consumes them by pulling, with calls that correspond
to its current media time.