[vlc-devel] [PATCH] video_output: wait half the extra time we have without getting the display_lock
Steve Lhomme
robux4 at ycbcr.xyz
Mon Feb 22 07:46:46 UTC 2021
On 2021-02-20 19:06, Rémi Denis-Courmont wrote:
> On Saturday, 20 February 2021 at 16:54:42 EET, Steve Lhomme wrote:
>> On 2/20/2021 10:22 AM, Rémi Denis-Courmont wrote:
>>> On Saturday, 20 February 2021 at 11:03:18 EET, Steve Lhomme wrote:
>>>> On 2/19/2021 4:06 PM, Rémi Denis-Courmont wrote:
>>>>> On Friday, 19 February 2021 at 15:12:49 EET, Steve Lhomme wrote:
>>>>>> In an ideal world we would never wait between prepare and display:
>>>>>> each would take a predictable amount of time, and we could estimate
>>>>>> exactly when we must do the prepare to be on time for the display.
>>>>>
>>>>> That's not ideal either. prepare() should rather be called at the
>>>>> earliest opportunity, which is to say once the picture has been
>>>>> filtered and converted. Estimations should only be used for
>>>>> display(), if at all.
>>>>
>>>> I disagree. First the picture is not "filtered" at this stage, only
>>>> deinterlaced and converted.
>>>
>>> That's plain nonsense. At the stage "once the picture has been
>>> filtered and converted", the picture has, by definition, been filtered.
>>>
>>>> The user filters are applied just before rendering.
>>>
>>> That's a bug. It prevents evening out the resource usage (both internally
>>> and externally with other concurrent processes), and generally being more
>>> robust against scheduling glitches. It's also incompatible with modern
>>> video filters and outputs that support queueing.
>>>
>>> There is no basis to assume that only deinterlacing requires ahead-of-time
>>> processing for optimal results. Motion blur is an obvious counterexample
>>> here. Besides, in practice deinterlacing is the only filter (if any) in
>>> most cases.
>>
>> There's a very simple scenario to support this behaviour. Pause the
>> video and change the user filters from the UI. The rendering changes,
>> and it is done on the same "pre-rendered" (deinterlaced + converted)
>> picture.
>
> That's still an exception within an exception. It's very seldom that somebody
> changes filter settings while paused. In the first place, most people don't
> use filters, and most filters don't work anymore due to hardware
> acceleration.
>
> And besides, then what? How is this any different from audio, where we do
> process samples ahead of time (subject to the low-latency setting) rather
> than at the last minute? You're just confusing latency and frame period
> here.
You can pause a video and still see the rendered result. You can't with
audio (or you render a short loop of the audio buffer, and it's just
horrible). While paused you can resize, crop, change the aspect ratio, or
change the user filters and see the result live, in a smooth manner. None
of that is possible with audio. So let's not compare apples and pears.
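
For illustration, here is a minimal standalone sketch of that scenario:
while paused, a filter-chain change simply re-renders the held
(deinterlaced + converted) picture, so the new result is visible
immediately. All names are hypothetical stand-ins, not the actual vout
code:

/* Standalone sketch: re-rendering the held picture when the user changes
 * filters while playback is paused.  Types and names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; } sketch_picture_t;          /* stand-in for a frame   */
typedef struct { const char *chain; } sketch_filters_t; /* stand-in for a chain */

typedef struct {
    bool             paused;
    sketch_picture_t held;   /* last deinterlaced/converted picture  */
    sketch_filters_t user;   /* user filters applied at render time  */
} vout_sketch_t;

/* Render = apply the user filters on the held picture, then show it. */
static void render(const vout_sketch_t *vout)
{
    printf("render picture #%d with filters \"%s\"\n",
           vout->held.id, vout->user.chain);
}

/* Changing filters while paused re-renders the same held picture, so the
 * effect of the new chain is visible without advancing playback. */
static void change_filters(vout_sketch_t *vout, const char *chain)
{
    vout->user.chain = chain;
    if (vout->paused)
        render(vout);
}

int main(void)
{
    vout_sketch_t vout = { .paused = true,
                           .held = { .id = 42 },
                           .user = { .chain = "none" } };

    render(&vout);                          /* picture shown when pause hit */
    change_filters(&vout, "sharpen");       /* live preview while paused    */
    change_filters(&vout, "sharpen,grain");
    return 0;
}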
>
> In most cases, latency can be larger than a period. In fact, some filters
> already require it (if they need to look ahead). The general idea that we
> need to support multiple prepared pictures was already agreed at the workshop
> anyway, AFAIR.
I don't remember that at all.
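
For context, "multiple prepared pictures" would essentially mean a small
look-ahead queue between prepare and display. A minimal sketch of such a
queue, with purely hypothetical names (this is not the actual vout API):

/* Sketch of a small look-ahead queue of prepared pictures. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PREPARED_DEPTH 3   /* how many pictures may be prepared ahead */

typedef struct {
    int     id;
    int64_t pts;   /* presentation timestamp in microseconds */
} prepared_pic_t;

typedef struct {
    prepared_pic_t slot[PREPARED_DEPTH];
    unsigned       count;
} prepared_queue_t;

/* prepare() side: push a picture as soon as it is filtered/converted. */
static bool queue_push(prepared_queue_t *q, prepared_pic_t pic)
{
    if (q->count == PREPARED_DEPTH)
        return false;              /* back-pressure: caller waits */
    q->slot[q->count++] = pic;
    return true;
}

/* display() side: pop the oldest picture once its deadline is reached. */
static bool queue_pop(prepared_queue_t *q, int64_t now, prepared_pic_t *out)
{
    if (q->count == 0 || q->slot[0].pts > now)
        return false;              /* nothing due yet */
    *out = q->slot[0];
    for (unsigned i = 1; i < q->count; i++)
        q->slot[i - 1] = q->slot[i];
    q->count--;
    return true;
}

int main(void)
{
    prepared_queue_t q = { .count = 0 };

    queue_push(&q, (prepared_pic_t){ .id = 1, .pts = 40000 });
    queue_push(&q, (prepared_pic_t){ .id = 2, .pts = 80000 });

    prepared_pic_t pic;
    if (queue_pop(&q, 40000, &pic))        /* "now" has reached pic #1 */
        printf("display picture #%d\n", pic.id);
    return 0;
}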
> The current simplistic prepare-display API is just a historical artefact from
> the penultimate vout rework by Laurent and myself, for the sole sake of
> limiting the scope of changes back then.
>
> --
> 雷米‧德尼-库尔蒙
> http://www.remlab.net/
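
For completeness, the idea in the subject line ("wait half the extra time
we have without getting the display_lock") can be illustrated roughly as
follows: when there is slack left before the picture's deadline, sleep for
half of it before acquiring the display lock, so other threads can use the
lock during that window. The names below are hypothetical and this is only
a sketch, not the actual patch:

/* Rough illustration: hold display_lock only for the second half of the
 * remaining slack before the deadline.  Hypothetical names. */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdint.h>
#include <time.h>

static int64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

static void sleep_us(int64_t us)
{
    struct timespec ts = { .tv_sec = us / 1000000,
                           .tv_nsec = (long)(us % 1000000) * 1000 };
    nanosleep(&ts, NULL);
}

static void wait_then_display(pthread_mutex_t *display_lock, int64_t deadline)
{
    int64_t slack = deadline - now_us();
    if (slack > 0)
        sleep_us(slack / 2);          /* first half: lock not held   */

    pthread_mutex_lock(display_lock);
    slack = deadline - now_us();
    if (slack > 0)
        sleep_us(slack);              /* second half: under the lock */
    /* ... the display callback would run here ... */
    pthread_mutex_unlock(display_lock);
}

int main(void)
{
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    wait_then_display(&lock, now_us() + 20000);   /* deadline 20 ms away */
    return 0;
}

Splitting the slack this way keeps the lock free for most of the wait
(for resizes, filter changes, and so on) while still arriving on time for
the display call.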