[vlc-devel] [PATCH v2 17/18] video_output: restart display after filter change

Steve Lhomme robux4 at ycbcr.xyz
Thu Nov 26 08:44:20 CET 2020


On 2020-11-25 16:55, Rémi Denis-Courmont wrote:
> Le mercredi 25 novembre 2020, 09:17:53 EET Steve Lhomme a écrit :
>> On 2020-11-24 16:31, Rémi Denis-Courmont wrote:
>>> This seems to have the exact same problem as the previous version. It is
>>> very much intended that we try to adjust the display conversion chain to
>>> changes in the chroma or video format properties that are not allowed to
>>> change for the display lifetime.
>>>
>>> This allows swapping one conversion filter for another without messing up
>>> the display. Restarting the display should only be done as an absolute
>>> last resort.
>> I agree. But it is not the job of the converter (be it in
>> osys->converter or filter.chain_interactive) to *cancel* format changes
>> done by a filter. Some simple examples:
>>
>> * a deinterlacer outputs half the frame size; a converter will artificially
>> scale that frame back up for no reason, when the display, with a better
>> resizing algorithm (and using fewer resources), could do the same.
> 
> On a scaled display, the old scaler from video resolution to window resolution
> will be dropped, and a new scaler from the halved resolution gets added. No
> problem there.

I don't understand what you're referring to.

> On a non-scaled display, in pull model, we would have needed a new pool. We
> couldn't handle it, and so we ended up keeping the display as it was, and
> inserting a scaler. That's why in 3.0, the transform filter causes a scaler to
> be (incorrectly) added.
> 
> But now in push model, shrinking the resolution is not a problem at all. This
> is just a change of both source A/R and source crop. Increasing the resolution
> might need small fixes in a few displays if they make assumptions about i_width
> and i_height, but that's pretty much it - the "physical" format size has no
> real meaning in push model.

It is *not* a source A/R or crop change. The i_width/i_height of the 
filter output is completely different from what the converter+display 
were expecting. It doesn't matter whether the display can scale or not: 
the way it picks pixels from the source needs to be updated.
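
To be concrete, the kind of mismatch I mean is roughly this (a simplified 
sketch: the struct and the check stand in for video_format_t and the real 
vout decision, the names are mine):

/* Simplified sketch: the fields stand in for the relevant parts of
 * video_format_t; NeedRestart() is a made-up name for the decision the
 * vout has to take when the filter chain output changes. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t chroma;             /* pixel format fourcc */
    unsigned i_width, i_height;  /* allocated ("physical") dimensions */
    unsigned visible_w, visible_h;
} fmt_t;

/* The converter + display were built for 'expected'; the filter now
 * outputs 'actual'. A change of i_width/i_height (not just crop or A/R)
 * means the way pixels are fetched from the source is no longer valid. */
static bool NeedRestart(const fmt_t *expected, const fmt_t *actual)
{
    return expected->chroma   != actual->chroma
        || expected->i_width  != actual->i_width
        || expected->i_height != actual->i_height;
}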

Changing the source size is not transparent for a "video converter" 
either. For example in D3D I use a staging texture that can be read or 
written by the CPU and then copied to a regular, non-readable texture for 
the rest of the pipeline. Such a converter needs to know when the 
source changes. And just like for the display, it is more straightforward 
to recreate that converter than to check each of them to see whether it 
can handle such a change or not.
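
Roughly, the staging path looks like this (a sketch only: not the actual 
VLC D3D11 converter, the function names are mine and error handling is 
stripped). The dimensions are baked into the textures at creation time, 
which is why a source size change means recreating them:

/* Sketch of a CPU <-> GPU upload path with a staging texture (D3D11, C). */
#define COBJMACROS
#include <d3d11.h>

static HRESULT CreateUploadTextures(ID3D11Device *device,
                                    unsigned width, unsigned height,
                                    ID3D11Texture2D **staging,
                                    ID3D11Texture2D **gpu)
{
    D3D11_TEXTURE2D_DESC desc = {
        .Width = width, .Height = height,
        .MipLevels = 1, .ArraySize = 1,
        .Format = DXGI_FORMAT_NV12,
        .SampleDesc = { .Count = 1 },
    };

    /* CPU-accessible staging texture */
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
    HRESULT hr = ID3D11Device_CreateTexture2D(device, &desc, NULL, staging);
    if (FAILED(hr))
        return hr;

    /* GPU-only texture used by the rest of the pipeline */
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.CPUAccessFlags = 0;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    return ID3D11Device_CreateTexture2D(device, &desc, NULL, gpu);
}

/* After the CPU wrote into the staging texture, copy it to the GPU one.
 * CopyResource requires matching dimensions and format, hence the need
 * to recreate both textures when the source size changes. */
static void Upload(ID3D11DeviceContext *ctx,
                   ID3D11Texture2D *gpu, ID3D11Texture2D *staging)
{
    ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)gpu,
                                     (ID3D11Resource *)staging);
}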

>> * adding a GPU filter to a CPU-based pipeline will force a GPU-to-CPU
>> conversion at the end of the filter chain to match the CPU format that was
>> originally given to the display, when in fact it could do a lot better with
>> the GPU source.
> 
> First, CPU filters explicitly refuse to start when the input is in GPU memory,
> for obvious performance reasons. This case is impossible.

This is arbitrary, not impossible.

> And second, even if a filter would accept this, you need two conversions (GPU->
> CPU and CPU->GPU), and that's what you get. The first one will be
> done by the filter chain in front of the SW filter, and the second one will be
> done by the display converter in front of the display.

Except the display may accept the new GPU chroma as input where it had a 
CPU chroma before, so that CPU->GPU conversion would not happen at all. 
That's what was discussed at the workshop, and what we agreed should be done.
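
In other words, the decision is roughly this (a sketch with made-up names, 
not the existing API):

/* Hypothetical sketch: what to do when the filter chain output chroma
 * changes. The names are made up; 'display_accepts_new' would come from
 * probing the display module with the new chroma. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t fourcc_t;

enum chroma_action {
    REUSE_DISPLAY,     /* nothing changed, keep the display as-is */
    RESTART_DISPLAY,   /* reopen the display with the new chroma */
    INSERT_CONVERTER,  /* last resort: convert back to the old chroma */
};

static enum chroma_action OnChromaChange(fourcc_t old_chroma,
                                         fourcc_t new_chroma,
                                         bool display_accepts_new)
{
    if (new_chroma == old_chroma)
        return REUSE_DISPLAY;
    if (display_accepts_new)
        return RESTART_DISPLAY; /* e.g. GPU chroma fed directly, no CPU->GPU */
    return INSERT_CONVERTER;
}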

