[vlc-devel] Vout Workshop

Rémi Denis-Courmont remi at remlab.net
Wed Feb 20 17:52:48 CET 2019


On Wednesday, 20 February 2019 at 18:11:10 EET, Steve Lhomme wrote:
> * Reduce HMD/viewpoint latency

That feels out of scope.

> * Rendering callbacks
> - merge the opengl/d3d APIs ?

As I pointed out before, the attempt to unify them looked more like two mostly 
disjoint sets of APIs in the same structure than a proper unification.

That being so, I don't think it's reasonable to merge them, any more than it 
is reasonable to merge the support code for each embedded window type.

And it seems unlikely that real-life user applications would merge the caller 
side anyway. As far as I know, 3D engines have rather distinct code paths for 
their respective GL, DX, VK backends... so the benefits of merging the API are 
unclear to me.

>   - cannot be switched easily from one to another
>   - common callbacks should remain the same

You don't need to merge the APIs to unify selected callback prototypes where 
applicable.
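
To illustrate (hypothetical names, not the actual VLC structures): the GL and
D3D callback sets can stay separate while reusing the same prototypes wherever
the semantics are identical, e.g. resize and cleanup notifications.

/* Prototypes shared by both backends (hypothetical). */
typedef void (*report_size_cb)(void *opaque, unsigned width, unsigned height);
typedef void (*cleanup_cb)(void *opaque);

/* GL-specific callback set. */
struct gl_callbacks {
    void *(*get_proc_address)(void *opaque, const char *name); /* GL only */
    report_size_cb resize;   /* same prototype as D3D */
    cleanup_cb     cleanup;  /* same prototype as D3D */
};

/* D3D-specific callback set. */
struct d3d_callbacks {
    void (*set_device)(void *opaque, void *device);             /* D3D only */
    report_size_cb resize;   /* same prototype as GL */
    cleanup_cb     cleanup;  /* same prototype as GL */
};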

> - handle as many usecases as possible because the API shouldn't change
> in VLC 5.0
>    - support 10 bits rendering (needs to tell the host before displaying)
>    - support PQ/HLG/Linear render output as well (we tell the host what
> might be good, it tells us what it's using)

> * Move the picture lock out of the display pool

We already covered that in the previous two workshops. There is no need for 
a lock callback with push-buffers unless the decoder implementation is 
schizophrenic.

> - handle the double/triple buffering in each display module

If you mean queueing more than one picture, that is way beyond scope. This has 
non-trivial (to say the least) interactions with buffering, the clock and 
subpictures, and both the time and the expertise to deal with them will be missing.

And other than that specific problem, double buffering is an implementation 
detail of each display plugin, and is nothing new in 4.0.

> - handle the copy in the prepare rather than in the core code ? (becomes
> all internal)

That's what prepare was always for - uploading and rendering - and it's been 
there for about a decade.

>    - the core doesn't need to know about zero copy here
>    - blending may still be needed, handled by the display module ?

Yes, and in other words, there is nothing to do as far as the core is concerned 
(except removing the lock and unlock callbacks completely in the distant 
future). That's more or less how it works in the video outputs that already 
support push-buffers.
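
As a rough sketch (hypothetical code, not the existing modules): with
push-buffers the producer owns the pictures it sends downstream, so there are
no lock/unlock callbacks; prepare() does the copy/upload and any blending into
the module's own staging surface, and display() merely presents it.

#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t *pixels;    /* owned by the producer (the decoder) */
    size_t   size;
} pushed_picture_t;

typedef struct {
    uint8_t *staging;   /* module-private surface, e.g. a mapped texture */
    size_t   size;
} display_state_t;

/* prepare(): the copy/upload happens entirely inside the module;
 * subpicture blending onto the staging surface would also go here. */
static void display_prepare(display_state_t *sys, const pushed_picture_t *pic)
{
    size_t n = pic->size < sys->size ? pic->size : sys->size;
    memcpy(sys->staging, pic->pixels, n);
}

/* display(): just present what prepare() produced (stubbed). */
static void display_show(display_state_t *sys)
{
    (void)sys;          /* SwapBuffers()/Present() in a real backend */
}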

> * What is the date used for in the prepare() ?

The date is the PTS and is needed for asynchronous/queued video outputs.
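
For example (a hypothetical queued output, not an existing module): prepare()
does not present immediately, it enqueues the frame together with the PTS so
the asynchronous sink can schedule the actual presentation.

#include <stdint.h>

typedef int64_t tick_t;        /* microsecond timestamp, in the spirit of vlc_tick_t */

typedef struct {
    void  *frame;              /* prepared frame handle */
    tick_t pts;                /* when it should reach the screen */
} queued_frame_t;

#define QUEUE_DEPTH 4

typedef struct {
    queued_frame_t queue[QUEUE_DEPTH];
    unsigned       count;
} async_display_t;

/* prepare(): enqueue the frame with its PTS instead of presenting it now;
 * the sink later uses the date to time the actual flip. */
static int display_prepare(async_display_t *d, void *frame, tick_t date)
{
    if (d->count == QUEUE_DEPTH)
        return -1;             /* queue full: the caller has to wait */
    d->queue[d->count].frame = frame;
    d->queue[d->count].pts   = date;
    d->count++;
    return 0;
}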

> * Split the rendering from the display
> - can be used to display() on the display
> - can be used to render into textures for callbacks
> - can be used to render into textures for encoders
> - can handle the blending of SPU
> - can be used for compositing (SBS to mono, frame sequential to SBS)

Display was stripped of window management and input events a long time ago, 
and is being stripped of buffer management by push-buffers. All that's left is 
rendering; you cannot split that one remaining thing.

> * Merge 3D and HMD branches into master
> * Integrate the vout inside QML ?

For what purpose? You don't need that to blend a QML OSD on top of the video, 
and it duplicates the testing and maintenance effort.

> * PUSH
(...)

Out of scope.

-- 
雷米‧德尼-库尔蒙
http://www.remlab.net/
