[vlc-devel] Vout Workshop

Steve Lhomme robux4 at ycbcr.xyz
Wed Feb 20 17:11:10 CET 2019


Hi,

As you know, we'll have a vout workshop in Paris this weekend. There is 
a lot to cover on many topics. So, to kickstart things, give an idea of 
what needs to be discussed and maybe let people look at the code ahead 
of the meeting, I've collected the topics and ideas below.

You may have different ideas or more issues/topics to cover. Feel free 
to list them in this thread.

* Reduce HMD/viewpoint latency

- the display tells us when the next VSync is expected
- estimate the viewpoint at VSync time based on previous values
   - we need to know the time at which the viewpoint was received
   - we need to know the frame rate in case we are running too late and 
will thus skip one or more VSyncs
   - does anyone know a good algorithm for extrapolating timed values to 
a future event? (see the sketch below)
   - the interpolation can also be done between the current picture and 
the next picture for fast VSync (60 fps content on a 165 Hz screen) 
(https://github.com/dthpham/butterflow)
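
To give the discussion a starting point, here is a minimal sketch of the 
viewpoint prediction, assuming we keep the last two timestamped 
viewpoints and linearly extrapolate each angle to the expected VSync 
date. The types and names are made up for illustration, this is not 
existing vout/HMD code:

/* Minimal sketch: linear extrapolation of the viewpoint to the next VSync.
 * All names are hypothetical. A real implementation would also have to
 * handle angle wrap-around (359° -> 1°) and probably smooth the result. */
#include <stdint.h>

typedef struct {
    int64_t date_us;            /* when this viewpoint was received */
    float   yaw, pitch, roll;   /* degrees */
} hmd_viewpoint_t;

/* Extrapolate from the two most recent samples to the expected VSync date.
 * Falls back to the last known value if there is not enough history. */
static hmd_viewpoint_t
ViewpointAtVSync(const hmd_viewpoint_t *prev, const hmd_viewpoint_t *last,
                 int64_t vsync_date_us)
{
    hmd_viewpoint_t out = *last;
    int64_t dt = last->date_us - prev->date_us;
    if (dt <= 0)
        return out;

    float k = (float)(vsync_date_us - last->date_us) / (float)dt;
    out.yaw   = last->yaw   + k * (last->yaw   - prev->yaw);
    out.pitch = last->pitch + k * (last->pitch - prev->pitch);
    out.roll  = last->roll  + k * (last->roll  - prev->roll);
    out.date_us = vsync_date_us;
    return out;
}

A Kalman or quaternion-based predictor would likely do better, but even 
this kind of extrapolation needs the reception time of each viewpoint 
and the expected VSync date, which is the point above.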

* Rendering callbacks
- merge the opengl/d3d APIs ?
  - they cannot easily be switched from one to the other
  - the common callbacks should remain the same
- handle as many use cases as possible because the API shouldn't change 
in VLC 5.0
   - support 10-bit rendering (we need to tell the host before displaying)
   - support PQ/HLG/Linear render output as well (we tell the host what 
might be good, it tells us what it's using; see the sketch below)
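
To make the negotiation part concrete, here is a rough sketch of what a 
format negotiation callback between VLC and the host could look like. 
The structures and names are hypothetical, this is not the existing 
libvlc rendering callbacks API:

/* Hypothetical format negotiation between VLC and the host application:
 * VLC proposes what it would like to output, the host answers with what
 * the swapchain/texture will actually use. Discussion sketch only. */
#include <stdbool.h>

typedef enum {
    RENDER_TRANSFER_SDR,      /* gamma / BT.709 */
    RENDER_TRANSFER_PQ,       /* SMPTE ST 2084 */
    RENDER_TRANSFER_HLG,      /* ARIB STD-B67 */
    RENDER_TRANSFER_LINEAR,
} render_transfer_t;

typedef struct {
    unsigned          bits_per_channel;   /* 8 or 10 */
    render_transfer_t transfer;
} render_output_cfg_t;

typedef struct {
    void *opaque;   /* host data, passed back on every call */

    /* VLC fills `wanted` with what it would like to render; the host
     * writes into `used` what it will actually provide. Returning false
     * means the host cannot give a compatible output. */
    bool (*setup_output)(void *opaque,
                         const render_output_cfg_t *wanted,
                         render_output_cfg_t *used);
} render_callbacks_t;

The idea would be that this negotiation part stays identical between the 
OpenGL and D3D variants, and only the API-specific callbacks (device 
setup, make-current, swap, ...) differ.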

* Move the picture lock out of the display pool
- handle the double/triple buffering in each display module
- handle the copy in prepare() rather than in the core code ? (it 
becomes entirely internal to the module, see the sketch below)
   - the core doesn't need to know about zero-copy here
   - blending may still be needed, handled by the display module ?
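
For illustration, a display module owning its double buffering and doing 
the copy in prepare() could look roughly like this. The types are 
deliberately simplified, this is not the real vout_display API:

/* Sketch: a display module that owns its double buffering and does the
 * copy in prepare(), so the core no longer needs a locked display pool. */
#include <string.h>
#include <stddef.h>

typedef struct {
    void  *data;
    size_t size;
} surface_t;                  /* stand-in for a module-owned GPU surface */

typedef struct {
    surface_t buf[2];         /* double buffering, owned by the module */
    int back;                 /* index of the buffer being prepared */
} display_sys_t;

/* prepare(): copy (or map, for zero-copy) the decoded picture into the
 * back buffer. The core only hands the picture over; it doesn't know
 * whether a copy happens here. */
static void Prepare(display_sys_t *sys, const void *pic_data, size_t pic_size)
{
    surface_t *dst = &sys->buf[sys->back];
    if (pic_size <= dst->size)
        memcpy(dst->data, pic_data, pic_size);
}

/* display(): flip the module's own buffers; SPU blending, if any, would
 * also have been done by the module before this point. */
static void Display(display_sys_t *sys)
{
    sys->back = 1 - sys->back;   /* the old back buffer becomes the front */
}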

* What is the date used for in the prepare() ?

* Split the rendering from the display (see the sketch below)
- the rendered result can then be display()ed on the display
- can be used to render into textures for callbacks
- can be used to render into textures for encoders
- can handle the blending of SPU
- can be used for compositing (SBS to mono, frame sequential to SBS)
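
A very rough sketch of that separation, with hypothetical names:

/* Sketch of splitting rendering from displaying: the renderer composes
 * the video (SPU blending, SBS/mono compositing, ...) into a texture,
 * and several kinds of sinks can consume that texture. All names are
 * made up for the discussion. */

typedef struct texture_t texture_t;      /* opaque GPU texture */
typedef struct picture_t picture_t;      /* opaque core picture */

typedef struct {
    void *sys;
    /* render the picture (and its subpictures) into a texture owned by
     * the renderer */
    texture_t *(*render)(void *sys, picture_t *pic);
} video_renderer_t;

typedef struct {
    void *sys;
    /* consume a rendered texture: show it on screen, hand it to the host
     * application's callbacks, or feed it to an encoder */
    void (*consume)(void *sys, texture_t *tex);
} video_sink_t;

/* One frame: render once, then feed every attached sink. */
static void RenderOneFrame(video_renderer_t *r, video_sink_t *sinks,
                           int sink_count, picture_t *pic)
{
    texture_t *tex = r->render(r->sys, pic);
    for (int i = 0; i < sink_count; i++)
        sinks[i].consume(sinks[i].sys, tex);
}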

* Merge 3D and HMD branches into master
* Integrate the vout inside QML ?

* PUSH
- add a callback in the video context to create pictures (see the first 
sketch after this list)
   -> a different context is needed at different stages since they may 
not allocate pictures the same way as the previous stage
   -> a callback might pick pictures from a preallocated pool, so the 
callback needs to be given an opaque pointer
- all video decoders now have their own fixed-size pool, with the same 
fixed size they used to get from the vout (DPB + extra size); see the 
pool sketch after this list
   - the pool is created by a callback in decoder_t that decoders can 
set; if not set, a default pool creator is used
   - the decoder still uses decoder_NewPicture() to get pictures from 
this pool
   - decoder_AbortPictures() can still be called by multithreaded 
decoders to unblock decoder threads waiting on the pool
- the decoder is the first to create the video context that goes with 
the pool
   - if the decoder doesn't create a video context, create one for it => 
in decoder_UpdateVideoFormat() (see the lazy-creation sketch after this 
list)
   - since decoder_UpdateVideoFormat() can be called from multiple 
threads, we need a lock to create this video context
- differentiate picture copy and picture clone (in the current context)
- video filters get a context on input corresponding to the input video 
format
   -> converter filters can be given the output context as well to adapt 
to the display context
   - we need to get the output video context of a filter chain just 
like we need its output format (filter_chain_GetFmtOut)
- encoders need to handle an input video context to create pictures for 
the encoder, which may be GPU based
- a lot of hardcoded picture_NewFromFormat() calls need to be changed 
to create the picture via the video context
- the image reader needs to provide a video context on output just like 
it provides a video format (as it may use a GPU decoder)
- the image writer needs to handle a video context on input just like 
it uses a video format (as it may come from a GPU decoder)
- picture_Export() needs to take the video context corresponding to the 
given picture as it may be GPU based (and it uses image_Write, which 
needs one)
   -> the snapshot needs to provide the video context corresponding to 
the picture it provides
   - we should keep the pictures and video context close in the vout, 
pushing & storing them together
- How do we signal picture metadata changed ?
   - the decoder sends pictures downstream and it's up to the receiver 
to adjust what it does with them
   - it may or may not need new filters/converters to handle this new 
picture
   - how/when do we decide we need to change the output sink ?
- the video context needs to release the system resources it's using 
once the refcount goes to 0 (see the refcount sketch after this list)
   - the module that set the callbacks in the context needs to be kept 
alive as long as the context is alive
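
To make some of the PUSH points above more concrete, here are a few 
sketches. First, the picture-creation callback in the video context; the 
layout and names are invented for the discussion, not an existing API:

/* Sketch of a video context carrying a picture-creation callback. */
typedef struct picture_t picture_t;        /* core picture, opaque here */

typedef struct video_context_t {
    void *opaque;   /* e.g. a preallocated pool owned by whoever set the
                       callback */

    /* create (or pick from `opaque`) a picture suitable for this stage
     * of the pipeline: decoder output, filter output, display input... */
    picture_t *(*create_picture)(struct video_context_t *);
} video_context_t;

/* what the pipeline would call instead of a hardcoded
 * picture_NewFromFormat() */
static picture_t *video_context_CreatePicture(video_context_t *vctx)
{
    return vctx->create_picture(vctx);
}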
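
Next, the pool sketch: how decoder_NewPicture() could block on a 
decoder-owned fixed-size pool and how decoder_AbortPictures() would 
unblock decoder threads waiting on it. The pool structure itself is made 
up; only the two decoder_* names refer to the existing functions:

/* Sketch of a decoder-owned fixed-size pool (DPB + extra pictures). */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct picture_t picture_t;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  wait;
    picture_t     **pics;     /* preallocated, fixed size */
    bool           *in_use;
    size_t          count;
    bool            aborted;  /* set by decoder_AbortPictures() */
} decoder_pool_t;

/* behind decoder_NewPicture(): block until a picture is free or the
 * pool is aborted */
static picture_t *pool_Get(decoder_pool_t *pool)
{
    picture_t *pic = NULL;
    pthread_mutex_lock(&pool->lock);
    while (pic == NULL && !pool->aborted) {
        for (size_t i = 0; i < pool->count; i++)
            if (!pool->in_use[i]) {
                pool->in_use[i] = true;
                pic = pool->pics[i];
                break;
            }
        if (pic == NULL)
            pthread_cond_wait(&pool->wait, &pool->lock);
    }
    pthread_mutex_unlock(&pool->lock);
    return pic;               /* NULL if aborted */
}

/* behind decoder_AbortPictures(): wake up threads stuck in pool_Get() */
static void pool_Abort(decoder_pool_t *pool, bool abort)
{
    pthread_mutex_lock(&pool->lock);
    pool->aborted = abort;
    pthread_cond_broadcast(&pool->wait);
    pthread_mutex_unlock(&pool->lock);
}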
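
Then the lazy-creation sketch: creating a default video context from 
decoder_UpdateVideoFormat() when the decoder didn't provide one, under a 
lock since several decoder threads may call it at once. Everything 
except the decoder_UpdateVideoFormat() name is a placeholder:

/* Sketch: create a default video context on demand, under a lock. */
#include <pthread.h>
#include <stdlib.h>

typedef struct {
    int dummy;   /* stand-in for device handles, callbacks, refcount... */
} video_context_t;

/* placeholder for whatever the default (CPU) context creator will be */
static video_context_t *CreateDefaultVideoContext(void)
{
    return calloc(1, sizeof(video_context_t));
}

typedef struct {
    pthread_mutex_t  lock;
    video_context_t *vctx;    /* NULL until first needed */
} decoder_priv_t;

/* called from decoder_UpdateVideoFormat(): several decoder threads may
 * get here at once, so the creation is done under the lock */
static video_context_t *EnsureVideoContext(decoder_priv_t *priv)
{
    pthread_mutex_lock(&priv->lock);
    if (priv->vctx == NULL)
        priv->vctx = CreateDefaultVideoContext();
    pthread_mutex_unlock(&priv->lock);
    return priv->vctx;
}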
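
Finally, the refcount sketch: the last release frees the system 
resources and is also the point until which the module that set the 
callbacks has to stay loaded. All names are illustrative:

/* Sketch: refcounted video context releasing its resources on the last
 * unref. */
#include <stdatomic.h>

typedef struct video_context_t {
    atomic_uint refs;
    /* frees the system resources (device, pools, ...) and drops whatever
     * keeps the module that set these callbacks loaded */
    void (*destroy)(struct video_context_t *);
    void *module;        /* stand-in for the module/plugin reference */
} video_context_t;

static void video_context_Hold(video_context_t *vctx)
{
    atomic_fetch_add(&vctx->refs, 1);
}

static void video_context_Release(video_context_t *vctx)
{
    /* last reference: destroy() must run before the module is unloaded,
     * which is why the context has to keep the module alive until here */
    if (atomic_fetch_sub(&vctx->refs, 1) == 1)
        vctx->destroy(vctx);
}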



