[vlc-devel] Software decoding in Hardware buffers

Steve Lhomme robux4 at ycbcr.xyz
Thu Aug 8 14:29:30 CEST 2019


I'm looking at the display pool in the MMAL (Raspberry Pi) code and it 
seems that we currently decode into "hardware" buffers all the time, 
whether into the opaque decoder output or when the decoder outputs I420.

With push we no longer use this pool. The decoder will have its own 
pool and the display just deals with what it receives. In most cases 
that means a copy from CPU memory to GPU memory. That doesn't fit a 
SoC like the Raspberry Pi, where the memory is shared and can be 
used directly from both sides.

The idea was that current decoders continue to use decoder_NewPicture() 
as they used to. The pictures will come from the decoder video context 
if there's one (hardware decoding), or from picture_NewFromFormat() if 
there's none. That means for MMAL we would need to copy this 
CPU-allocated memory to the "port allocated" memory (the mechanism used 
to get buffers from the display). Given the limited resources, that's 
something we should avoid.
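To make the current fallback concrete, here is a minimal sketch of the two-way choice described above. The types and helper names (VideoContextNewPicture, PictureNewFromFormat) are simplified stand-ins, not the real VLC API:

```c
#include <stddef.h>

/* Hypothetical simplified types for illustration; the real VLC
 * structures (picture_t, vlc_video_context) are more complex. */
typedef struct picture_t { const char *origin; } picture_t;
typedef struct vlc_video_context { int unused; } vlc_video_context;

static picture_t vctx_pic = { "video-context" };
static picture_t cpu_pic  = { "cpu" };

/* Stand-in for allocating from the decoder's video context
 * (hardware decoding path). */
static picture_t *VideoContextNewPicture(vlc_video_context *vctx)
{
    (void)vctx;
    return &vctx_pic;
}

/* Stand-in for picture_NewFromFormat(): plain CPU memory. */
static picture_t *PictureNewFromFormat(void)
{
    return &cpu_pic;
}

/* The current two-way fallback: video context if there is one,
 * otherwise CPU-allocated memory. */
static picture_t *decoder_NewPicture_sketch(vlc_video_context *vctx)
{
    if (vctx != NULL)
        return VideoContextNewPicture(vctx);
    return PictureNewFromFormat();
}
```

On MMAL this is exactly the problem: software decoding takes the second branch, and the CPU buffer then has to be copied into port-allocated memory.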

I think we should have a third way to provide pictures: from the decoder 
device. In the case of software decoding there is no video context, but 
there is a decoder device (an MMAL one in this case).

So I suggest that decoders (and filters) get their output pictures from:
- the video context if there is one
- the decoder device if there is one and it has an allocator
- picture_NewFromFormat() otherwise
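The proposed three-way fallback could be sketched like this. The allocator hook on the decoder device (pf_alloc) and the helper names are assumptions made up for illustration, not the real VLC API:

```c
#include <stddef.h>

/* Hypothetical simplified types; names and fields are illustrative. */
typedef struct picture_t { const char *origin; } picture_t;
typedef struct vlc_video_context { int unused; } vlc_video_context;
typedef struct vlc_decoder_device {
    /* Hypothetical allocator hook a device like MMAL could provide
     * to hand out "port allocated" display buffers. */
    picture_t *(*pf_alloc)(struct vlc_decoder_device *);
} vlc_decoder_device;

static picture_t vctx_pic = { "video-context" };
static picture_t dev_pic  = { "decoder-device" };
static picture_t cpu_pic  = { "cpu" };

/* Stand-in for allocating from the decoder's video context. */
static picture_t *VideoContextNewPicture(vlc_video_context *vctx)
{
    (void)vctx;
    return &vctx_pic;
}

/* Stand-in for an MMAL decoder-device allocator. */
static picture_t *MmalDeviceAlloc(vlc_decoder_device *dev)
{
    (void)dev;
    return &dev_pic;
}

/* Stand-in for picture_NewFromFormat(). */
static picture_t *PictureNewFromFormat(void)
{
    return &cpu_pic;
}

/* The proposed three-way fallback. */
static picture_t *decoder_NewPicture_proposed(vlc_video_context *vctx,
                                              vlc_decoder_device *dev)
{
    if (vctx != NULL)                         /* hardware decoding */
        return VideoContextNewPicture(vctx);
    if (dev != NULL && dev->pf_alloc != NULL) /* device allocator */
        return dev->pf_alloc(dev);
    return PictureNewFromFormat();            /* plain CPU memory */
}
```

With such a hook, an MMAL software decoder would receive port-allocated buffers directly and skip the CPU-to-display copy, while platforms without a device allocator keep the current behavior unchanged.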

Any opinion?
