[vlc-devel] [PATCH] core: add a callback to init/release data for picture pool of opaque formats
robux4 at videolabs.io
Tue Apr 21 16:49:34 CEST 2015
On Tue, Apr 21, 2015 at 4:23 PM, Steve Lhomme <robux4 at videolabs.io> wrote:
> On Tue, Apr 21, 2015 at 4:08 PM, Julian Scheel <julian at jusst.de> wrote:
>> On 21.04.2015 14:00, Rémi Denis-Courmont wrote:
>>> Le 2015-04-21 11:51, Steve Lhomme a écrit :
>>>> In the case of DXVA that would mean delaying the surface allocation
>>>> until the vout can provide it. Why not. But what happens if there's no
>>>> compatible vout ?
>>> If downstream cannot cope with your picture format, you should probably
>>> use another one. That is to say, you should fall back to software decoding.
>>>> That sounds like doing the whole decoding/displaying chain backwards
>>>> and that would open a whole big can of worms.
>>> If this is backward, what do you suggest is the forward approach?
>>> You think the decoder should tell the video output which device and
>>> buffers to use, then the video output should tell the application which
>>> window to embed? To me *that* seems backward (if at all possible).
>>>> Anyway it cannot work with DXVA because, if I understand the chaining
>>>> properly, the vout is created when the first frame comes out of the
>>>> decoder. So it needs decoding surfaces before there's a vout.
>>> I thought that Thomas and Julian had lifted that limitation.
>> In fact you don't need to get a picture out of the decoder, but you need to
>> know the format. So you can configure the output format and the vout gets
>> created (see …).
>> In mmal we get notified about the output format before pictures are actually
>> decoded into memory, which is exactly for that usecase: Allow the host to
>> allocate pictures after starting the decoder.
>> But if DXVA doesn't tell you the format before actually allocating pictures
>> you're indeed lost...
> Thanks. Before we allocate the surfaces we decide which format to use.
> We could stop there and allocate using that format later. That could [...]
False alarm. The way DXVA2 works in avcodec cannot work like that.
When avcodec calls GetFormat (which happens when it looks for an
AVFrame to decode into), it needs the surfaces to be already
allocated. That's before the vout exists.
The problem is that avcodec is the actual decoder being used, and it
uses DXVA2 internally. So delaying the initialization would mean
doing it for avcodec in general. I don't know whether that is feasible.