[vlc-devel] DXVA no CPU access design
remi at remlab.net
Tue Apr 21 20:00:07 CEST 2015
On 2015-04-21 19:24, Steve Lhomme wrote:
> It seems that we're going around in circles discussing solutions that
> never work. So let's summarize the constraints we have and try to find
> a working solution. Unless we code a DXVA2 decoder from scratch (which
> may be a possibility since we have packetizers), we have to deal with
> how avcodec works:
> 1/ The avcodec decoder is opened, found to be capable of handling the
> codec, and so is used.
> 2/ It then starts decoding the input stream.
> 3/ When it has enough data to write to an output frame, it first tries
> to define the output format using the decoder->get_format() callback,
> by providing some possible ones, including the hardware-accelerated
> variants. This is where we open our DXVA2 module.
The libavcodec plugin should update ->fmt_out, then call
decoder_UpdateVideoFormat(). Currently this is postponed to
ffmpeg_NewPictBuf(). IMO, that is what the libavcodec get_format()
callback is meant for. At that point, the vout (or more generally, the
decoder output) will be initialized. Error checking is also possible.
I am not sure that all decoder owners implement
->pf_vout_format_update() correctly yet, so that might need to be fixed.
> 4/ If we find a va decoder, we set it up using vlc_va_Setup().
Again, vlc_va_Setup() is only a work around for limitations of
libavcodec. I am not sure the problem even still exists. Hardware
initialization should be in vlc_va_New().
> At this point, the decoder is provided with a `hwaccel_context` that
> contains the list of surfaces it can use to read the decoded frames
> coming out of the va.
> 5/ The decoder decodes the input stream using the accelerated
> surfaces provided in the Setup, via vlc_va_Get(), which picks a
> surface that is free to use. It gets a surface and looks up its
> index in its internal buffer in ff_dxva2_get_surface_index().
In principle, the decoder should always call ffmpeg_NewPictBuf() in
->get_buffer(). Currently, it only does so in the case of DR.
> 6/ An AVFrame eventually comes out of the video decoder.
> The AVFrame planes/surfaces are turned into a picture_t:
> 7/ decoder_NewPicture() is called with the decoder_t.
> 8/ It then calls decoder_UpdateVideoFormat(), which creates the vout
> if there is not one already usable.
> 9/ The vout is created in a different thread than the decoder thread.
> 10/ The VOUT_CONTROL_INIT control signal is pushed, and the decoder
> thread waits for the control message to be handled, at which point
> the vout is created (or won't be). The vout is created through a
> vout_configuration_t structure.
> 11/ a picture_t is created in decoder_NewPicture().
> Because of #5, we cannot easily work around the design of avcodec: it
> needs to know the table of surfaces that will be used by the decoder.
> The vout is only involved after the first AVFrame with a DXVA surface
> comes out of the decoder.
> For these surfaces to be used to render, the D3D9 output needs to use
> the D3D9 handle and the same D3D9 device as the DXVA decoder. So at
> least these two objects need to be passed along the way. They are D3D
> objects, so the receiving vout can add/release references to them,
> which avoids dangling-pointer issues during the lifetime of the
> module(s).
> The main problem we have is how to share these objects in a structure
> that is safe from invalid dereferencing between #8 and #11 (and also
> when the vout is reinitialized). IMO, vout_configuration_t and the
> pointers inside it are safe for the duration of the vout create/reinit
> call. The vout should not keep a pointer passed through there, though.
First, I disagree with the notion that it is safe, because of threading,
comparing, and copying.
Making the whole thing safe is a futile exercise, because the video
output decides where the video is rendered. The decoder cannot tell the
video output, at least not without breaking the existing GUI and Libvlc.