[vlc-devel] [PATCH v2 14/14] picture: always copy context when cloning
Alexandre Janniaux
ajanni at videolabs.io
Mon Aug 17 16:15:05 CEST 2020
Hi,
On Mon, Aug 17, 2020 at 03:47:47PM +0200, Steve Lhomme wrote:
> On 2020-08-17 11:56, Alexandre Janniaux wrote:
> > Hi,
> >
> > I didn't have the time to extend the previous discussion
> > on Steve's patchset, which is merged now, so I'll post
> > remarks here about the last patchset and this patch, as the
> > topics are quite similar.
> >
> > On Thu, Jul 30, 2020 at 07:10:09PM +0300, Rémi Denis-Courmont wrote:
> > > On Thursday, 30 July 2020 at 15.54.29 EEST, Alexandre Janniaux wrote:
> > > > Hi,
> > > >
> > > > I'm not at ease with this patchset; it feels like removing the
> > > > picture_resource_t layer and redoing what it did by hand.
> > >
> > > Leaving aside my minor detail comments, I think the patchset makes sense.
> >
> > I agree that it makes sense; I just fear that it is doing
> > more than what we really need, while breaking existing
> > unmerged code. The push model is still unfinished, decoder
>
> You can't block patches just because they will force you to do a rebase
> and edit your code, even less so if your branch was not even sent for
> review. I do rebases like this all the time; it's part of getting your
> code ready for merge.
I agree, but that doesn't mean there should be no discussion.
In particular, I highlighted that my problem was not with the
patchset itself, but with the fact that it's a new direction in
the middle of a work in progress.
>
> > devices are widely misunderstood and imho have both
> > abstraction and specification issues that are still
> > unsolved; we only got vdpau and vaapi working correctly
> > again a few days ago. So I'm particularly not at ease with
> > half-changing yet another layer, one that is used in very
> > different contexts, without having the final target in mind.
> > That means at least re-defining the allocation for both
> > opaque and memory-addressable pictures (which are not so
> > different), so as to settle this patchset into something
> > understandable instead of just dropping it in the core and
> > having everyone afterwards wonder why it became like that.
> >
> > > The whole point of push-buffers was to uniformise picture buffer allocation.
> > > Everything should be allocated as shared memory that can be manipulated
> > > locally, passed to the compositor with zero-copy IPC, and eventually exchanged
> > > with sandboxed decoders or filters. picture_NewFromFormat() takes care of
> > > that. Picture resources should not be needed anymore.
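To make that concrete: the simplest producer-side allocation looks
roughly like the sketch below. It is only an illustration; the chroma,
dimensions and function name are arbitrary choices for the example.

    #include <vlc_common.h>
    #include <vlc_picture.h>

    /* Minimal sketch: allocate a CPU-addressable picture from a format,
     * the way a push-model producer would. Values are arbitrary. */
    static picture_t *AllocateCpuPicture(void)
    {
        video_format_t fmt;
        video_format_Init(&fmt, VLC_CODEC_I420);
        fmt.i_width  = fmt.i_visible_width  = 1280;
        fmt.i_height = fmt.i_visible_height = 720;
        fmt.i_sar_num = fmt.i_sar_den = 1;

        picture_t *pic = picture_NewFromFormat(&fmt); /* NULL on failure */
        video_format_Clean(&fmt);
        return pic;
    }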
> >
> > I agree, though a radical push-buffer model (on both the
> > production side and the configuration side) does not unify
> > every kind of resource provision, since encoders working with
> > opaque buffers usually require some usage flags set on the
> > buffers, and the allocation is
>
> Display modules too. That's why all the D3D11 pictures I create also have
> the ID3D11ShaderResourceView setup even if they are not displayed. In this
> case that's more a speed optimization than a design drawback.
Yes, but on Linux you could have different kinds of wrapping,
unlike with D3D11 pictures: a wl_buffer, a direct DMA request,
or maybe some X11 object (I don't really know that part of the
X11 protocol). That's not really important, since it can be
removed.
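For the record, what I mean by wrapping is roughly the following: a
per-picture context carrying the binding to the windowing object. The
type and names are made up for illustration, and I'm assuming the
picture_context_t layout with destroy/copy callbacks; video-context
refcounting is left out of the sketch.

    #include <stdlib.h>
    #include <vlc_common.h>
    #include <vlc_picture.h>

    struct wl_buffer; /* opaque Wayland object, owned by the display pool */

    /* Illustrative context tying a picture to a wl_buffer. The wl_buffer
     * is only borrowed, so copy/destroy do not touch its lifetime. */
    struct wl_buffer_context
    {
        picture_context_t ctx;    /* must be first, so casts are valid */
        struct wl_buffer *buffer; /* borrowed reference */
    };

    static void WlBufferContextDestroy(picture_context_t *ctx)
    {
        free(ctx);
    }

    static picture_context_t *WlBufferContextCopy(picture_context_t *src)
    {
        struct wl_buffer_context *copy = malloc(sizeof (*copy));
        if (copy == NULL)
            return NULL;
        *copy = *(struct wl_buffer_context *)src;
        /* a real implementation would also hold the video context here */
        return &copy->ctx;
    }

    static picture_context_t *WlBufferContextNew(struct wl_buffer *buffer)
    {
        struct wl_buffer_context *wctx = calloc(1, sizeof (*wctx));
        if (wctx == NULL)
            return NULL;
        wctx->ctx.destroy = WlBufferContextDestroy;
        wctx->ctx.copy = WlBufferContextCopy;
        wctx->buffer = buffer;
        return &wctx->ctx;
    }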
> > sometimes encapsulated into an object provided by the encoder
> > itself (think mediacodec input surface for example) unless we
> > have stronger requirements and remove compatibility or increase
> > scarce resource usage.
> >
> > I've been working around those questions for some time now,
> > and I have found different solutions with different levels of
> > intrusiveness into the code base.
> >
> > For example, it's still possible to achieve this in the push
> > + configure mode we use by adding converters and having the
> > converters provide the pool to the previous video context
> > depending on the downstream usage, but this has issues with
> > the automatic configuration of the filter pipeline, in
> > particular because filter allocation is not recursive, so the
> > filter chain does not build the optimal chain.
> >
> > Your patch separating module sorting from module probing
> > could help design a cleaner solution, though.
> >
> > IMHO the filter chain should adapt to the decoder device and
> > fall back to a standard resource kind ("CPU" buffers) if no
> > filters are compatible with it, which is likely the original
> > intent anyway, since that's what we do for the decoder.
> > But there are still issues regarding how the decoder device is
> > defined, and this conflicts with the fact that we use the
> > decoder device as a decoder switch.
> >
> > To get back to this patch in particular, I'll also quote the
> > last part of your previous message:
> >
> > > Regardless, we still need a means to wrap opaque picture buffers, which cannot
> > > really use picture_NewFromFormat().
> > >
> > > However...
> > >
> > > > This value previously held vout data in the pull model, if
> > > > I am not wrong. I still use it in a scenario where the vout
> > > > binds an object (e.g. a wl_buffer) to an underlying object in
> > > > the picture (e.g. a GBM buffer).
> > >
> > > If we need to handle non-opaque pictures coming from an input peripheral (as
> > > opposed to going to an output peripheral), then we need something similar to
> > > picture_NewFromResource(). I guess that is your implication?
> >
> > Not directly, but this case needs to be handled, as you answered
> > right after. I'm not sure we want a new picture_New* specifically
> > designed to handle the file descriptor, map the offset, etc.,
> > though.
> >
> > I was mentioning cases where an opaque picture goes through the
> > output peripheral and needs to be adapted.
> >
> > In my case in particular, we need to use objects embodying dma
> > sharing requests that can be serialized into what the windowing
> > system expects, so as to bind a picture to the underlying display
> > system.
> >
> > I'm still not completely convinced on whether the DMA request
> > should be recreated each time or not, especially with regard to
> > sandboxing, because I still don't know much about the related
> > issues, but I have found platforms on which multiple DMA
> > requests lead to problems.
> >
> > Anyway, in this situation, sys was used to achieve the binding
>
> Do you mean the p_sys of the picture or the display module sys?
>
The p_sys of the picture; I was mostly discussing picture_t in
my previous mail.
> > in the picture and to tie the DMA request to the GBM object, but
> > my point in the previous mail was that it could be removed, since
> > the buffer can be identified by the value of the fd.
> >
> > The point I'd like to clarify is specifically what developers
> > should use from now on at allocation time, and how it should be
> > used by the modules downstream.
>
> My patchset doesn't claim that it's the final state of how picture_t should
> be used. It's just a lot of cleanup, moving code around, avoiding casts. In
> fact it doesn't change the picture_context_t vs p_sys usage. They are still
> used as they are now.
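To illustrate the picture_context_t vs p_sys difference being discussed,
on the consumer side it is roughly only a matter of where the binding is
looked up. The helper names below are hypothetical, and I'm assuming the
current picture_t layout with both a p_sys and a context pointer:

    #include <vlc_common.h>
    #include <vlc_picture.h>
    #include <vlc_vout_display.h>

    /* Hypothetical helpers, for illustration only. */
    void AttachFromContext(vout_display_t *vd, picture_context_t *ctx);
    void AttachFromPictureSys(vout_display_t *vd, void *picsys);

    /* Sketch: with p_sys the binding is private to whoever filled it;
     * with a context it travels with the picture (and, with this patch,
     * with the clones a pool hands out). */
    static void PreparePicture(vout_display_t *vd, picture_t *pic)
    {
        if (pic->context != NULL)
            AttachFromContext(vd, pic->context);  /* producer-attached */
        else
            AttachFromPictureSys(vd, pic->p_sys); /* legacy, pull-era path */
    }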
>
> > > Not sure this will work. Originally, picture context was specifically meant to
> > > work around the pool and not do that.
> > >
> > > What is the practical use case, and why can't it use picture_t.sys?
> > >
> >
> > Originally, sys was used to store vout data too, so I'm not sure
> > it's relevant here. However, there are many places where modules
> > create picture contexts and then allocate pictures to wrap
> > them.
> >
> > In my case, I have a predefined number of buffers, so instead of
> > creating a pool of picture contexts, handling their release and
> > reuse, and allocating a picture each time, I directly create the
> > wrapping picture_t objects and bind them, then create a
> > picture_pool from those pictures to handle the release/reuse
> > cycle.
> >
> > Thus, the handling of hardware buffers is not really different
> > from that of software buffers, and actually we could also make
> > software buffers behave the same way.
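Concretely, the pattern is close to the following sketch; the buffer
count, the MakeBufferContext() helper and the error handling are
illustrative only:

    #include <vlc_common.h>
    #include <vlc_picture.h>
    #include <vlc_picture_pool.h>

    #define NUM_BUFFERS 4 /* predefined number of buffers, arbitrary here */

    /* Hypothetical helper creating the per-picture context that binds the
     * underlying (hardware or software) buffer. */
    picture_context_t *MakeBufferContext(unsigned index);

    /* Sketch: wrap a fixed set of buffers into picture_t and hand them to
     * a picture_pool, which then handles the release/reuse cycle. */
    static picture_pool_t *CreateWrappedPool(const video_format_t *fmt)
    {
        picture_t *pics[NUM_BUFFERS];
        unsigned i;

        for (i = 0; i < NUM_BUFFERS; i++)
        {
            pics[i] = picture_NewFromFormat(fmt); /* or a wrapping variant */
            if (pics[i] == NULL)
                goto error;
            /* attach the producer context; with this patch it is copied
             * onto the clones the pool hands out */
            pics[i]->context = MakeBufferContext(i);
        }
        return picture_pool_New(NUM_BUFFERS, pics); /* pool takes the pictures */

    error:
        while (i > 0)
            picture_Release(pics[--i]);
        return NULL;
    }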
> >
> > Using sys is a distinct solution, which lacks the destroy/copy
> > callbacks but could use the gc's features for that. However, it
> > is different from what we currently have, and Steve's patchset
> > overlaps with the picture_context concept, hence my reluctance
> > to have his patchset merged before further discussion and a
> > decision.
> >
> > This patch fixes a difference in behaviour between normal clones
> > and clones from a picture_pool, so it makes sense to normalize it
> > anyway instead of producing pictures with a null context and
> > sending them along.
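The behaviour being aligned is essentially this (sketched, not quoted
from the actual patch hunk):

    #include <vlc_common.h>
    #include <vlc_picture.h>

    /* Sketch of the semantics: when cloning, duplicate the context
     * through its copy callback so the clone references the same
     * underlying resources as the original. picture_Clone() already did
     * this; with the patch, the pool's clones do it too. */
    static void CloneContext(picture_t *clone, const picture_t *src)
    {
        if (src->context != NULL)
        {
            clone->context = src->context->copy(src->context);
            /* a NULL result (failed copy) would need handling in real code */
        }
    }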
>
> In particular, right now picture_context_t is not set on pictures coming out
> of a picture pool, which means you can't "prepare" the picture_context_t ahead
> of time and just clone it when using the picture. That's a design drawback
> that forces a lot of dirty boilerplate code.
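With the context copied onto the pool's clones, the consumer side
reduces to something like this (usage sketch, assuming a pool built as
in the earlier sketch):

    #include <assert.h>
    #include <vlc_common.h>
    #include <vlc_picture.h>
    #include <vlc_picture_pool.h>

    /* Usage sketch: the producer-prepared context is simply there after
     * picture_pool_Wait()/Get(), no per-use re-attachment needed. */
    static picture_t *GetNextPicture(picture_pool_t *pool)
    {
        picture_t *pic = picture_pool_Wait(pool);
        if (pic != NULL)
            assert(pic->context != NULL); /* set at pool creation, copied here */
        return pic;
    }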
>
> > I don't know what the best way to discuss this is, though. A
> > third workshop to finish the design details of the push model
> > regarding filters and encoders, with this question in the middle
> > since it mainly relates to picture producers in general, would
> > probably be a great thing, but given the current situation it
> > might be too hard and unsafe to set up in the short term.
> >
> > I have written up the whole OpenGL work we did at Videolabs;
> > I'll try to highlight the limits we reached in the current model
> > to make all of this clearer.
> >
> > Regards,
> > --
> > Alexandre Janniaux
> > Videolabs
> >
> > On Fri, Aug 14, 2020 at 06:48:13PM +0300, Rémi Denis-Courmont wrote:
> > > On Friday, 14 August 2020 at 15.01.49 EEST, Steve Lhomme wrote:
> > > > From: Alexandre Janniaux <ajanni at videolabs.io>
> > > >
> > >
> > > > picture_Clone was copying the original picture context into the new
> > > > clone while picture_pool_Clone wasn't. As it is possible to create a
> > > > pool from existing pictures, it's quite handy to be able to associate
> > > > the graphical resources needed for each picture with their context so
> > > > that they can be retrieved after a call to picture_pool_Wait/Get.
> > > >
> > > > The rationale is that, because resources like pixels are shared between
> > > > the original picture and the clone, we should probably do the same with
> > > > the picture_context and have a unified way for picture producers to
> > > > reference such resources.
> > >
> > > Not sure this will work. Originally, picture context was specifically meant to
> > > work around the pool and not do that.
> > >
> > > What is the practical use case, and why can't it use picture_t.sys?
> > >
> > > --
> > > レミ・デニ-クールモン
> > > http://www.remlab.net/
> > >