[vlc-devel] GSoC 2018 Idea `libplacebo Improvement'

Thomas Guillem thomas at gllm.fr
Wed Mar 21 08:54:00 CET 2018


On Wed, Mar 21, 2018, at 08:33, Steve Lhomme wrote:
> I used to do that with D3D11. It was fine with an integrated (Intel)
> GPU, but with a discrete GPU the SW decoders were really slow, since
> they had to use memory across the PCI bus. To counter that, I pretend
> to do direct rendering and provide CPU memory, and during prepare() I
> copy back to the GPU. It's much faster for discrete GPUs and doesn't
> impact the integrated ones much.
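
(For illustration, a minimal sketch of the staging-copy pattern described
above, in plain C/D3D11. This is not VLC's actual D3D11 vout code, just
the general idea: the SW decoder writes into ordinary CPU memory and the
frame is uploaded into a GPU texture at prepare() time.)

#define COBJMACROS
#include <d3d11.h>
#include <stdint.h>

/* Upload one CPU-side plane into a DEFAULT-usage GPU texture.
 * UpdateSubresource lets the driver schedule the CPU->GPU copy, so the
 * SW decoder itself never writes through the PCIe bus. */
static void upload_frame(ID3D11DeviceContext *ctx, ID3D11Texture2D *gpu_tex,
                         const uint8_t *cpu_pixels, UINT pitch, UINT height)
{
    ID3D11DeviceContext_UpdateSubresource(ctx,
        (ID3D11Resource *)gpu_tex,
        0,                    /* subresource (mip 0 / array slice 0) */
        NULL,                 /* NULL box = update the whole texture */
        cpu_pixels,           /* source data in CPU memory */
        pitch,                /* source row pitch in bytes */
        pitch * height);      /* depth pitch, unused for 2D textures */
}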
Steve,
With OpenGL 4.x, mapped buffer performance is really good with the
discrete GPU I tested (GeForce GTX 660). For AMD GPUs, I had to allocate
the buffer with the GL_CLIENT_STORAGE_BIT flag in order to indicate that
we prefer the buffer to be on the CPU side. Cf.
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glBufferStorage.xhtml
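
Roughly, that looks like this (a sketch assuming a GL 4.4+ context and a
loader such as libepoxy, not the actual VLC code):

#include <epoxy/gl.h>
#include <stddef.h>

/* Allocate a persistently mapped pixel-unpack buffer.
 * GL_CLIENT_STORAGE_BIT hints that the storage should live in
 * CPU-accessible memory; it is only valid for glBufferStorage(),
 * not for glMapBufferRange(). */
static void *create_upload_pbo(GLuint *pbo, size_t size)
{
    glGenBuffers(1, pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo);
    glBufferStorage(GL_PIXEL_UNPACK_BUFFER, size, NULL,
                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                    GL_MAP_COHERENT_BIT | GL_CLIENT_STORAGE_BIT);
    return glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                            GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                            GL_MAP_COHERENT_BIT);
}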
Niklas,
Don't forget that we also need feature parity with the OpenGL vout of
VLC. This means:
- Interop with VAAPI surfaces. VLC uses the
  "EGL_EXT_image_dma_buf_import" extension. I guess we can do the same
  with Vulkan? Mapping some DMA buffers to a texture? (See the sketch
  after this list.)
- Bonus: interop with CUDA? (It's not done yet on the GL side of VLC.)
- Give VLC a way to specify its own vertex shaders (for 360°, VR/3D
  stuff).
- And for Metal/MoltenVK: interop with IOSurface/CVPixelBuffer.
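
For the VAAPI point, here is a rough sketch of what the Vulkan side could
look like, using VK_KHR_external_memory_fd together with
VK_EXT_external_memory_dma_buf. The VkImage itself would be created with
VkExternalMemoryImageCreateInfo and, typically, a DRM format modifier;
that part and all error handling are omitted here, so treat this as an
illustration, not a working interop:

#include <vulkan/vulkan.h>

/* Import a dma-buf fd (e.g. exported from a VAAPI surface) as Vulkan
 * device memory and bind it to an already-created VkImage. On success
 * the fd's ownership passes to the Vulkan implementation. */
static VkDeviceMemory import_dmabuf(VkDevice dev, VkImage image,
                                    int dmabuf_fd, VkDeviceSize size,
                                    uint32_t mem_type_index)
{
    VkImportMemoryFdInfoKHR import_info = {
        .sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
        .fd = dmabuf_fd,
    };
    VkMemoryDedicatedAllocateInfo dedicated = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO,
        .pNext = &import_info,
        .image = image,
    };
    VkMemoryAllocateInfo alloc_info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .pNext = &dedicated,
        .allocationSize = size,
        .memoryTypeIndex = mem_type_index,
    };
    VkDeviceMemory mem = VK_NULL_HANDLE;
    if (vkAllocateMemory(dev, &alloc_info, NULL, &mem) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    vkBindImageMemory(dev, image, mem, 0);
    return mem;
}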


