[x265] [PATCH] add avx version for chroma_copy_ss 16x4, 16x8, 16x12, 16x16, 16x24, 16x32, 16x64 based on csp, approx 1.5x-2x speedup over SSE

Sagar Kotecha sagar at multicorewareinc.com
Tue Sep 23 15:04:22 CEST 2014


OK.

With Regards,
Sagar

On Tue, Sep 23, 2014 at 2:10 AM, chen <chenm003 at 163.com> wrote:

>
>
>
> At 2014-09-22 21:15:57, sagar at multicorewareinc.com wrote:
> ># HG changeset patch
> ># User Sagar Kotecha <sagar at multicorewareinc.com>
> ># Date 1411391728 -19800
> >#      Mon Sep 22 18:45:28 2014 +0530
> ># Node ID 2fb0a3286265a757c94a36cec0695817116d5260
> ># Parent  fd435504f15e0b13dabba9efe0aa94e7047060b5
> >add avx version for chroma_copy_ss 16x4, 16x8, 16x12, 16x16, 16x24, 16x32, 16x64 based on csp, approx 1.5x-2x speedup over SSE
> >
> >--- a/source/common/x86/blockcopy8.asm	Mon Sep 22 13:14:54 2014 +0530
> >+++ b/source/common/x86/blockcopy8.asm	Mon Sep 22 18:45:28 2014 +0530
> >@@ -2904,6 +2904,46 @@
> > BLOCKCOPY_SS_W16_H4 16, 12
> >
> > ;-----------------------------------------------------------------------------
> >+; void blockcopy_ss_16x4(int16_t *dest, intptr_t deststride, int16_t *src, intptr_t srcstride)
> >+;-----------------------------------------------------------------------------
> >+%macro BLOCKCOPY_SS_W16_H4_avx 2
> >+INIT_YMM avx
> >+cglobal blockcopy_ss_%1x%2, 4, 5, 2
> >+    mov     r4d, %2/4
> >+    add     r1, r1
> >+    add     r3, r3
> >+.loop:
> >+    movu    m0, [r2]
> >+    movu    m1, [r2 + r3]
> >+
> >+    movu    [r0], m0
> >+    movu    [r0 + r1], m1
> >+
> >+    lea     r2, [r2 + 2 * r3]
> >+    lea     r0, [r0 + 2 * r1]
>
> you have more free registers, so you may buffer r1*3 and r3*3 in two of them and remove these two LEAs from the loop
>
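> for example, a sketch (the choice of r5/r6 and the bumped GPR count in
> cglobal are my assumptions, not part of the posted patch):
>
>     cglobal blockcopy_ss_%1x%2, 4, 7, 2    ; ask for 7 GPRs so r5/r6 are free
>         ...
>         lea     r5, [r1 * 3]               ; r5 = 3 * dststride (bytes), hoisted
>         lea     r6, [r3 * 3]               ; r6 = 3 * srcstride (bytes), hoisted
>     .loop:
>         ...
>         movu    m0, [r2 + 2 * r3]          ; rows 2 and 3 addressed directly,
>         movu    m1, [r2 + r6]              ; no mid-loop LEA needed
>         movu    [r0 + 2 * r1], m0
>         movu    [r0 + r5], m1
>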
> >+
> >+    movu    m0, [r2]
> >+    movu    m1, [r2 + r3]
>
> >+    movu    [r0], m0
> >+    movu    [r0 + r1], m1
> >+
> >+    dec     r4d
> >+    lea     r0, [r0 + 2 * r1]
> >+    lea     r2, [r2 + 2 * r3]
>
> after the above optimization, you can change the factor here to 4, so each pointer advances with a single LEA per iteration
>
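> i.e. (a sketch, assuming the 3*stride buffering above):
>
>     lea     r2, [r2 + 4 * r3]    ; step over all four source rows at once
>     lea     r0, [r0 + 4 * r1]    ; step over all four dest rows at once
>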
> >+    jnz     .loop
>
> placing dec immediately before jnz may save one uop; the pair is macro-fused on newer CPUs
>
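> i.e. keep the pair adjacent so the decoder can fuse it (a sketch):
>
>     dec     r4d                  ; dec + jnz back-to-back -> one macro-fused uop
>     jnz     .loop
>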
> >+    RET
> >+%endmacro
>
>