[x265] [PATCH Review only] asm: luma_vpp[8x8] in avx2
Divya Manivannan
divya at multicorewareinc.com
Thu Nov 13 05:46:37 CET 2014
Thanks, Chen, for the comments. I have tried the 2-rows format as well, but
its performance is lower than the 4-rows format. I will try to change the
algorithm.
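
Whichever row grouping ends up in the final version, both variants must
produce the same output; for context, here is a minimal plain-C sketch of
what interp_8tap_vert_pp computes for an 8x8 block. The 6-bit precision and
(sum + 32) >> 6 rounding match the pmulhrsw-with-pw_512 step in the patch;
the function and parameter names below are illustrative, not x265's own.

    #include <stdint.h>

    /* 8-tap vertical pixel-to-pixel filter for an 8x8 block.  coeff[] is one
     * row of the luma coefficient table selected by the fractional offset
     * (r4m in the asm). */
    static void vert_pp_8x8_ref(const uint8_t *src, intptr_t srcStride,
                                uint8_t *dst, intptr_t dstStride,
                                const int16_t coeff[8])
    {
        src -= 3 * srcStride;              /* filter window starts 3 rows above */
        for (int y = 0; y < 8; y++)
        {
            for (int x = 0; x < 8; x++)
            {
                int sum = 0;
                for (int i = 0; i < 8; i++)
                    sum += coeff[i] * src[x + i * srcStride];
                int val = (sum + 32) >> 6; /* round, 6-bit filter precision */
                dst[x] = (uint8_t)(val < 0 ? 0 : (val > 255 ? 255 : val));
            }
            src += srcStride;
            dst += dstStride;
        }
    }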
Regards,
Divya
On Wed, Nov 12, 2014 at 10:29 PM, chen <chenm003 at 163.com> wrote:
>
>
>
> At 2014-11-12 19:31:57, divya at multicorewareinc.com wrote:
> ># HG changeset patch
> ># User Divya Manivannan
> ># Date 1415791868 -19800
> ># Wed Nov 12 17:01:08 2014 +0530
> ># Node ID 27fdbc2031b6bf3ec46c0daa423c7495b62bac43
> ># Parent 98fb658f3229ab10e808204c265a12e18d71638e
> >asm: luma_vpp[8x8] in avx2
> >
> >diff -r 98fb658f3229 -r 27fdbc2031b6 source/common/x86/ipfilter8.asm
> >--- a/source/common/x86/ipfilter8.asm Tue Nov 11 19:30:19 2014 +0900
> >+++ b/source/common/x86/ipfilter8.asm Wed Nov 12 17:01:08 2014 +0530
> >@@ -3501,6 +3501,118 @@
> > pextrd [r2 + r4], xm1, 3
> > RET
> >
> >+INIT_YMM avx2
> >+%if ARCH_X86_64 == 1
> >+cglobal interp_8tap_vert_pp_8x8, 4,6,13
> >+ mov r4d, r4m
> >+
> >+%ifdef PIC
> >+ lea r5, [tab_LumaCoeff]
> >+ vpbroadcastd m0, [r5 + r4 * 8 + 0]
> >+ vpbroadcastd m8, [r5 + r4 * 8 + 4]
> >+%else
> >+ vpbroadcastd m0, [tab_LumaCoeff + r4 * 8 + 0]
> >+ vpbroadcastd m8, [tab_LumaCoeff + r4 * 8 + 4]
> >+%endif
> >+
> >+ lea r5, [r1 * 3]
> >+ sub r0, r5
> >+ mov r4d, 2
> >+
> >+.loop
> >+ movq xm1, [r0] ; m1 = row 0
> >+ movq xm2, [r0 + r1] ; m2 = row 1
> >+ punpcklbw xm1, xm2 ; m1 = [17 07 16 06 15 05 14 04 13 03 12 02 11 01 10 00]
> >+ lea r5, [r0 + r1 * 2]
>
> r5 already holds (r1 * 3); we can reuse it below and avoid many of the LEA instructions
>
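
A loose C analogue of that point (illustrative only, not part of the patch):
compute the 3*stride offset once and derive all four rows of a group from a
single base pointer, instead of re-deriving an intermediate row pointer with
LEA every two loads.

    #include <stdint.h>

    /* Mirrors keeping r5 = 3 * r1 for the whole loop: rows are addressed as
     * [r0], [r0 + r1], [r0 + r1 * 2], [r0 + r5], and only the base pointer
     * advances (by 4 * stride) between groups. */
    static void load_row_group(const uint8_t *base, intptr_t stride,
                               const uint8_t *rows[4])
    {
        const intptr_t stride3 = 3 * stride;   /* computed once, reused */
        rows[0] = base;
        rows[1] = base + stride;
        rows[2] = base + 2 * stride;
        rows[3] = base + stride3;              /* analogue of [r0 + r5] */
    }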
> >+ movq xm3, [r5] ; m3 = row 2
> >+ punpcklbw xm2, xm3 ; m2 = [27 17 26 16 25 15 24 14 23 13 22 12 21 11 20 10]
> >+ movq xm4, [r5 + r1] ; m4 = row 3
> >+ punpcklbw xm3, xm4 ; m3 = [37 27 36 26 35 25 34 24 33 23 32 22 31 21 30 20]
> >+ punpcklwd xm5, xm1, xm3 ; m5 = [33 23 13 03 32 22 12 02 31 21 11 01 30 20 10 00]
> >+ punpckhwd xm6, xm1, xm3 ; m6 = [37 27 17 07 36 26 16 06 35 25 15 05 34 24 14 04]
> >+ lea r5, [r5 + r1 * 2]
> >+ movq xm1, [r5] ; m1 = row 4
> >+ punpcklbw xm4, xm1 ; m4 = [47 37 46 36 45 35 44 34 43 33 42 32 41 31 40 30]
> >+ punpcklwd xm7, xm2, xm4 ; m7 = [43 33 23 13 42 32 22 12 41 31 21 11 40 30 20 10]
> >+ punpckhwd xm2, xm4 ; m2 = [47 37 27 17 46 36 26 16 45 35 25 15 44 34 24 14]
> >+ vinserti128 m5, m5, xm7, 1 ; m5 = [43 33 23 13 42 32 22 12 41 31 21 11 40 30 20 10] - [33 23 13 03 32 22 12 02 31 21 11 01 30 20 10 00]
> >+ vinserti128 m6, m6, xm2, 1 ; m6 = [47 37 27 17 46 36 26 16 45 35 25 15 44 34 24 14] - [37 27 17 07 36 26 16 06 35 25 15 05 34 24 14 04]
> >+ movq xm2, [r5 + r1] ; m2 = row 5
> >+ punpcklbw xm1, xm2 ; m1 = [57 47 56 46 55 45 54 44 53 43 52 42 51 41 50 40]
> >+ punpcklwd xm9, xm3, xm1 ; m9 = [53 43 33 23 52 42 32 22 51 41 31 21 50 40 30 20]
> >+ punpckhwd xm3, xm1 ; m3 = [57 47 37 27 56 46 36 26 55 45 35 25 54 44 34 24]
> >+ lea r5, [r5 + r1 * 2]
> >+ movq xm7, [r5] ; m7 = row 6
> >+ punpcklbw xm2, xm7 ; m2 = [67 57 66 56 65 55 64 54 63 53 62 52 61 51 60 50]
> >+ punpcklwd xm10, xm4, xm2 ; m10 = [63 53 43 33 62 52 42 32 61 51 41 31 60 50 40 30]
> >+ punpckhwd xm4, xm2 ; m4 = [67 57 47 37 66 56 46 36 65 55 45 35 64 54 44 34]
> >+ vinserti128 m9, m9, xm10, 1 ; m9 = [63 53 43 33 62 52 42 32 61 51 41 31 60 50 40 30] - [53 43 33 23 52 42 32 22 51 41 31 21 50 40 30 20]
> >+ vinserti128 m3, m3, xm4, 1 ; m3 = [67 57 47 37 66 56 46 36 65 55 45 35 64 54 44 34] - [57 47 37 27 56 46 36 26 55 45 35 25 54 44 34 24]
> >+ movq xm11, [r5 + r1] ; m11 = row 7
> >+ punpcklbw xm7, xm11 ; m7 = [77 67 76 66 75 65 74 64 73 63 72 62 71 61 70 60]
> >+ punpcklwd xm12, xm1, xm7 ; m12 = [73 63 53 43 72 62 52 42 71 61 51 41 70 60 50 40]
> >+ punpckhwd xm1, xm7 ; m1 = [77 67 57 47 76 66 56 46 75 65 55 45 74 64 54 44]
> >+ lea r5, [r5 + r1 * 2]
> >+ movq xm10, [r5] ; m10 = row 8
> >+ punpcklbw xm11, xm10 ; m11 = [87 77 86 76 85 75 84 74 83 73 82 72 81 71 80 70]
> >+ punpcklwd xm4, xm2, xm11 ; m4 = [83 73 63 53 82 72 62 52 81 71 61 51 80 70 60 50]
> >+ punpckhwd xm2, xm11 ; m2 = [87 77 67 57 86 76 66 56 85 75 65 55 84 74 64 54]
> >+ vinserti128 m12, m12, xm4, 1 ; m12 = [83 73 63 53 82 72 62 52 81 71 61 51 80 70 60 50] - [73 63 53 43 72 62 52 42 71 61 51 41 70 60 50 40]
> >+ vinserti128 m1, m1, xm2, 1 ; m1 = [87 77 67 57 86 76 66 56 85 75 65 55 84 74 64 54] - [77 67 57 47 76 66 56 46 75 65 55 45 74 64 54 44]
> >+
> >+ pmaddubsw m5, m0
> >+ pmaddubsw m6, m0
> >+ pmaddubsw m1, m8
> >+ pmaddubsw m12, m8
> >+ vbroadcasti128 m2, [pw_1]
> >+ pmaddwd m1, m2
> >+ pmaddwd m12, m2
> >+ pmaddwd m5, m2
> >+ pmaddwd m6, m2
> >+ paddd m5, m12
> >+ paddd m6, m1
> >+ packssdw m5, m6
> >+ pmulhrsw m5, [pw_512] ; m5 = word: row 0, row 1
> >+
> >+ movq xm6, [r5 + r1] ; m6 = row 9
> >+ punpcklbw xm10, xm6 ; m10 = [97 87 96 86 95 85 94 84 93 83 92 82 91 81 90 80]
> >+ punpcklwd xm1, xm7, xm10 ; m1 = [93 83 73 63 92 82 72 62 91 81 71 61 90 80 70 60]
> >+ punpckhwd xm7, xm10 ; m7 = [97 87 77 67 96 86 76 66 95 85 75 65 94 84 74 64]
> >+ movq xm12, [r5 + r1 * 2] ; m12 = row 10
> >+ punpcklbw xm6, xm12 ; m6 = [A7 97 A6 96 A5 95 A4 94 A3 93 A2 92 A1 91 A0 90]
> >+ punpcklwd xm2, xm11, xm6 ; m2 = [A3 93 83 73 A2 92 82 72 A1 91 81 71 A0 90 80 70]
> >+ punpckhwd xm11, xm6 ; m11 = [A7 97 87 77 A6 96 86 76 A5 95 85 75 A4 94 84 74]
> >+ vinserti128 m1, m1, xm2, 1 ; m1 = [A3 93 83 73 A2 92 82 72 A1 91 81 71 A0 90 80 70] - [93 83 73 63 92 82 72 62 91 81 71 61 90 80 70 60]
> >+ vinserti128 m7, m7, xm11, 1 ; m7 = [A7 97 87 77 A6 96 86 76 A5 95 85 75 A4 94 84 74] - [97 87 77 67 96 86 76 66 95 85 75 65 94 84 74 64]
> >+
> >+ pmaddubsw m9, m0
> >+ pmaddubsw m3, m0
> >+ pmaddubsw m7, m8
> >+ pmaddubsw m1, m8
> >+ vbroadcasti128 m2, [pw_1]
> >+ pmaddwd m7, m2
> >+ pmaddwd m1, m2
> >+ pmaddwd m9, m2
> >+ pmaddwd m3, m2
> >+ paddd m9, m1
> >+ paddd m3, m7
> >+ packssdw m9, m3
> >+ pmulhrsw m9, [pw_512] ; m9 = word: row 2, row 3
> >+
> >+ packuswb m5, m9 ; m5 = row 0 row 2 row 1 row 3
> >+ vextracti128 xm2, m5, 1 ; m2 = row 1 row 3
> >+ movq [r2], xm5
> >+ movq [r2 + r3], xm2
> >+ lea r2, [r2 + r3 * 2]
> >+ movhps [r2], xm5
> >+ movhps [r2 + r3], xm2
> >+
> >+ lea r2, [r2 + r3 * 2]
> >+ lea r0, [r0 + r1 * 4]
> >+ dec r4d
> >+ jnz .loop
> >+ RET
> >+%endif
>
>
>
> The algorithm needs to be modified: I process/transpose 4 rows at a time because my demo was 4x4; for the 8x8 case, I suggest using a 2-rows format to reduce the PMADDWD count and the number of registers.
>
>
>
>
> _______________________________________________
> x265-devel mailing list
> x265-devel at videolan.org
> https://mailman.videolan.org/listinfo/x265-devel
>
>
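
To make the 2-rows suggestion above concrete, here is a rough plain-C sketch
of that grouping (illustrative only; the expected PMADDWD and register savings
apply to the AVX2 mapping of this structure, not to the scalar code itself).
Each iteration produces two output rows from a shared 9-row source window.

    #include <stdint.h>

    /* One 8-tap column filter: (sum + 32) >> 6, clipped to 8 bits. */
    static uint8_t tap8(const uint8_t *s, intptr_t stride, const int16_t c[8])
    {
        int sum = 0;
        for (int i = 0; i < 8; i++)
            sum += c[i] * s[i * stride];
        int v = (sum + 32) >> 6;
        return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }

    /* 2-rows-per-iteration grouping: 4 iterations, each producing output rows
     * y and y + 1 from source rows y - 3 .. y + 5. */
    static void vert_pp_8x8_2rows(const uint8_t *src, intptr_t srcStride,
                                  uint8_t *dst, intptr_t dstStride,
                                  const int16_t coeff[8])
    {
        src -= 3 * srcStride;
        for (int y = 0; y < 8; y += 2)
        {
            for (int x = 0; x < 8; x++)
            {
                dst[x]             = tap8(src + x,             srcStride, coeff);
                dst[dstStride + x] = tap8(src + srcStride + x, srcStride, coeff);
            }
            src += 2 * srcStride;
            dst += 2 * dstStride;
        }
    }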