<div dir="ltr"><br><div class="gmail_quote">---------- Forwarded message ----------<br>From: <b class="gmail_sendername">chen</b> <span dir="ltr"><<a href="mailto:chenm003@163.com">chenm003@163.com</a>></span><br>Date: Fri, Oct 17, 2014 at 3:11 AM<br>Subject: Re: [x265] [PATCH] weight_pp avx2 asm code, improved from 8608.65 cycles to 5138.09 cycles over sse version of asm code<br>To: Development for x265 <<a href="mailto:x265-devel@videolan.org">x265-devel@videolan.org</a>><br><br><br><div style><div style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7"> </div><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7"><br>At 2014-10-16 17:20:13,<a href="mailto:praveen@multicorewareinc.com" target="_blank">praveen@multicorewareinc.com</a> wrote:
># HG changeset patch
># User Praveen Tiwari
># Date 1413451199 -19800
># Node ID 858be8d7d7176ab6c6d01cf92d00c8478fe99b34
># Parent 79702581ec824a2a375aebe228d69c3930aeea96
>weight_pp avx2 asm code, improved from 8608.65 cycles to 5138.09 cycles over sse version of asm code
>
>diff -r 79702581ec82 -r 858be8d7d717 source/common/x86/pixel-util8.asm
>--- a/source/common/x86/pixel-util8.asm Wed Oct 15 17:49:35 2014 -0500
>+++ b/source/common/x86/pixel-util8.asm Thu Oct 16 14:49:59 2014 +0530
>@@ -1375,6 +1375,60 @@
>
> RET
>
>+INIT_YMM avx2
>+cglobal weight_pp, 6, 7, 6
>+
>+ mov r6d, r6m
>+ shl r6d, 6 ; m0 = [w0<<6]
>+ movd xm0, r6d
>+
>+ movd xm1, r7m ; m1 = [round]
>+ punpcklwd xm0, xm1
>+ pshufd xm0, xm0, 0
>+ vinserti128 m0, m0, xm0, 1 ; assuming both (w0<<6) and round are using maximum of 16 bits each, m0 = [w0<<6 round]
</pre><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">>>vpbroadcastd is better</pre><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">Yeah, exactly. I tried to replace (pshufd xm0, xm0, 0) + (vinserti128 m0, m0, xm0, 1) with vpbroadcastd m0, xm0 (per the documented syntax, __m256i _mm256_broadcastd_epi32(__m128i a)), but it throws a build error: invalid combination of opcode and operands.</pre><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">And we use weight_pp in only four places, all of which have the same stride in r2 and r3, so we can simplify the interface and free more registers here; you can also combine w0 and round in a general-purpose register to improve performance.</pre><span class="" style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7"><pre> </pre><pre>>+
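</pre><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">For reference, the per-pixel arithmetic of the quoted kernel (pmaddwd of each [pixel, 1] word pair against [w0*64, round], then psrad by the shift, paddd of the offset, and packing with saturation) can be modelled in scalar C. The function and parameter names below are illustrative only, not from the x265 source:

```c
/* Scalar sketch of the quoted AVX2 weight_pp math (illustrative names).
 * pmaddwd on the [pixel, 1] and [w0*64, round] word pairs computes
 * pixel*(w0*64) + round; psrad shifts, paddd adds the offset, and
 * packssdw + packuswb saturate the result into [0, 255]. */
static unsigned char weight_pixel(int pixel, int w0, int round,
                                  int shift, int offset)
{
    int v = (pixel * (w0 * 64) + round) >> shift;  /* w0*64 is w0 shl 6 */
    v += offset;
    if (v > 255) v = 255;   /* packuswb saturates high... */
    if (0 > v)   v = 0;     /* ...and low */
    return (unsigned char)v;
}
```
</pre><pre>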
>+ movd xm1, r8m
>+ vpbroadcastd m2, r9m
>+ mova m5, [pw_1]
>+ sub r2d, r4d
>+ sub r3d, r4d
>+
>+.loopH:
>+ mov r6d, r4d
>+ shr r6d, 4
</pre></span><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">Why do the shr on every row? r4d doesn't change across rows, so the count can be computed once before .loopH.</pre><span class="" style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7"><pre>>+.loopW:
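</pre><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">In C terms, the point is that the .loopW trip count (width shifted right by 4) is loop-invariant, so it can be computed once rather than with mov + shr on every row. A minimal sketch, with illustrative names only:

```c
/* Loop-invariant hoisting, as suggested for the asm above: the
 * per-row block count (width/16) never changes, so compute it once
 * before the row loop. Names are illustrative, not from x265. */
static int count_blocks(int width, int height)
{
    int blocks = width >> 4;      /* hoisted out of the row loop */
    int total = 0;
    int y, x;
    for (y = 0; y != height; y++)
        for (x = 0; x != blocks; x++)
            total++;              /* stand-in for one 16-pixel block */
    return total;
}
```
</pre><pre>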
>+ movu xm4, [r0]
>+ pmovzxbw m4, xm4
</pre></span><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">pmovzxbw doesn't need an aligned address; you can load from [r0] with it directly and drop the movu.</pre><span class="" style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7"><pre>>+ punpcklwd m3, m4, m5
>+ pmaddwd m3, m0
>+ psrad m3, xm1
>+ paddd m3, m2
>+
>+ punpckhwd m4, m5
>+ pmaddwd m4, m0
>+ psrad m4, xm1
>+ paddd m4, m2
>+
>+ packssdw m3, m4
>+ vextracti128 xm4, m3, 1
>+ packuswb m3, m4
</pre></span><pre style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7">How about vpermq + packuswb on xm3 instead of vextracti128 + packuswb?</pre><span class="" style="color:rgb(0,0,0);font-family:arial;font-size:14px;line-height:1.7"><pre>>+ movu [r1], xm3
>+
>+ add r0, 16
>+ add r1, 16
>+
>+ dec r6d
>+ jnz .loopW
>+
>+ lea r0, [r0 + r2]
>+ lea r1, [r1 + r3]
>+
>+ dec r5d
>+ jnz .loopH
>+
>+ RET
</pre></span></div><br>_______________________________________________<br>
x265-devel mailing list<br>
<a href="mailto:x265-devel@videolan.org">x265-devel@videolan.org</a><br>
<a href="https://mailman.videolan.org/listinfo/x265-devel" target="_blank">https://mailman.videolan.org/listinfo/x265-devel</a><br>
<br></div><br></div>