We tested subjective quality with this patch, and there is a clear drop in fine detail, mostly due to increased skips and merges. We're working to test the following possibilities:

a) Always recurse to lower depths, irrespective of the skip/merge decision. This will slow down the slower and veryslow presets.

b) Compare the skip and merge costs against a threshold. If skipcost << mergecost, there is a high likelihood that skip will be the lowest RD-cost mode anyway (see the sketch below).
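Something along the following lines, right after checkMerge2Nx2N_rd5_6() returns (untested; the 4x ratio is just a placeholder, and it assumes Mode::rdCost holds the total RD cost as it does elsewhere in analysis.cpp):

    /* sketch of (b): only treat a residual-free merge result as decisive
     * when skip beats merge-with-residual by a wide margin;
     * the 4x ratio is a placeholder tuning constant */
    bool foundSkip = false;
    if (md.bestMode && !md.bestMode->cu.getQtRootCbf(0))
    {
        uint64_t skipCost  = md.pred[PRED_SKIP].rdCost;
        uint64_t mergeCost = md.pred[PRED_MERGE].rdCost;
        foundSkip = (skipCost * 4 < mergeCost);
    }

If skip is not decisively cheaper, we would still recurse into the split sub-CUs before accepting it.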

On Tue, May 26, 2015 at 5:16 PM, Steve Borho <steve@borho.org> wrote:
On 05/26, Deepthi Nandakumar wrote:
> On Mon, May 25, 2015 at 8:31 PM, <ashok@multicorewareinc.com> wrote:
>
> > # HG changeset patch
> > # User Ashok Kumar Mishra <ashok@multicorewareinc.com>
> > # Date 1432215988 -19800
> > #      Thu May 21 19:16:28 2015 +0530
> > # Node ID b11c2f1f8425425cfe190a45c710b65304d07db1
> > # Parent  a7bf7a150a705489cb63d0454c59ec599bad8c93
> > analysis: re-order RD 5/6 analysis to do splits before ME or intra
> >
> > This commit changes outputs because splits used to be avoided when an inter or
> > intra mode was chosen without residual coding. This recursion early-out is no
> > longer possible. Only merge without residual (aka skip) can abort recursion.
> >
> > This commit changes the order of analysis such that the four split blocks are
> > analyzed prior to attempting any ME or intra modes. Future commits will use
> > the knowledge learned during split analysis to avoid unlikely work at the
> > current depth (reducing motion references, avoiding unlikely intra,
> > rectangular, asymmetric, and lossless modes).
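For illustration only, one hypothetical shape the "future commits" idea could take after the split loop; this is not part of the patch, and it assumes CUData::isIntra() reports the coded prediction mode of a sub-part:

    /* hypothetical follow-up: after Step 2, note whether any of the
     * four sub-CUs chose an intra mode; if none did, intra at the
     * current depth is unlikely to win and could be elided */
    bool subCUsUsedIntra = false;
    uint32_t childParts = cuGeom.numPartitions >> 2; /* parts per sub-CU */
    for (uint32_t subPartIdx = 0; subPartIdx < 4; subPartIdx++)
        if (splitCU->isIntra(childParts * subPartIdx))
            subCUsUsedIntra = true;

A later commit might then skip the intra mode checks at the current depth whenever subCUsUsedIntra is false.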
> >
> Ok, I've edited this commit message, because it gives the impression that
> the new patch introduces fewer early-outs, whereas it actually makes
> early-outs more likely.
> Earlier: abort recursion if Best(Merge, Skip, All Inter, Intra) is a skip mode.
> New patch: abort recursion if Best(Merge, Skip) is a skip mode.
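In code terms, the two conditions from the patch (recurseIntoSubCUs() here is shorthand for the split loop, not a real x265 function):

    /* earlier behavior: recurse unless the best of all modes tried so
     * far (merge/skip, inter, intra) turned out to be a skip */
    if (mightSplit && (!md.bestMode || !md.bestMode->cu.isSkipped(0)))
        recurseIntoSubCUs();

    /* new patch: recurse unless merge/skip alone already produced a
     * residual-free (skip) result, before any ME or intra is tried */
    if (mightSplit && !foundSkip)
        recurseIntoSubCUs();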
>
> We should do some subjective quality testing before we push this in. Is
> this making skips more likely and blurring the video?

FWIW: skips are normally not blurry, but I agree with subjective testing
of the changes.

> > diff -r a7bf7a150a70 -r b11c2f1f8425 source/encoder/analysis.cpp
> > --- a/source/encoder/analysis.cpp  Fri May 22 14:29:35 2015 +0530
> > +++ b/source/encoder/analysis.cpp  Thu May 21 19:16:28 2015 +0530
> > @@ -1170,14 +1170,72 @@
> >          }
> >      }
> >
> > +    bool foundSkip = false;
> > +    /* Step 1. Evaluate Merge/Skip candidates for likely early-outs */
> >      if (mightNotSplit)
> >      {
> >          md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom, qp);
> >          md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom, qp);
> >          checkMerge2Nx2N_rd5_6(md.pred[PRED_SKIP], md.pred[PRED_MERGE], cuGeom, false);
> > -        bool earlySkip = m_param->bEnableEarlySkip && md.bestMode && !md.bestMode->cu.getQtRootCbf(0);
> > +        foundSkip = md.bestMode && !md.bestMode->cu.getQtRootCbf(0);
> > +    }
> >
> > -    if (!earlySkip)
> > +    // estimate split cost
> > +    /* Step 2. Evaluate each of the 4 split sub-blocks in series */
> > +    if (mightSplit && !foundSkip)
> > +    {
> > +        Mode* splitPred = &md.pred[PRED_SPLIT];
> > +        splitPred->initCosts();
> > +        CUData* splitCU = &splitPred->cu;
> > +        splitCU->initSubCU(parentCTU, cuGeom, qp);
> > +
> > +        uint32_t nextDepth = depth + 1;
> > +        ModeDepth& nd = m_modeDepth[nextDepth];
> > +        invalidateContexts(nextDepth);
> > +        Entropy* nextContext = &m_rqt[depth].cur;
> > +        int nextQP = qp;
> > +
> > +        for (uint32_t subPartIdx = 0; subPartIdx < 4; subPartIdx++)
> > +        {
> > +            const CUGeom& childGeom = *(&cuGeom + cuGeom.childOffset + subPartIdx);
> > +            if (childGeom.flags & CUGeom::PRESENT)
> > +            {
> > +                m_modeDepth[0].fencYuv.copyPartToYuv(nd.fencYuv, childGeom.absPartIdx);
> > +                m_rqt[nextDepth].cur.load(*nextContext);
> > +
> > +                if (m_slice->m_pps->bUseDQP && nextDepth <= m_slice->m_pps->maxCuDQPDepth)
> > +                    nextQP = setLambdaFromQP(parentCTU, calculateQpforCuSize(parentCTU, childGeom));
> > +
> > +                compressInterCU_rd5_6(parentCTU, childGeom, zOrder, nextQP);
> > +
> > +                // Save best CU and pred data for this sub CU
> > +                splitCU->copyPartFrom(nd.bestMode->cu, childGeom, subPartIdx);
> > +                splitPred->addSubCosts(*nd.bestMode);
> > +                nd.bestMode->reconYuv.copyToPartYuv(splitPred->reconYuv, childGeom.numPartitions * subPartIdx);
> > +                nextContext = &nd.bestMode->contexts;
> > +            }
> > +            else
> > +            {
> > +                splitCU->setEmptyPart(childGeom, subPartIdx);
> > +                zOrder += g_depthInc[g_maxCUDepth - 1][nextDepth];
> > +            }
> > +        }
> > +        nextContext->store(splitPred->contexts);
> > +        if (mightNotSplit)
> > +            addSplitFlagCost(*splitPred, cuGeom.depth);
> > +        else
> > +            updateModeCost(*splitPred);
> > +
> > +        checkDQPForSplitPred(*splitPred, cuGeom);
> > +    }
> > +
> > +    /* Step 3. Evaluate ME (2Nx2N, rect, amp) and intra modes at current depth */
> > +    if (mightNotSplit)
> > +    {
> > +        if (m_slice->m_pps->bUseDQP && depth <= m_slice->m_pps->maxCuDQPDepth && m_slice->m_pps->maxCuDQPDepth != 0)
> > +            setLambdaFromQP(parentCTU, qp);
> > +
> > +        if (!(foundSkip && m_param->bEnableEarlySkip))
> >          {
> >              md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom, qp);
> >              checkInter_rd5_6(md.pred[PRED_2Nx2N], cuGeom, SIZE_2Nx2N);
> > @@ -1263,59 +1321,13 @@
> >              addSplitFlagCost(*md.bestMode, cuGeom.depth);
> >      }
> >
> > -    // estimate split cost
> > -    if (mightSplit && (!md.bestMode || !md.bestMode->cu.isSkipped(0)))
> > -    {
> > -        Mode* splitPred = &md.pred[PRED_SPLIT];
> > -        splitPred->initCosts();
> > -        CUData* splitCU = &splitPred->cu;
> > -        splitCU->initSubCU(parentCTU, cuGeom, qp);
> > -
> > -        uint32_t nextDepth = depth + 1;
> > -        ModeDepth& nd = m_modeDepth[nextDepth];
> > -        invalidateContexts(nextDepth);
> > -        Entropy* nextContext = &m_rqt[depth].cur;
> > -        int nextQP = qp;
> > -
> > -        for (uint32_t subPartIdx = 0; subPartIdx < 4; subPartIdx++)
> > -        {
> > -            const CUGeom& childGeom = *(&cuGeom + cuGeom.childOffset + subPartIdx);
> > -            if (childGeom.flags & CUGeom::PRESENT)
> > -            {
> > -                m_modeDepth[0].fencYuv.copyPartToYuv(nd.fencYuv, childGeom.absPartIdx);
> > -                m_rqt[nextDepth].cur.load(*nextContext);
> > -
> > -                if (m_slice->m_pps->bUseDQP && nextDepth <= m_slice->m_pps->maxCuDQPDepth)
> > -                    nextQP = setLambdaFromQP(parentCTU, calculateQpforCuSize(parentCTU, childGeom));
> > -
> > -                compressInterCU_rd5_6(parentCTU, childGeom, zOrder, nextQP);
> > -
> > -                // Save best CU and pred data for this sub CU
> > -                splitCU->copyPartFrom(nd.bestMode->cu, childGeom, subPartIdx);
> > -                splitPred->addSubCosts(*nd.bestMode);
> > -                nd.bestMode->reconYuv.copyToPartYuv(splitPred->reconYuv, childGeom.numPartitions * subPartIdx);
> > -                nextContext = &nd.bestMode->contexts;
> > -            }
> > -            else
> > -            {
> > -                splitCU->setEmptyPart(childGeom, subPartIdx);
> > -                zOrder += g_depthInc[g_maxCUDepth - 1][nextDepth];
> > -            }
> > -        }
> > -        nextContext->store(splitPred->contexts);
> > -        if (mightNotSplit)
> > -            addSplitFlagCost(*splitPred, cuGeom.depth);
> > -        else
> > -            updateModeCost(*splitPred);
> > -
> > -        checkDQPForSplitPred(*splitPred, cuGeom);
> > -        checkBestMode(*splitPred, depth);
> > -    }
> > +    /* compare split RD cost against best cost */
> > +    if (mightSplit && !foundSkip)
> > +        checkBestMode(md.pred[PRED_SPLIT], depth);
> >
> >      /* Copy best data to encData CTU and recon */
> >      md.bestMode->cu.copyToPic(depth);
> > -    if (md.bestMode != &md.pred[PRED_SPLIT])
> > -        md.bestMode->reconYuv.copyToPicYuv(*m_frame->m_reconPic, parentCTU.m_cuAddr, cuGeom.absPartIdx);
> > +    md.bestMode->reconYuv.copyToPicYuv(*m_frame->m_reconPic, parentCTU.m_cuAddr, cuGeom.absPartIdx);
> > }
> >
> > /* sets md.bestMode if a valid merge candidate is found, else leaves it NULL */

--
Steve Borho
_______________________________________________
x265-devel mailing list
x265-devel@videolan.org
https://mailman.videolan.org/listinfo/x265-devel