The initial version of this patch from Gopu had dqp-depth. I tried explaining in
English how to set it with respect to max/min CU size, then decided qgSize was
easier to understand. I'm not religious about it; we can always change it back.

About partIdx, I meant to check with you/Ashok. Some combination of
cuGeom.absPartIdx and depth should be sufficient, but it wasn't working out. Let
me take another crack at it; even qp can be avoided in that case.

For some reason, quant QP and search QP are configured separately
(Quant::setQPforQuant and Search::setQP).

On Mon, Apr 6, 2015 at 12:18 AM, Steve Borho <steve@borho.org> wrote:
On 04/05, deepthi@multicorewareinc.com wrote:
> # HG changeset patch<br>
> # User Deepthi Nandakumar <<a href="mailto:deepthi@multicorewareinc.com">deepthi@multicorewareinc.com</a>><br>
> # Date 1427100822 -19800<br>
> # Mon Mar 23 14:23:42 2015 +0530<br>
> # Node ID d6e059bd8a9cd0cb9aad7444b1a141a59ac01193<br>
> # Parent 335c728bbd62018e1e3ed03a4df0514c213e9a4e<br>
> aq: implementation of fine-grained adaptive quantization<br>
><br>
> Currently adaptive quantization adjusts the QP values on 64x64 pixel CodingTree<br>
> units (CTUs) across a video frame. The new param option --qg-size will<br>
> enable QP to be adjusted to individual quantization groups (QGs) of size 64/32/16<br>
><br>
> diff -r 335c728bbd62 -r d6e059bd8a9c doc/reST/cli.rst<br>
> --- a/doc/reST/cli.rst Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/doc/reST/cli.rst Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -1111,6 +1111,13 @@<br>
><br>
> **Range of values:** 0.0 to 3.0<br>
><br>
> +.. option:: --qg-size <64|32|16><br>
> + Enable adaptive quantization for sub-CTUs. This parameter specifies<br>
> + the minimum CU size at which QP can be adjusted, ie. Quantization Group<br>
> + size. Allowed range of values are 64, 32, 16 provided this falls within<br>
> + the inclusive range [maxCUSize, minCUSize]. Experimental.<br>
> + Default: same as maxCUSize<br>

I can't decide if this should be quant group size or quant group depth - pros and
cons both ways

> .. option:: --cutree, --no-cutree<br>
><br>
> Enable the use of lookahead's lowres motion vector fields to<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/common/cudata.cpp<br>
> --- a/source/common/cudata.cpp Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/common/cudata.cpp Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -298,7 +298,7 @@<br>
> }<br>
><br>
> // initialize Sub partition<br>
> -void CUData::initSubCU(const CUData& ctu, const CUGeom& cuGeom)<br>
> +void CUData::initSubCU(const CUData& ctu, const CUGeom& cuGeom, int qp)<br>
> {<br>
> m_absIdxInCTU = cuGeom.absPartIdx;<br>
> m_encData = ctu.m_encData;<br>
> @@ -312,8 +312,8 @@<br>
> m_cuAboveRight = ctu.m_cuAboveRight;<br>
> X265_CHECK(m_numPartitions == cuGeom.numPartitions, "initSubCU() size mismatch\n");<br>
><br>
> - /* sequential memsets */<br>
> - m_partSet((uint8_t*)m_qp, (uint8_t)ctu.m_qp[0]);<br>
> + m_partSet((uint8_t*)m_qp, (uint8_t)qp);<br>

longer term, this could probably be simplified. if all CU modes are
evaluated at the same QP, there's no point in setting this value in each
sub-CU. we could derive the CTU's final m_qp[] based on the depth at
each coded CU at the end of analysis; and avoid all these memsets

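For illustration, a rough sketch of that idea (applyFinalQPs is a hypothetical
helper, not part of this patch; it assumes the per-partition log2 sizes and the
m_qp[depth][] cache introduced below are available after analysis):

    /* one pass over the CTU after analysis: derive each coded CU's depth from
     * its log2 size, clamp it to the delta-QP signalling depth, and write the
     * cached QP, instead of memsetting m_qp in every candidate sub-CU */
    void Analysis::applyFinalQPs(CUData& ctu) const
    {
        uint32_t log2CTUSize = g_log2Size[m_param->maxCUSize];
        for (uint32_t absPartIdx = 0; absPartIdx < ctu.m_numPartitions; absPartIdx++)
        {
            uint32_t depth = log2CTUSize - ctu.m_log2CUSize[absPartIdx];
            depth = X265_MIN(depth, (uint32_t)m_slice->m_pps->maxCuDQPDepth);
            uint32_t partIdx = absPartIdx / (ctu.m_numPartitions >> (2 * depth));
            ctu.m_qp[absPartIdx] = (int8_t)x265_clip3(QP_MIN, QP_MAX_SPEC, m_qp[depth][partIdx]);
        }
    }
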
> m_partSet(m_log2CUSize, (uint8_t)cuGeom.log2CUSize);<br>
> m_partSet(m_lumaIntraDir, (uint8_t)DC_IDX);<br>
> m_partSet(m_tqBypass, (uint8_t)m_encData->m_param->bLossless);<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/common/cudata.h<br>
> --- a/source/common/cudata.h Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/common/cudata.h Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -182,7 +182,7 @@<br>
> static void calcCTUGeoms(uint32_t ctuWidth, uint32_t ctuHeight, uint32_t maxCUSize, uint32_t minCUSize, CUGeom cuDataArray[CUGeom::MAX_GEOMS]);<br>
><br>
> void initCTU(const Frame& frame, uint32_t cuAddr, int qp);<br>
> - void initSubCU(const CUData& ctu, const CUGeom& cuGeom);<br>
> + void initSubCU(const CUData& ctu, const CUGeom& cuGeom, int qp);<br>
> void initLosslessCU(const CUData& cu, const CUGeom& cuGeom);<br>
><br>
> void copyPartFrom(const CUData& cu, const CUGeom& childGeom, uint32_t subPartIdx);<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/common/param.cpp<br>
> --- a/source/common/param.cpp Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/common/param.cpp Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -209,6 +209,7 @@<br>
> param->rc.zones = NULL;<br>
> param->rc.bEnableSlowFirstPass = 0;<br>
> param->rc.bStrictCbr = 0;<br>
> + param->rc.QGSize = 64; /* Same as maxCUSize */<br>

if this was quantGroupDepth we could configure it as 1 and not care
about preset or CTU size

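A sketch of the depth-based alternative, for comparison (qgDepth is a
hypothetical field, not something in this patch):

    /* depth-based: "one split below the CTU", whatever --ctu is, so presets
     * would not need to know the CTU size */
    param->rc.qgDepth = 1;

    /* size-based, as in this patch: has to be kept in step with maxCUSize
     * (32 for 64x64 CTUs, 16 for 32x32 CTUs) */
    param->rc.QGSize = param->maxCUSize >> 1;
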
> /* Video Usability Information (VUI) */<br>
> param->vui.aspectRatioIdc = 0;<br>
> @@ -263,6 +264,7 @@<br>
> param->rc.aqStrength = 0.0;<br>
> param->rc.aqMode = X265_AQ_NONE;<br>
> param->rc.cuTree = 0;<br>
> + param->rc.QGSize = 32;<br>
> param->bEnableFastIntra = 1;<br>
> }<br>
> else if (!strcmp(preset, "superfast"))<br>
> @@ -279,6 +281,7 @@<br>
> param->rc.aqStrength = 0.0;<br>
> param->rc.aqMode = X265_AQ_NONE;<br>
> param->rc.cuTree = 0;<br>
> + param->rc.QGSize = 32;<br>
> param->bEnableSAO = 0;<br>
> param->bEnableFastIntra = 1;<br>
> }<br>
> @@ -292,6 +295,7 @@<br>
> param->rdLevel = 2;<br>
> param->maxNumReferences = 1;<br>
> param->rc.cuTree = 0;<br>
> + param->rc.QGSize = 32;<br>
> param->bEnableFastIntra = 1;<br>
> }<br>
> else if (!strcmp(preset, "faster"))<br>
> @@ -843,6 +847,7 @@<br>
> OPT2("pools", "numa-pools") p->numaPools = strdup(value);<br>
> OPT("lambda-file") p->rc.lambdaFileName = strdup(value);<br>
> OPT("analysis-file") p->analysisFileName = strdup(value);<br>
> + OPT("qg-size") p->rc.QGSize = atoi(value);<br>
> else<br>
> return X265_PARAM_BAD_NAME;<br>
> #undef OPT<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/encoder/analysis.cpp<br>
> --- a/source/encoder/analysis.cpp Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/encoder/analysis.cpp Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -75,6 +75,8 @@<br>
> m_reuseInterDataCTU = NULL;<br>
> m_reuseRef = NULL;<br>
> m_reuseBestMergeCand = NULL;<br>
> + for (int i = 0; i < NUM_CU_DEPTH; i++)<br>
> + m_qp[i] = NULL;<br>
> }<br>
><br>
> bool Analysis::create(ThreadLocalData *tld)<br>
> @@ -101,6 +103,7 @@<br>
> ok &= md.pred[j].reconYuv.create(cuSize, csp);<br>
> md.pred[j].fencYuv = &md.fencYuv;<br>
> }<br>
> + m_qp[depth] = X265_MALLOC(int, 1i64 << (depth << 1));<br>

checked malloc

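i.e. something along the lines of the existing pattern in create(), sketched
here against the 'ok' accumulator already used a few lines above (the 1i64
suffix is also MSVC-only; a portable cast is safer):

    m_qp[depth] = X265_MALLOC(int, (size_t)1 << (depth << 1));
    ok &= m_qp[depth] != NULL;
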
> }<br>
><br>
> return ok;<br>
> @@ -118,6 +121,7 @@<br>
> m_modeDepth[i].pred[j].predYuv.destroy();<br>
> m_modeDepth[i].pred[j].reconYuv.destroy();<br>
> }<br>
> + X265_FREE(m_qp[i]);<br>
> }<br>
> }<br>
><br>
> @@ -132,6 +136,34 @@<br>
> m_modeDepth[i].pred[j].invalidate();<br>
> #endif<br>
> invalidateContexts(0);<br>
> + if (m_slice->m_pps->bUseDQP)<br>
> + {<br>
> + CUGeom *curCUGeom = (CUGeom *)&cuGeom;<br>
> + CUGeom *parentGeom = (CUGeom *)&cuGeom;<br>

these should probably be kept const

> +<br>
> + m_qp[0][0] = calculateQpforCuSize(ctu, *curCUGeom);<br>
> + curCUGeom = curCUGeom + curCUGeom->childOffset;<br>
> + parentGeom = curCUGeom;<br>
> + if (m_slice->m_pps->maxCuDQPDepth >= 1)<br>
> + {<br>
> + for (int i = 0; i < 4; i++)<br>
> + {<br>
> + m_qp[1][i] = calculateQpforCuSize(ctu, *(parentGeom + i));<br>
> + if (m_slice->m_pps->maxCuDQPDepth == 2)<br>
> + {<br>
> + curCUGeom = parentGeom + i + (parentGeom + i)->childOffset;<br>
> + for (int j = 0; j < 4; j++)<br>
> + m_qp[2][i * 4 + j] = calculateQpforCuSize(ctu, *(curCUGeom + j));<br>
> + }<br>
> + }<br>
> + }<br>
> + this->setQP(*m_slice, m_qp[0][0]);<br>
> + m_qp[0][0] = x265_clip3(QP_MIN, QP_MAX_SPEC, m_qp[0][0]);<br>
> + ctu.setQPSubParts((int8_t)m_qp[0][0], 0, 0);<br>

So all the QPs at every potential sub-CU are known at the start of CTU
compression. Ok.

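For what it's worth, a recursive sketch of that precompute (cacheQP is a
hypothetical helper; the absPartIdx / numPartitions index is the derivation
discussed further down), filling the same m_qp[depth][partIdx] table as the
unrolled loops above:

    void Analysis::cacheQP(const CUData& ctu, const CUGeom& cuGeom)
    {
        uint32_t depth = cuGeom.depth;
        /* z-order index of this CU among the CUs of its own depth */
        uint32_t partIdx = cuGeom.absPartIdx / cuGeom.numPartitions;

        m_qp[depth][partIdx] = calculateQpforCuSize(ctu, cuGeom);

        if (depth < (uint32_t)m_slice->m_pps->maxCuDQPDepth)
            for (int i = 0; i < 4; i++)
                cacheQP(ctu, *(&cuGeom + cuGeom.childOffset + i));
    }
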
> + }<br>
> + else<br>
> + m_qp[0][0] = m_slice->m_sliceQp;<br>
> +<br>
> m_quant.setQPforQuant(ctu);<br>
> m_rqt[0].cur.load(initialContext);<br>
> m_modeDepth[0].fencYuv.copyFromPicYuv(*m_frame->m_fencPic, ctu.m_cuAddr, 0);<br>
> @@ -155,7 +187,7 @@<br>
> uint32_t zOrder = 0;<br>
> if (m_slice->m_sliceType == I_SLICE)<br>
> {<br>
> - compressIntraCU(ctu, cuGeom, zOrder);<br>
> + compressIntraCU(ctu, cuGeom, zOrder, m_qp[0][0], 0);<br>
> if (m_param->analysisMode == X265_ANALYSIS_SAVE && m_frame->m_analysisData.intraData)<br>
> {<br>
> CUData *bestCU = &m_modeDepth[0].bestMode->cu;<br>
> @@ -173,18 +205,18 @@<br>
> * they are available for intra predictions */<br>
> m_modeDepth[0].fencYuv.copyToPicYuv(*m_frame->m_reconPic, ctu.m_cuAddr, 0);<br>
><br>
> - compressInterCU_rd0_4(ctu, cuGeom);<br>
> + compressInterCU_rd0_4(ctu, cuGeom, m_qp[0][0], 0);<br>
><br>
> /* generate residual for entire CTU at once and copy to reconPic */<br>
> encodeResidue(ctu, cuGeom);<br>
> }<br>
> else if (m_param->bDistributeModeAnalysis && m_param->rdLevel >= 2)<br>
> - compressInterCU_dist(ctu, cuGeom);<br>
> + compressInterCU_dist(ctu, cuGeom, m_qp[0][0], 0);<br>
> else if (m_param->rdLevel <= 4)<br>
> - compressInterCU_rd0_4(ctu, cuGeom);<br>
> + compressInterCU_rd0_4(ctu, cuGeom, m_qp[0][0], 0);<br>
> else<br>
> {<br>
> - compressInterCU_rd5_6(ctu, cuGeom, zOrder);<br>
> + compressInterCU_rd5_6(ctu, cuGeom, zOrder, m_qp[0][0], 0);<br>
> if (m_param->analysisMode == X265_ANALYSIS_SAVE && m_frame->m_analysisData.interData)<br>
> {<br>
> CUData *bestCU = &m_modeDepth[0].bestMode->cu;<br>
> @@ -223,7 +255,7 @@<br>
> }<br>
> }<br>
><br>
> -void Analysis::compressIntraCU(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t& zOrder)<br>
> +void Analysis::compressIntraCU(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t& zOrder, int32_t qp, uint32_t partIdx)<br>
> {<br>
> uint32_t depth = cuGeom.depth;<br>
> ModeDepth& md = m_modeDepth[depth];<br>
> @@ -232,6 +264,13 @@<br>
> bool mightSplit = !(cuGeom.flags & CUGeom::LEAF);<br>
> bool mightNotSplit = !(cuGeom.flags & CUGeom::SPLIT_MANDATORY);<br>
><br>
> + if (m_slice->m_pps->bUseDQP && depth && depth <= m_slice->m_pps->maxCuDQPDepth)<br>
> + {<br>
> + qp = m_qp[depth][partIdx];<br>
> + this->setQP(*m_slice, qp);<br>

if we configured quant QP here, we should be able to remove it
everywhere else, yes?

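Something like the sketch below, presumably (setQPforQuant(int) is a
hypothetical overload; today's Quant::setQPforQuant() takes a CUData and pulls
the QP out of it):

    if (m_slice->m_pps->bUseDQP && depth && depth <= m_slice->m_pps->maxCuDQPDepth)
    {
        qp = m_qp[depth][partIdx];
        this->setQP(*m_slice, qp);               /* search lambdas */
        qp = x265_clip3(QP_MIN, QP_MAX_SPEC, qp);
        m_quant.setQPforQuant(qp);               /* quant tables, set once here */
    }
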
> + qp = x265_clip3(QP_MIN, QP_MAX_SPEC, qp);<br>
> + }<br>

not sure I see the point of passing in qp here when all you really need
is: else qp = m_qp[0][0];

Also, isn't partIdx derivable from cuGeom? it would be best if we didn't
add yet another indexing scheme. I still think the zOrder argument is
probably unnecessary.

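For the record, the derivation in question would be something like this (an
assumption, not code from the patch): the z-order index of a CU among the CUs
at its own depth falls straight out of the geometry, so no extra partIdx
parameter (or qp parameter) should be needed.

    uint32_t partIdx = cuGeom.absPartIdx / cuGeom.numPartitions;
    qp = m_qp[cuGeom.depth][partIdx];

    /* e.g. 64x64 CTU, depth 2: numPartitions == 16, so absPartIdx == 80 maps
     * to m_qp[2][5], matching the i * 4 + j indexing used when the table is
     * filled at the start of CTU analysis */
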
> +<br>
> if (m_param->analysisMode == X265_ANALYSIS_LOAD)<br>
> {<br>
> uint8_t* reuseDepth = &m_reuseIntraDataCTU->depth[parentCTU.m_cuAddr * parentCTU.m_numPartitions];<br>
> @@ -241,11 +280,10 @@<br>
><br>
> if (mightNotSplit && depth == reuseDepth[zOrder] && zOrder == cuGeom.absPartIdx)<br>
> {<br>
> - m_quant.setQPforQuant(parentCTU);<br>
> -<br>
> PartSize size = (PartSize)reusePartSizes[zOrder];<br>
> Mode& mode = size == SIZE_2Nx2N ? md.pred[PRED_INTRA] : md.pred[PRED_INTRA_NxN];<br>
> - mode.cu.initSubCU(parentCTU, cuGeom);<br>
> + mode.cu.initSubCU(parentCTU, cuGeom, qp);<br>
> + m_quant.setQPforQuant(mode.cu);<br>
> checkIntra(mode, cuGeom, size, &reuseModes[zOrder], &reuseChromaModes[zOrder]);<br>
> checkBestMode(mode, depth);<br>
><br>
> @@ -262,15 +300,14 @@<br>
> }<br>
> else if (mightNotSplit)<br>
> {<br>
> - m_quant.setQPforQuant(parentCTU);<br>
> -<br>
> - md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> + m_quant.setQPforQuant(md.pred[PRED_INTRA].cu);<br>
> checkIntra(md.pred[PRED_INTRA], cuGeom, SIZE_2Nx2N, NULL, NULL);<br>
> checkBestMode(md.pred[PRED_INTRA], depth);<br>
><br>
> if (cuGeom.log2CUSize == 3 && m_slice->m_sps->quadtreeTULog2MinSize < 3)<br>
> {<br>
> - md.pred[PRED_INTRA_NxN].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA_NxN].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkIntra(md.pred[PRED_INTRA_NxN], cuGeom, SIZE_NxN, NULL, NULL);<br>
> checkBestMode(md.pred[PRED_INTRA_NxN], depth);<br>
> }<br>
> @@ -287,7 +324,7 @@<br>
> Mode* splitPred = &md.pred[PRED_SPLIT];<br>
> splitPred->initCosts();<br>
> CUData* splitCU = &splitPred->cu;<br>
> - splitCU->initSubCU(parentCTU, cuGeom);<br>
> + splitCU->initSubCU(parentCTU, cuGeom, qp);<br>
><br>
> uint32_t nextDepth = depth + 1;<br>
> ModeDepth& nd = m_modeDepth[nextDepth];<br>
> @@ -301,7 +338,7 @@<br>
> {<br>
> m_modeDepth[0].fencYuv.copyPartToYuv(nd.fencYuv, childGeom.absPartIdx);<br>
> m_rqt[nextDepth].cur.load(*nextContext);<br>
> - compressIntraCU(parentCTU, childGeom, zOrder);<br>
> + compressIntraCU(parentCTU, childGeom, zOrder, qp, partIdx * 4 + subPartIdx);<br>
><br>
> // Save best CU and pred data for this sub CU<br>
> splitCU->copyPartFrom(nd.bestMode->cu, childGeom, subPartIdx);<br>
> @@ -490,7 +527,7 @@<br>
> while (task >= 0);<br>
> }<br>
><br>
> -void Analysis::compressInterCU_dist(const CUData& parentCTU, const CUGeom& cuGeom)<br>
> +void Analysis::compressInterCU_dist(const CUData& parentCTU, const CUGeom& cuGeom, int32_t qp, uint32_t partIdx)<br>
> {<br>
> uint32_t depth = cuGeom.depth;<br>
> uint32_t cuAddr = parentCTU.m_cuAddr;<br>
> @@ -503,6 +540,13 @@<br>
><br>
> X265_CHECK(m_param->rdLevel >= 2, "compressInterCU_dist does not support RD 0 or 1\n");<br>
><br>
> + if (m_slice->m_pps->bUseDQP && depth && depth <= m_slice->m_pps->maxCuDQPDepth)<br>
> + {<br>
> + qp = m_qp[depth][partIdx];<br>
> + this->setQP(*m_slice, qp);<br>
> + qp = x265_clip3(QP_MIN, QP_MAX_SPEC, qp);<br>
> + }<br>
> +<br>
> if (mightNotSplit && depth >= minDepth)<br>
> {<br>
> int bTryAmp = m_slice->m_sps->maxAMPDepth > depth && (cuGeom.log2CUSize < 6 || m_param->rdLevel > 4);<br>
> @@ -511,28 +555,28 @@<br>
> PMODE pmode(*this, cuGeom);<br>
><br>
> /* Initialize all prediction CUs based on parentCTU */<br>
> - md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom);<br>
> - md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> + md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> if (bTryIntra)<br>
> {<br>
> - md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> if (cuGeom.log2CUSize == 3 && m_slice->m_sps->quadtreeTULog2MinSize < 3 && m_param->rdLevel >= 5)<br>
> - md.pred[PRED_INTRA_NxN].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA_NxN].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> pmode.modes[pmode.m_jobTotal++] = PRED_INTRA;<br>
> }<br>
> - md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_2Nx2N;<br>
> - md.pred[PRED_BIDIR].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_2Nx2N;<br>
> + md.pred[PRED_BIDIR].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> if (m_param->bEnableRectInter)<br>
> {<br>
> - md.pred[PRED_2NxN].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_2NxN;<br>
> - md.pred[PRED_Nx2N].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_Nx2N;<br>
> + md.pred[PRED_2NxN].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_2NxN;<br>
> + md.pred[PRED_Nx2N].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_Nx2N;<br>
> }<br>
> if (bTryAmp)<br>
> {<br>
> - md.pred[PRED_2NxnU].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_2NxnU;<br>
> - md.pred[PRED_2NxnD].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_2NxnD;<br>
> - md.pred[PRED_nLx2N].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_nLx2N;<br>
> - md.pred[PRED_nRx2N].cu.initSubCU(parentCTU, cuGeom); pmode.modes[pmode.m_jobTotal++] = PRED_nRx2N;<br>
> + md.pred[PRED_2NxnU].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_2NxnU;<br>
> + md.pred[PRED_2NxnD].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_2NxnD;<br>
> + md.pred[PRED_nLx2N].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_nLx2N;<br>
> + md.pred[PRED_nRx2N].cu.initSubCU(parentCTU, cuGeom, qp); pmode.modes[pmode.m_jobTotal++] = PRED_nRx2N;<br>
> }<br>
><br>
> pmode.tryBondPeers(*m_frame->m_encData->m_jobProvider, pmode.m_jobTotal);<br>
> @@ -662,7 +706,7 @@<br>
><br>
> if (md.bestMode->rdCost == MAX_INT64 && !bTryIntra)<br>
> {<br>
> - md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkIntraInInter(md.pred[PRED_INTRA], cuGeom);<br>
> encodeIntraInInter(md.pred[PRED_INTRA], cuGeom);<br>
> checkBestMode(md.pred[PRED_INTRA], depth);<br>
> @@ -688,7 +732,7 @@<br>
> Mode* splitPred = &md.pred[PRED_SPLIT];<br>
> splitPred->initCosts();<br>
> CUData* splitCU = &splitPred->cu;<br>
> - splitCU->initSubCU(parentCTU, cuGeom);<br>
> + splitCU->initSubCU(parentCTU, cuGeom, qp);<br>
><br>
> uint32_t nextDepth = depth + 1;<br>
> ModeDepth& nd = m_modeDepth[nextDepth];<br>
> @@ -702,7 +746,7 @@<br>
> {<br>
> m_modeDepth[0].fencYuv.copyPartToYuv(nd.fencYuv, childGeom.absPartIdx);<br>
> m_rqt[nextDepth].cur.load(*nextContext);<br>
> - compressInterCU_dist(parentCTU, childGeom);<br>
> + compressInterCU_dist(parentCTU, childGeom, qp, partIdx * 4 + subPartIdx);<br>
><br>
> // Save best CU and pred data for this sub CU<br>
> splitCU->copyPartFrom(nd.bestMode->cu, childGeom, subPartIdx);<br>
> @@ -741,7 +785,7 @@<br>
> md.bestMode->reconYuv.copyToPicYuv(*m_frame->m_reconPic, cuAddr, cuGeom.absPartIdx);<br>
> }<br>
><br>
> -void Analysis::compressInterCU_rd0_4(const CUData& parentCTU, const CUGeom& cuGeom)<br>
> +void Analysis::compressInterCU_rd0_4(const CUData& parentCTU, const CUGeom& cuGeom, int32_t qp, uint32_t partIdx)<br>
> {<br>
> uint32_t depth = cuGeom.depth;<br>
> uint32_t cuAddr = parentCTU.m_cuAddr;<br>
> @@ -752,13 +796,20 @@<br>
> bool mightNotSplit = !(cuGeom.flags & CUGeom::SPLIT_MANDATORY);<br>
> uint32_t minDepth = topSkipMinDepth(parentCTU, cuGeom);<br>
><br>
> + if (m_slice->m_pps->bUseDQP && depth && depth <= m_slice->m_pps->maxCuDQPDepth)<br>
> + {<br>
> + qp = m_qp[depth][partIdx];<br>
> + this->setQP(*m_slice, qp);<br>
> + qp = x265_clip3(QP_MIN, QP_MAX_SPEC, qp);<br>
> + }<br>
> +<br>
> if (mightNotSplit && depth >= minDepth)<br>
> {<br>
> bool bTryIntra = m_slice->m_sliceType != B_SLICE || m_param->bIntraInBFrames;<br>
><br>
> /* Compute Merge Cost */<br>
> - md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom);<br>
> - md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> + md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkMerge2Nx2N_rd0_4(md.pred[PRED_SKIP], md.pred[PRED_MERGE], cuGeom);<br>
><br>
> bool earlyskip = false;<br>
> @@ -767,24 +818,24 @@<br>
><br>
> if (!earlyskip)<br>
> {<br>
> - md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_2Nx2N], cuGeom, SIZE_2Nx2N);<br>
><br>
> if (m_slice->m_sliceType == B_SLICE)<br>
> {<br>
> - md.pred[PRED_BIDIR].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_BIDIR].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkBidir2Nx2N(md.pred[PRED_2Nx2N], md.pred[PRED_BIDIR], cuGeom);<br>
> }<br>
><br>
> Mode *bestInter = &md.pred[PRED_2Nx2N];<br>
> if (m_param->bEnableRectInter)<br>
> {<br>
> - md.pred[PRED_Nx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_Nx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_Nx2N], cuGeom, SIZE_Nx2N);<br>
> if (md.pred[PRED_Nx2N].sa8dCost < bestInter->sa8dCost)<br>
> bestInter = &md.pred[PRED_Nx2N];<br>
><br>
> - md.pred[PRED_2NxN].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2NxN].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_2NxN], cuGeom, SIZE_2NxN);<br>
> if (md.pred[PRED_2NxN].sa8dCost < bestInter->sa8dCost)<br>
> bestInter = &md.pred[PRED_2NxN];<br>
> @@ -806,24 +857,24 @@<br>
><br>
> if (bHor)<br>
> {<br>
> - md.pred[PRED_2NxnU].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2NxnU].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_2NxnU], cuGeom, SIZE_2NxnU);<br>
> if (md.pred[PRED_2NxnU].sa8dCost < bestInter->sa8dCost)<br>
> bestInter = &md.pred[PRED_2NxnU];<br>
><br>
> - md.pred[PRED_2NxnD].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2NxnD].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_2NxnD], cuGeom, SIZE_2NxnD);<br>
> if (md.pred[PRED_2NxnD].sa8dCost < bestInter->sa8dCost)<br>
> bestInter = &md.pred[PRED_2NxnD];<br>
> }<br>
> if (bVer)<br>
> {<br>
> - md.pred[PRED_nLx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_nLx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_nLx2N], cuGeom, SIZE_nLx2N);<br>
> if (md.pred[PRED_nLx2N].sa8dCost < bestInter->sa8dCost)<br>
> bestInter = &md.pred[PRED_nLx2N];<br>
><br>
> - md.pred[PRED_nRx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_nRx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd0_4(md.pred[PRED_nRx2N], cuGeom, SIZE_nRx2N);<br>
> if (md.pred[PRED_nRx2N].sa8dCost < bestInter->sa8dCost)<br>
> bestInter = &md.pred[PRED_nRx2N];<br>
> @@ -855,7 +906,7 @@<br>
> if ((bTryIntra && md.bestMode->cu.getQtRootCbf(0)) ||<br>
> md.bestMode->sa8dCost == MAX_INT64)<br>
> {<br>
> - md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkIntraInInter(md.pred[PRED_INTRA], cuGeom);<br>
> encodeIntraInInter(md.pred[PRED_INTRA], cuGeom);<br>
> checkBestMode(md.pred[PRED_INTRA], depth);<br>
> @@ -873,7 +924,7 @@<br>
><br>
> if (bTryIntra || md.bestMode->sa8dCost == MAX_INT64)<br>
> {<br>
> - md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkIntraInInter(md.pred[PRED_INTRA], cuGeom);<br>
> if (md.pred[PRED_INTRA].sa8dCost < md.bestMode->sa8dCost)<br>
> md.bestMode = &md.pred[PRED_INTRA];<br>
> @@ -960,7 +1011,7 @@<br>
> Mode* splitPred = &md.pred[PRED_SPLIT];<br>
> splitPred->initCosts();<br>
> CUData* splitCU = &splitPred->cu;<br>
> - splitCU->initSubCU(parentCTU, cuGeom);<br>
> + splitCU->initSubCU(parentCTU, cuGeom, qp);<br>
><br>
> uint32_t nextDepth = depth + 1;<br>
> ModeDepth& nd = m_modeDepth[nextDepth];<br>
> @@ -974,7 +1025,7 @@<br>
> {<br>
> m_modeDepth[0].fencYuv.copyPartToYuv(nd.fencYuv, childGeom.absPartIdx);<br>
> m_rqt[nextDepth].cur.load(*nextContext);<br>
> - compressInterCU_rd0_4(parentCTU, childGeom);<br>
> + compressInterCU_rd0_4(parentCTU, childGeom, qp, partIdx * 4 + subPartIdx);<br>
><br>
> // Save best CU and pred data for this sub CU<br>
> splitCU->copyPartFrom(nd.bestMode->cu, childGeom, subPartIdx);<br>
> @@ -1025,7 +1076,7 @@<br>
> md.bestMode->reconYuv.copyToPicYuv(*m_frame->m_reconPic, cuAddr, cuGeom.absPartIdx);<br>
> }<br>
><br>
> -void Analysis::compressInterCU_rd5_6(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t &zOrder)<br>
> +void Analysis::compressInterCU_rd5_6(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t &zOrder, int32_t qp, uint32_t partIdx)<br>
> {<br>
> uint32_t depth = cuGeom.depth;<br>
> ModeDepth& md = m_modeDepth[depth];<br>
> @@ -1034,14 +1085,21 @@<br>
> bool mightSplit = !(cuGeom.flags & CUGeom::LEAF);<br>
> bool mightNotSplit = !(cuGeom.flags & CUGeom::SPLIT_MANDATORY);<br>
><br>
> + if (m_slice->m_pps->bUseDQP && depth && depth <= m_slice->m_pps->maxCuDQPDepth)<br>
> + {<br>
> + qp = m_qp[depth][partIdx];<br>
> + this->setQP(*m_slice, qp);<br>
> + qp = x265_clip3(QP_MIN, QP_MAX_SPEC, qp);<br>
> + }<br>
> +<br>
> if (m_param->analysisMode == X265_ANALYSIS_LOAD)<br>
> {<br>
> uint8_t* reuseDepth = &m_reuseInterDataCTU->depth[parentCTU.m_cuAddr * parentCTU.m_numPartitions];<br>
> uint8_t* reuseModes = &m_reuseInterDataCTU->modes[parentCTU.m_cuAddr * parentCTU.m_numPartitions];<br>
> if (mightNotSplit && depth == reuseDepth[zOrder] && zOrder == cuGeom.absPartIdx && reuseModes[zOrder] == MODE_SKIP)<br>
> {<br>
> - md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom);<br>
> - md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> + md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkMerge2Nx2N_rd5_6(md.pred[PRED_SKIP], md.pred[PRED_MERGE], cuGeom, true);<br>
><br>
> if (m_bTryLossless)<br>
> @@ -1060,20 +1118,20 @@<br>
><br>
> if (mightNotSplit)<br>
> {<br>
> - md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom);<br>
> - md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_SKIP].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> + md.pred[PRED_MERGE].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkMerge2Nx2N_rd5_6(md.pred[PRED_SKIP], md.pred[PRED_MERGE], cuGeom, false);<br>
> bool earlySkip = m_param->bEnableEarlySkip && md.bestMode && !md.bestMode->cu.getQtRootCbf(0);<br>
><br>
> if (!earlySkip)<br>
> {<br>
> - md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2Nx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_2Nx2N], cuGeom, SIZE_2Nx2N, false);<br>
> checkBestMode(md.pred[PRED_2Nx2N], cuGeom.depth);<br>
><br>
> if (m_slice->m_sliceType == B_SLICE)<br>
> {<br>
> - md.pred[PRED_BIDIR].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_BIDIR].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkBidir2Nx2N(md.pred[PRED_2Nx2N], md.pred[PRED_BIDIR], cuGeom);<br>
> if (md.pred[PRED_BIDIR].sa8dCost < MAX_INT64)<br>
> {<br>
> @@ -1084,11 +1142,11 @@<br>
><br>
> if (m_param->bEnableRectInter)<br>
> {<br>
> - md.pred[PRED_Nx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_Nx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_Nx2N], cuGeom, SIZE_Nx2N, false);<br>
> checkBestMode(md.pred[PRED_Nx2N], cuGeom.depth);<br>
><br>
> - md.pred[PRED_2NxN].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2NxN].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_2NxN], cuGeom, SIZE_2NxN, false);<br>
> checkBestMode(md.pred[PRED_2NxN], cuGeom.depth);<br>
> }<br>
> @@ -1111,21 +1169,21 @@<br>
><br>
> if (bHor)<br>
> {<br>
> - md.pred[PRED_2NxnU].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2NxnU].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_2NxnU], cuGeom, SIZE_2NxnU, bMergeOnly);<br>
> checkBestMode(md.pred[PRED_2NxnU], cuGeom.depth);<br>
><br>
> - md.pred[PRED_2NxnD].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_2NxnD].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_2NxnD], cuGeom, SIZE_2NxnD, bMergeOnly);<br>
> checkBestMode(md.pred[PRED_2NxnD], cuGeom.depth);<br>
> }<br>
> if (bVer)<br>
> {<br>
> - md.pred[PRED_nLx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_nLx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_nLx2N], cuGeom, SIZE_nLx2N, bMergeOnly);<br>
> checkBestMode(md.pred[PRED_nLx2N], cuGeom.depth);<br>
><br>
> - md.pred[PRED_nRx2N].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_nRx2N].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkInter_rd5_6(md.pred[PRED_nRx2N], cuGeom, SIZE_nRx2N, bMergeOnly);<br>
> checkBestMode(md.pred[PRED_nRx2N], cuGeom.depth);<br>
> }<br>
> @@ -1133,13 +1191,13 @@<br>
><br>
> if (m_slice->m_sliceType != B_SLICE || m_param->bIntraInBFrames)<br>
> {<br>
> - md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkIntra(md.pred[PRED_INTRA], cuGeom, SIZE_2Nx2N, NULL, NULL);<br>
> checkBestMode(md.pred[PRED_INTRA], depth);<br>
><br>
> if (cuGeom.log2CUSize == 3 && m_slice->m_sps->quadtreeTULog2MinSize < 3)<br>
> {<br>
> - md.pred[PRED_INTRA_NxN].cu.initSubCU(parentCTU, cuGeom);<br>
> + md.pred[PRED_INTRA_NxN].cu.initSubCU(parentCTU, cuGeom, qp);<br>
> checkIntra(md.pred[PRED_INTRA_NxN], cuGeom, SIZE_NxN, NULL, NULL);<br>
> checkBestMode(md.pred[PRED_INTRA_NxN], depth);<br>
> }<br>
> @@ -1159,7 +1217,7 @@<br>
> Mode* splitPred = &md.pred[PRED_SPLIT];<br>
> splitPred->initCosts();<br>
> CUData* splitCU = &splitPred->cu;<br>
> - splitCU->initSubCU(parentCTU, cuGeom);<br>
> + splitCU->initSubCU(parentCTU, cuGeom, qp);<br>
><br>
> uint32_t nextDepth = depth + 1;<br>
> ModeDepth& nd = m_modeDepth[nextDepth];<br>
> @@ -1173,7 +1231,7 @@<br>
> {<br>
> m_modeDepth[0].fencYuv.copyPartToYuv(nd.fencYuv, childGeom.absPartIdx);<br>
> m_rqt[nextDepth].cur.load(*nextContext);<br>
> - compressInterCU_rd5_6(parentCTU, childGeom, zOrder);<br>
> + compressInterCU_rd5_6(parentCTU, childGeom, zOrder, qp, partIdx * 4 + subPartIdx);<br>
><br>
> // Save best CU and pred data for this sub CU<br>
> splitCU->copyPartFrom(nd.bestMode->cu, childGeom, subPartIdx);<br>
> @@ -1913,7 +1971,7 @@<br>
> return false;<br>
> }<br>
><br>
> -int Analysis::calculateQpforCuSize(CUData& ctu, const CUGeom& cuGeom)<br>
> +int Analysis::calculateQpforCuSize(const CUData& ctu, const CUGeom& cuGeom)<br>
> {<br>
> uint32_t ctuAddr = ctu.m_cuAddr;<br>
> FrameData& curEncData = *m_frame->m_encData;<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/encoder/analysis.h<br>
> --- a/source/encoder/analysis.h Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/encoder/analysis.h Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -90,6 +90,7 @@<br>
> void processPmode(PMODE& pmode, Analysis& slave);<br>
><br>
> ModeDepth m_modeDepth[NUM_CU_DEPTH];<br>
> + int* m_qp[NUM_CU_DEPTH];<br>
> bool m_bTryLossless;<br>
> bool m_bChromaSa8d;<br>
><br>
> @@ -109,12 +110,12 @@<br>
> uint32_t* m_reuseBestMergeCand;<br>
><br>
> /* full analysis for an I-slice CU */<br>
> - void compressIntraCU(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t &zOrder);<br>
> + void compressIntraCU(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t &zOrder, int32_t qpDepth, uint32_t partIdx);<br>
><br>
> /* full analysis for a P or B slice CU */<br>
> - void compressInterCU_dist(const CUData& parentCTU, const CUGeom& cuGeom);<br>
> - void compressInterCU_rd0_4(const CUData& parentCTU, const CUGeom& cuGeom);<br>
> - void compressInterCU_rd5_6(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t &zOrder);<br>
> + void compressInterCU_dist(const CUData& parentCTU, const CUGeom& cuGeom, int32_t qpDepth, uint32_t partIdx);<br>
> + void compressInterCU_rd0_4(const CUData& parentCTU, const CUGeom& cuGeom, int32_t qpDepth, uint32_t partIdx);<br>
> + void compressInterCU_rd5_6(const CUData& parentCTU, const CUGeom& cuGeom, uint32_t &zOrder, int32_t qpDepth, uint32_t partIdx);<br>
><br>
> /* measure merge and skip */<br>
> void checkMerge2Nx2N_rd0_4(Mode& skip, Mode& merge, const CUGeom& cuGeom);<br>
> @@ -139,7 +140,7 @@<br>
> /* generate residual and recon pixels for an entire CTU recursively (RD0) */<br>
> void encodeResidue(const CUData& parentCTU, const CUGeom& cuGeom);<br>
><br>
> - int calculateQpforCuSize(CUData& ctu, const CUGeom& cuGeom);<br>
> + int calculateQpforCuSize(const CUData& ctu, const CUGeom& cuGeom);<br>
><br>
> /* check whether current mode is the new best */<br>
> inline void checkBestMode(Mode& mode, uint32_t depth)<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/encoder/encoder.cpp<br>
> --- a/source/encoder/encoder.cpp Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/encoder/encoder.cpp Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -1557,15 +1557,12 @@<br>
> bool bIsVbv = m_param->rc.vbvBufferSize > 0 && m_param->rc.vbvMaxBitrate > 0;<br>
><br>
> if (!m_param->bLossless && (m_param->rc.aqMode || bIsVbv))<br>
> - {<br>
> pps->bUseDQP = true;<br>
> - pps->maxCuDQPDepth = 0; /* TODO: make configurable? */<br>
> - }<br>
> else<br>
> - {<br>
> pps->bUseDQP = false;<br>
> - pps->maxCuDQPDepth = 0;<br>
> - }<br>
> +<br>
> + pps->maxCuDQPDepth = g_log2Size[m_param->maxCUSize] - g_log2Size[m_param->rc.QGSize];<br>
> + X265_CHECK(pps->maxCuDQPDepth <= 2, "max CU DQP depth cannot be greater than 2");<br>
><br>
> pps->chromaQpOffset[0] = m_param->cbQpOffset;<br>
> pps->chromaQpOffset[1] = m_param->crQpOffset;<br>
> @@ -1788,6 +1785,22 @@<br>
> p->analysisMode = X265_ANALYSIS_OFF;<br>
> x265_log(p, X265_LOG_WARNING, "Analysis save and load mode not supported for distributed mode analysis\n");<br>
> }<br>
> +<br>
> + bool bIsVbv = m_param->rc.vbvBufferSize > 0 && m_param->rc.vbvMaxBitrate > 0;<br>
> + if (!m_param->bLossless && (m_param->rc.aqMode || bIsVbv))<br>
> + {<br>
> + if (p->rc.QGSize < X265_MAX(16, p->minCUSize))<br>
> + {<br>
> + p->rc.QGSize = X265_MAX(16, p->minCUSize);<br>
> + x265_log(p, X265_LOG_WARNING, "QGSize should be greater than or equal to 16 and minCUSize, setting QGSize = %d \n", p->rc.QGSize);<br>

trailing white-space

> + }<br>
> +<br>
> + if (p->rc.QGSize > p->maxCUSize)<br>
> + {<br>
> + p->rc.QGSize = p->maxCUSize;<br>
> + x265_log(p, X265_LOG_WARNING, "QGSize should be less than or equal to maxCUSize, setting QGSize = %d \n", p->rc.QGSize);<br>
> + }<br>
> + }<br>
> }<br>
><br>
> void Encoder::allocAnalysis(x265_analysis_data* analysis)<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/encoder/frameencoder.cpp<br>
> --- a/source/encoder/frameencoder.cpp Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/encoder/frameencoder.cpp Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -852,9 +852,7 @@<br>
> if (m_param->rc.aqMode || bIsVbv)<br>
> {<br>
> int qp = calcQpForCu(cuAddr, curEncData.m_cuStat[cuAddr].baseQp);<br>
> - tld.analysis.setQP(*slice, qp);<br>
> qp = x265_clip3(QP_MIN, QP_MAX_SPEC, qp);<br>
> - ctu->setQPSubParts((int8_t)qp, 0, 0);<br>
> curEncData.m_rowStat[row].sumQpAq += qp;<br>
> }<br>
> else<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/x265.h<br>
> --- a/source/x265.h Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/x265.h Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -988,6 +988,12 @@<br>
> /* Enable stricter conditions to check bitrate deviations in CBR mode. May compromise<br>
> * quality to maintain bitrate adherence */<br>
> int bStrictCbr;<br>
> +<br>
> + /* Enable adaptive quantization at CU granularity. This parameter specifies<br>
> + * the minimum CU size at which QP can be adjusted, i.e. Quantization Group<br>
> + * (QG) size. Allowed values are 64, 32, 16 provided it falls within the<br>
> + * inclusuve range [maxCUSize, minCUSize]. Experimental, default: maxCUSize*/<br>
> + uint32_t QGSize;<br>

in our camelCase style this would be qgSize

> } rc;<br>
><br>
> /*== Video Usability Information ==*/<br>
> diff -r 335c728bbd62 -r d6e059bd8a9c source/x265cli.h<br>
> --- a/source/x265cli.h Fri Apr 03 14:27:32 2015 -0500<br>
> +++ b/source/x265cli.h Mon Mar 23 14:23:42 2015 +0530<br>
> @@ -205,6 +205,7 @@<br>
> { "strict-cbr", no_argument, NULL, 0 },<br>
> { "temporal-layers", no_argument, NULL, 0 },<br>
> { "no-temporal-layers", no_argument, NULL, 0 },<br>
> + { "qg-size", required_argument, NULL, 0 },<br>

w/s

> { 0, 0, 0, 0 },<br>
> { 0, 0, 0, 0 },<br>
> { 0, 0, 0, 0 },<br>
> @@ -352,6 +353,7 @@<br>
> H0(" --analysis-file <filename> Specify file name used for either dumping or reading analysis data.\n");<br>
> H0(" --aq-mode <integer> Mode for Adaptive Quantization - 0:none 1:uniform AQ 2:auto variance. Default %d\n", param->rc.aqMode);<br>
> H0(" --aq-strength <float> Reduces blocking and blurring in flat and textured areas (0 to 3.0). Default %.2f\n", param->rc.aqStrength);<br>
> + H0(" --qg-size <float> Specifies the size of the quantization group (64, 32, 16). Default %d\n", param->rc.QGSize);<br>

float? alignment

> H0(" --[no-]cutree Enable cutree for Adaptive Quantization. Default %s\n", OPT(param->rc.cuTree));<br>
> H1(" --ipratio <float> QP factor between I and P. Default %.2f\n", param->rc.ipFactor);<br>
> H1(" --pbratio <float> QP factor between P and B. Default %.2f\n", param->rc.pbFactor);<br>

docs, X265_BUILD

--
Steve Borho

_______________________________________________
x265-devel mailing list
x265-devel@videolan.org
https://mailman.videolan.org/listinfo/x265-devel