[x265] Question about NUMA and core/thread use

Michael Lackner michael.lackner at unileoben.ac.at
Thu May 18 10:14:45 CEST 2017


Hello again,

1.) I have another question about x265's NUMA support. After looking at the behavior of
--pools and at the documentation, it became clear that x265 doesn't actually create
individual thread pools for each NUMA node automatically.

So on my 2-socket, 32-thread server, I just did it manually with '--pools="16,16"', and
now it spawns two pools with 16 threads each instead of one pool with 32 threads.

Problem: The overall performance drops by ~15%, and watching the behavior, one of the
two pools often drops to 0-10% CPU usage for a few seconds, then starts loading the CPU
again, then drops again for several seconds, and so on. It seems as if the second pool
(and *only* the second pool; it never happens to the first) is waiting for something.

This doesn't happen when spawning just one pool with 32 threads (x265's default behavior).
Load stays near maximum all the time, and the encode is faster.

Other parallelization options, identical for all cases: '--wpp --pmode --pme --slices 4
--lookahead-slices 4 --ctu 16 --max-tu-size 16 --qg-size 16'.
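
For reference, the two runs differ only in the --pools argument; the invocations looked
roughly like this (input/output names, --input-res and --fps are placeholders, the real
source being the 8K content mentioned below):

# default: a single 32-thread pool spanning both nodes
x265 --input input.yuv --input-res 7680x4320 --fps 24 \
     --wpp --pmode --pme --slices 4 --lookahead-slices 4 \
     --ctu 16 --max-tu-size 16 --qg-size 16 -o out.hevc

# manual: one 16-thread pool per NUMA node
x265 --pools "16,16" --input input.yuv --input-res 7680x4320 --fps 24 \
     --wpp --pmode --pme --slices 4 --lookahead-slices 4 \
     --ctu 16 --max-tu-size 16 --qg-size 16 -o out.hevc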

Content is 8K again. Yeah, it works with '--pmode --pme' now, somehow; please don't ask me
why, I have no idea...

When searching the web I found this discussion, but it's pretty old:

http://x265-devel.videolan.narkive.com/sbQflm3D/x265-cpu-utilization-very-low-on-a-multi-numa-sockets-server

They are talking about looking at 'DecideWait (ms)'. I've enabled frame stats with '--csv
stats.txt --csv-log-level 2' to take a look at potential issues.

Ran:
awk -F',' '{ print $33 "," $34 "," $36 "," $38 }' stats.txt | sed 's/^\s//'

This extracts all the "Wait" and "Stall Time" values; maybe this is relevant? I've
attached the resulting files: one for a run with 2x16 NUMA thread pools and one for a run
with a single 1x32 thread pool spanning all CPUs across both nodes, for a total of 117
frames.
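
To compare the two runs at a glance, the per-column means can be computed with awk. A
minimal sketch, assuming the two extracted files are named pools-2x16.csv and
pools-1x32.csv (the names are placeholders) and still carry the header line shown in the
attachments:

for f in pools-2x16.csv pools-1x32.csv; do
  echo "$f:"
  # skip the header, ignore the empty trailing record, average each column
  awk -F',' 'NR > 1 && $4 != "" { for (i = 1; i <= 4; i++) sum[i] += $i; n++ }
    END { if (n) printf "  DecideWait %.1f | Row0Wait %.1f | RefWait %.1f | Stall %.1f (mean ms over %d frames)\n",
      sum[1]/n, sum[2]/n, sum[3]/n, sum[4]/n, n }' "$f"
done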


So does actually using NUMA hurt performance here? Should I not create thread pools for
individual NUMA nodes after all? Or are there options I should specify to make this
perform better than the flat-topology / round-robin way of scheduling threads?
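
If per-node pools stay slower, one cross-check I could imagine (just an idea, not
something from the docs) would be to take x265's NUMA handling out of the picture
entirely and pin one independent instance to each node with numactl, so every encode
keeps its threads and memory local:

# hypothetical experiment: one encode per node, threads and memory pinned locally
numactl --cpunodebind=0 --membind=0 x265 [options] input0.y4m -o out0.hevc &
numactl --cpunodebind=1 --membind=1 x265 [options] input1.y4m -o out1.hevc &
wait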


2.) Also, one more thing that's scaring me: the documentation for '--pools' says:

"[...] In the case that the total number of threads is more than the maximum size that
ATOMIC operations can handle (32 for 32-bit compiles, and 64 for 64-bit compiles),
multiple thread pools may be spawned subject to the performance constraint described
above. [...]"

What does "may be spawned" mean? Like "the user may do this by specifying --pools" or
"x265 may or may not do this for the user automatically"?!


Thanks for your time, and sorry if I'm asking something stupid! ;)


On 05/10/2017 09:37 AM, Michael Lackner wrote:
> On 05/10/2017 09:26 AM, Pradeep Ramachandran wrote:
>> On Wed, May 10, 2017 at 12:32 PM, Michael Lackner <
>> michael.lackner at unileoben.ac.at> wrote:
>>
>>> On 05/10/2017 08:24 AM, Pradeep Ramachandran wrote:
>>>> On Wed, May 10, 2017 at 11:12 AM, Michael Lackner <
>>>> michael.lackner at unileoben.ac.at> wrote:
>>>>
>>>>> Thank you very much for your input!
>>>>>
>>>>> Since --pmode and --pme seem to break NUMA support, I disabled them. I
>>>>> simply cannot tell
>>>>> users that they have to switch off NUMA in their UEFI firmware just for
>>>>> this one
>>>>> application. There may be a lot of situations where this is just not
>>>>> doable.
>>>>>
>>>>> If there is a way to make --pmode --pme work together with x265's NUMA
>>>>> support, I'd use it, but I don't know how.
>>>>
>>>> Could you please elaborate more here? It seems to work OK for us. I've
>>>> tried on CentOS and Win Server 2017 dual-socket systems and I see all
>>>> sockets being used.
>>>
>>> It's like this: x265 does *say* it's using 32 threads in two NUMA pools.
>>> That's just how
>>> it should be. But it behaves very weirdly, almost never loading more than
>>> two logical
>>> cores. FPS are extremely low, so it's really slow.
>>>
>>> CPU load stays at 190-200%, sometimes briefly dropping to 140-150%, where
>>> it should be in the range of 2800-3200%. As soon as I remove --pmode --pme,
>>> the system is loaded very well! It almost never drops below the 3000%
>>> (30 cores) mark then.
>>>
>>> It also works *with* --pmode --pme, but only if NUMA is disabled at the
>>> firmware level, showing only a classic, flat topology to the OS.
>>>
>>> That behavior can be seen on CentOS 7.3 Linux, having compiled x265 2.4+2
>>> with GCC 4.8.5 and yasm 1.3.0. The machine is an HP ProLiant DL360 Gen9
>>> with two Intel Xeon E5-2620 CPUs.
>>>
>>> Removing --pmode --pme was suggested by Mario *LigH* Rohkrämer earlier in
>>> this thread.
>>>
>>
>> This seems to be something specific to your configuration. I just tried an
>> identical experiment on two systems that I have, dual-socket E5-2699 v4s
>> (88 threads spread across two sockets) running CentOS 6.8 and CentOS 7.2. I
>> compiled x265 with gcc version 4.4 and am able to see utilization pick up
>> closer to 5000% (monitored using htop) when --pme and --pmode are enabled on
>> the command line; without these options, the utilization is closer to 3300%.
> 
> Hmm, crap. That would mean something's wrong with that HP server? But it's still very
> strange. I've been able to reproduce this every time, even after a fresh reboot. Even if
> it was a firmware bug, why would it trigger only when --pmode --pme are used...
> 
> Maybe I should try a newer Linux kernel and not the stock one CentOS 7.3 comes with?!
> 
> Any way to debug this and see what's going wrong? I have no idea what to do...
> 
>>> Here is my topology when NUMA is enabled (pretty simple):
>>>
>>> # numactl -H
>>> available: 2 nodes (0-1)
>>> node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
>>> node 0 size: 32638 MB
>>> node 0 free: 266 MB
>>> node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
>>> node 1 size: 32768 MB
>>> node 1 free: 82 MB
>>> node distances:
>>> node   0   1
>>>   0:  10  21
>>>   1:  21  10
>>>
>>> Thanks!
>>>
>>
>> You seem to have very little free memory in each node, which might be making
>> you go to disk and therefore affecting performance. I recommend freeing some
>> memory before running x265 to see if that helps.
> 
> Nono, there was no swapping. It's just that the machine is currently in use
> transcoding 4K stuff, so some memory is actively allocated and the rest is
> filled up with file buffers; that's why numactl is showing this. See here:
> 
> # free
>               total        used        free      shared  buff/cache   available
> Mem:       65672976     7371344     5737328       17544    52564304    57687116
> Swap:      97224700           0    97224700
> 
> For my test runs, I even rebooted the machine and ran the test immediately
> afterwards. There is nearly nothing on that machine, it's fresh: no servers,
> no X11, just the base system with ffmpeg and the x265 CLI.
> 
>>>>> Ah yes, I've also found that 8K does indeed help a ton. With 4K and
>>>>> similar settings, I'm able to load 16-25 CPUs currently, sometimes
>>>>> briefly 30. With 8K, load is much higher.
>>>>>
>>>>> Maybe you can advise how to maximize parallelization / loading as many
>>>>> CPUs as possible
>>>>> without breaking NUMA support on both Windows and Linux.
>>>>>
>>>>> I'm saying this, because my benchmarking project is targeting multiple
>>>>> operating systems,
>>>>> it currently works on:
>>>>>   * Windows NT 5.2 & 6.0 (wo. NUMA)
>>>>>   * Windows NT 6.1 - 10.0 (w. NUMA)
>>>>>   * MacOS X (wo. NUMA)
>>>>>   * Linux (w. and wo. NUMA)
>>>>>   * FreeBSD, OpenBSD, NetBSD and DragonFly BSD UNIX (wo. NUMA)
>>>>>   * Solaris (wo. NUMA)
>>>>>   * Haiku OS (wo. NUMA)
>>>>>
>>>>> Thank you very much!
>>>>>
>>>>> Best,
>>>>> Michael
>>>>>
>>>>> On 05/10/2017 07:21 AM, Pradeep Ramachandran wrote:
>>>>>> Michael,
>>>>>> Adding --lookahead-threads 2 statically allocates two threads for
>>>>>> lookahead. Therefore, the worker threads launched to work on WPP will
>>>>>> be 32 - 2 = 30 in count. We've found some situations in which statically
>>>>>> allocating threads for lookahead was useful and therefore decided to
>>>>>> expose it to the user. Please see if this helps your use case and enable
>>>>>> appropriately.
>>>>>>
>>>>>> Now as far as scaling up for 8K goes, a single instance of x265 scales
>>>>>> up well to 25-30 threads depending on the preset you're running. We've
>>>>>> found pmode and pme help performance considerably on some Broadwell
>>>>>> server systems, but again, that is also dependent on content. I would
>>>>>> encourage you to play with those settings and see if they help your use
>>>>>> case. Beyond these thread counts, one instance of x265 may not be
>>>>>> beneficial for you.
>>>>>>
>>>>>> Pradeep.
>>>>>>
>>>>>> On Fri, May 5, 2017 at 3:26 PM, Michael Lackner <
>>>>>> michael.lackner at unileoben.ac.at> wrote:
>>>>>>
>>>>>>> I found the reason for "why did x265 use 30 threads and not 32, when I
>>>>>>> have 32 CPUs".
>>>>>>>
>>>>>>> Actually, it was (once again) my own fault. Thinking I knew better
>>>>>>> than x265, I spawned two lookahead threads ('--lookahead-threads 2')
>>>>>>> starting with 32 logical CPUs.
>>>>>>>
>>>>>>> It seems what x265 does is reserve two dedicated CPUs for this, but
>>>>>>> then it couldn't permanently saturate them.
>>>>>>>
>>>>>>> I still don't know when I should be starting with that stuff for 8K
>>>>>>> content. 64 CPUs? 256
>>>>>>> CPUs? Or should I leave everything to x265? My goal was to be able to
>>>>>>> fully load as many
>>>>>>> CPUs as possible in the future.
>>>>>>>
>>>>>>> In any case, the culprit was myself.
>>>>>>>
>>>>>>> On 05/04/2017 11:18 AM, Mario *LigH* Rohkrämer wrote:
>>>>>>>> Am 04.05.2017, 10:58 Uhr, schrieb Michael Lackner <
>>>>>>> michael.lackner at unileoben.ac.at>:
>>>>>>>>
>>>>>>>>> Still wondering why not 32, but ok.
>>>>>>>>
>>>>>>>> x265 will calculate how many threads it will really need to utilize
>>>>>>>> the WPP and other parallelizable steps, in relation to the frame
>>>>>>>> dimensions and the complexity. It may not *need* more than 30 threads,
>>>>>>>> and would not have any task to give to two more. Possibly. The
>>>>>>>> developers know better...
>>>>>>>
>>>>>>> --
>>>>>>> Michael Lackner
>>>>>>> Lehrstuhl für Informationstechnologie (CiT)
>>>>>>> Montanuniversität Leoben
>>>>>>> Tel.: +43 (0)3842/402-1505 | Mail: michael.lackner at unileoben.ac.at
>>>>>>> Fax.: +43 (0)3842/402-1502 | Web: http://institute.unileoben.ac.at/infotech
>>>>>>> _______________________________________________
>>>>>>> x265-devel mailing list
>>>>>>> x265-devel at videolan.org
>>>>>>> https://mailman.videolan.org/listinfo/x265-devel
>>>>>
>>>>> --
>>>>> Michael Lackner
>>>>> Lehrstuhl für Informationstechnologie (CiT)
>>>>> Montanuniversität Leoben
>>>>> Tel.: +43 (0)3842/402-1505 | Mail: michael.lackner at unileoben.ac.at
>>>>> Fax.: +43 (0)3842/402-1502 | Web: http://institute.unileoben.ac.at/infotech
>>>>> _______________________________________________
>>>>> x265-devel mailing list
>>>>> x265-devel at videolan.org
>>>>> https://mailman.videolan.org/listinfo/x265-devel
>>>>>
>>>
>>> --
>>> Michael Lackner
>>> Lehrstuhl für Informationstechnologie (CiT)
>>> Montanuniversität Leoben
>>> Tel.: +43 (0)3842/402-1505 | Mail: michael.lackner at unileoben.ac.at
>>> Fax.: +43 (0)3842/402-1502 | Web: http://institute.unileoben.ac.at/infotech
>>> _______________________________________________
>>> x265-devel mailing list
>>> x265-devel at videolan.org
>>> https://mailman.videolan.org/listinfo/x265-devel
>>>


-- 
Michael Lackner
Lehrstuhl für Informationstechnologie (CiT)
Montanuniversität Leoben
Tel.: +43 (0)3842/402-1505 | Mail: michael.lackner at unileoben.ac.at
Fax.: +43 (0)3842/402-1502 | Web: http://institute.unileoben.ac.at/infotech
-------------- next part --------------
DecideWait (ms), Row0Wait (ms), Ref Wait Wall (ms), Stall Time (ms)
1722.2, 0.6, 0.8, 0.0
2026.2, 3301.8, 0.6, 0.0
2224.3, 7351.4, 2263.3, 1934.4
2487.7, 12691.3, 8032.9, 83251.6
5594.4, 10236.2, 7381.2, 58.6
5814.4, 11783.5, 6727.0, 7827.1
6026.0, 23209.0, 12818.9, 10595.8
9083.9, 20384.5, 12585.6, 589.7
9263.6, 22826.7, 11283.8, 269.9
9439.1, 25263.9, 10557.7, 12771.1
12878.4, 41318.4, 0.2, 0.0
13059.8, 47372.1, 1724.3, 0.0
13254.8, 53665.9, 1870.4, 14459.0
3095.0, 1.2, 0.4, 0.0
97.7, 7492.5, 2062.8, 343.1
96.6, 7349.0, 2065.0, 0.0
360.1, 0.5, 2.9, 0.0
4.9, 8865.4, 2707.2, 6353.2
93.7, 9058.3, 18722.8, 118006.5
92.2, 1.1, 18541.0, 688.7
5.4, 1730.6, 18393.0, 1198.5
89.5, 2584.6, 18075.4, 27654.9
4.9, 1.0, 0.2, 0.0
15.3, 9452.2, 2900.1, 6.4
4.8, 9541.6, 2052.6, 21466.6
2659.6, 1.0, 0.2, 0.0
5.1, 9555.2, 3038.5, 1495.1
3.2, 10102.8, 2060.8, 20517.0
2850.0, 560.5, 0.2, 0.0
5.1, 9333.2, 2854.4, 0.0
2.9, 9681.2, 2034.1, 0.0
3.1, 0.4, 1.3, 0.0
5.7, 8997.6, 3093.7, 4560.1
3.9, 9410.6, 12732.0, 20922.1
4.4, 1.1, 12019.4, 0.0
5.2, 1703.4, 12060.9, 2184.7
3.3, 5139.9, 2706.3, 20294.8
4.7, 1.0, 0.2, 0.0
8.8, 8844.6, 3124.2, 104.5
4.6, 9602.2, 2196.9, 18850.0
2792.1, 1.1, 0.2, 0.0
5.0, 8481.9, 2676.9, 9.6
3.2, 9165.1, 2266.4, 106439.1
2822.9, 466.7, 0.1, 0.0
6.8, 8815.9, 2837.8, 6558.0
2.8, 4927.5, 18965.7, 16929.7
8.7, 581.8, 18428.1, 0.0
5.6, 1571.3, 18011.9, 0.0
11.7, 3866.0, 15981.4, 17638.8
3113.8, 618.9, 0.1, 0.0
8.7, 7989.2, 2615.0, 0.0
4.1, 6738.6, 2161.0, 18244.7
2759.0, 2.3, 0.2, 0.0
5.7, 7772.8, 2622.5, 785.9
3.3, 6486.9, 2468.0, 0.0
265.2, 0.6, 3.0, 0.0
6.5, 7854.6, 2869.4, 229.7
12.0, 6292.1, 2605.2, 113968.7
4.2, 576.7, 0.2, 0.0
4.9, 7377.6, 3134.8, 0.0
11.2, 6467.4, 2317.5, 17231.5
6.6, 4.2, 0.2, 0.0
5.6, 6521.4, 4065.5, 0.0
3.1, 7878.4, 2204.6, 18805.2
2664.9, 515.9, 0.2, 0.0
5.9, 7859.1, 2980.6, 0.0
3.4, 7964.1, 1981.2, 17702.7
2695.8, 1.0, 0.2, 0.0
16.9, 8415.8, 2653.2, 395.1
3.0, 7337.9, 1957.4, 0.0
190.0, 0.6, 1.9, 0.0
4.7, 10472.7, 2939.7, 488.0
2.9, 8369.9, 2660.7, 22232.8
4.2, 459.2, 0.1, 0.0
5.2, 10066.2, 3117.2, 1198.7
4.2, 8563.8, 2467.0, 43457.0
2904.2, 615.3, 0.2, 0.0
5.4, 9727.6, 3737.4, 0.0
3.5, 8487.7, 2564.3, 23897.0
4.8, 604.6, 0.2, 0.0
5.9, 9756.7, 3390.6, 380.7
3.2, 8285.4, 2653.1, 132773.6
2389.8, 600.9, 0.2, 0.0
9.4, 9735.0, 3011.3, 517.2
3.4, 8453.8, 2338.5, 23665.1
2828.0, 600.6, 0.2, 0.0
4.8, 10284.9, 3581.8, 101.1
3.4, 8163.9, 2404.8, 19634.5
3249.1, 684.8, 0.2, 0.0
9.9, 10161.7, 3526.1, 38.0
3.2, 8445.7, 2304.0, 21986.5
2906.4, 581.3, 0.1, 0.0
8.9, 9870.1, 3306.1, 953.9
7.4, 8063.5, 2559.5, 0.0
77.7, 414.5, 1.8, 0.0
12.3, 9299.4, 3239.3, 401.8
6.5, 8615.7, 2395.2, 117798.9
6.2, 1.0, 34.2, 0.0
12.7, 9136.8, 2719.5, 0.0
5.2, 11219.3, 2961.5, 931.0
11.5, 8940.8, 2430.9, 23958.4
3048.2, 1.1, 0.3, 0.0
10.5, 8808.6, 5078.7, 0.0
8.1, 9274.3, 2278.4, 13911.9
2476.4, 5.2, 0.0, 0.0
1730.3, 1.0, 775.7, 0.0
10.2, 5330.0, 1644.8, 662.9
12.4, 5099.5, 1935.0, 9297.0
3129.3, 627.7, 0.1, 0.0
3.5, 389.3, 2.1, 0.0
2.9, 6070.6, 2019.3, 21278.3
8.8, 1.0, 0.2, 0.0
5.5, 9047.0, 2944.1, 0.0
7.4, 8707.3, 1844.9, 20570.2
2716.2, 4.7, 0.2, 0.0
4.5, 9643.3, 2992.1, 588.8
,,,

-------------- next part --------------
DecideWait (ms), Row0Wait (ms), Ref Wait Wall (ms), Stall Time (ms)
1564.0, 0.8, 1.4, 0.0
1785.6, 6276.8, 2.1, 0.0
1976.9, 14696.8, 2359.2, 19811.6
2140.9, 22631.9, 23508.5, 85077.8
2888.2, 21884.8, 23508.4, 8676.2
3079.4, 24395.9, 22594.7, 15199.8
3249.4, 28430.7, 30475.2, 33136.8
3882.5, 28459.6, 29813.0, 17427.6
4071.3, 32628.3, 30035.1, 18652.2
4226.4, 35403.8, 38478.5, 15284.4
4884.8, 43535.8, 29688.1, 9945.0
5053.2, 49345.9, 26157.9, 9271.3
5210.8, 52735.0, 26127.0, 0.0
97.5, 1.7, 1.1, 0.0
95.4, 14929.6, 3883.3, 12693.4
99.9, 16477.7, 3173.1, 8195.4
89.3, 0.3, 0.7, 0.0
8.9, 18026.3, 4778.9, 15664.8
73.5, 19696.5, 3483.4, 31547.8
99.6, 3845.2, 3483.4, 3373.4
5.7, 22575.6, 5448.6, 15297.1
76.2, 20977.9, 3714.9, 13363.6
4.6, 0.9, 1.0, 0.0
5.3, 18918.9, 4962.1, 29473.9
3.3, 21852.7, 26713.2, 40524.1
3.0, 4572.8, 26713.1, 3592.2
6.8, 8394.0, 25866.0, 727.6
3.2, 5579.1, 24782.4, 17050.2
4.9, 524.6, 0.2, 0.0
5.5, 7275.3, 5528.3, 32386.0
2.4, 21655.0, 55073.1, 34886.1
3.6, 4643.5, 55073.1, 9626.6
5.1, 7584.7, 54865.5, 15450.9
3.0, 7105.2, 56794.3, 51829.5
4.0, 1014.0, 56794.3, 8336.8
5.5, 8462.6, 51354.7, 36647.8
2.8, 9746.8, 68175.8, 55850.7
4.6, 2.5, 22299.8, 0.0
9.8, 12322.5, 12494.1, 20071.9
2.8, 18995.7, 24854.3, 30809.5
3.2, 2106.2, 24854.2, 1087.0
6.7, 5733.0, 23199.0, 291.1
2.9, 6027.0, 21319.6, 0.0
4.9, 428.5, 0.4, 0.0
6.3, 17122.9, 4964.9, 14460.6
2.9, 19911.7, 4542.2, 3125.0
4.0, 375.2, 1.4, 0.0
7.6, 16914.5, 4595.4, 24473.4
3.1, 20003.2, 23573.6, 36303.3
3.6, 4679.2, 23573.6, 3133.2
9.8, 1409.2, 22616.3, 14762.3
7.4, 1210.0, 42017.0, 42429.2
4.7, 1.0, 23916.1, 1135.6
9.4, 10296.5, 15459.4, 7687.4
3.9, 6796.8, 11070.3, 0.0
4.6, 0.7, 0.2, 0.0
9.7, 16494.4, 5686.4, 13768.9
7.0, 12430.6, 4312.7, 27123.8
4.5, 534.8, 1915.0, 0.0
5.2, 16106.9, 5675.3, 14880.1
2.8, 12679.6, 3887.0, 10300.2
9.4, 1.0, 2.2, 0.0
5.2, 15747.4, 4993.4, 28506.3
6.3, 12893.9, 24441.5, 50337.8
5.0, 624.3, 22224.8, 0.0
15.6, 7875.8, 16152.7, 16768.0
10.4, 7081.5, 31187.3, 10094.1
3.6, 1.0, 26534.1, 1121.7
17.7, 6379.7, 22602.4, 3779.8
7.3, 2770.5, 22644.1, 0.0
4.4, 0.7, 1.0, 0.0
7.0, 21560.0, 6242.0, 20983.7
3.3, 16474.6, 29914.6, 61314.0
4.4, 520.8, 27351.0, 0.0
9.7, 9550.6, 20449.0, 23543.9
2.9, 6511.8, 38374.7, 14967.2
4.3, 694.0, 30299.7, 2654.3
5.2, 10844.9, 21471.8, 8637.6
2.8, 6514.9, 19389.7, 0.0
4.5, 489.3, 0.6, 0.0
5.5, 19525.1, 5382.3, 52095.7
5.2, 14882.9, 46802.2, 127389.4
4.5, 510.2, 44468.5, 16702.3
8.6, 8036.2, 37978.4, 40222.6
6.6, 5636.7, 53867.3, 53180.0
5.5, 499.8, 46966.9, 24574.0
5.5, 14268.1, 38887.3, 36449.9
9.4, 8444.7, 47765.0, 55312.3
6.0, 594.7, 45075.7, 12069.2
5.3, 8207.3, 39196.0, 36265.1
6.5, 14381.5, 51237.3, 39251.0
3165.7, 640.8, 28959.9, 1262.5
5.1, 12746.7, 18253.7, 9161.4
4.6, 8115.7, 12392.5, 0.0
4.2, 483.5, 0.5, 0.0
8.6, 19657.7, 5369.6, 17167.1
2.8, 15182.7, 4438.0, 34681.4
4.3, 1.0, 2680.6, 0.0
5.1, 18878.2, 5558.8, 16901.7
5.2, 20814.1, 5609.7, 34138.6
2.2, 16203.9, 29728.6, 46086.8
5.4, 1.0, 27860.3, 0.0
5.5, 6838.9, 23976.6, 8108.7
2.3, 5504.6, 26595.1, 20814.8
4.9, 0.8, 0.0, 0.0
5.2, 3835.7, 1034.4, 1676.5
4.3, 10921.2, 3337.1, 5689.0
3.1, 10428.1, 3079.8, 0.0
1989.7, 426.1, 0.7, 0.0
3.9, 15991.8, 4032.5, 13557.7
6.7, 11952.2, 3694.7, 2435.6
4.2, 1.1, 1.8, 0.0
4.6, 18551.1, 5072.6, 30346.6
1.9, 15421.1, 28296.9, 54254.3
7.9, 1.0, 26329.6, 0.0
5.0, 7425.2, 21321.9, 22970.1
,,,


