[x264-devel] Re: Bandwidth measurements - influence of different parameters

Davy De Winter davy.dewinter at telenet.be
Sat Jan 21 08:30:56 CET 2006


Jeff Clagg wrote:

>On Fri, Jan 20, 2006 at 12:27:08PM +0100, Davy De Winter wrote:
>
>  
>
>>- b-frames (varying from 1 to 10): in general, B-frames give the most 
>>bandwidth advantage up to about 3 B-frames. The delay is now also 
>>correct: ((num_of_bframes+1)*frame_read_time)
>>    
>>
>
>This might not be doing exactly what you think it's doing since b-frames
>are placed adaptively by default.
>
>  
>
First of all, thanks a lot for the answers. A first question: is it 
possible to turn adaptive B-frames off? In any case, when we use 
B-frames I can clearly see the encoder waiting b_frames * frameread_time 
before it starts encoding...
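(For reference, the measured delay matches the formula above: assuming 
25 fps capture, frame_read_time is 40 ms, so 3 B-frames give a start-up 
delay of (3+1)*40 ms = 160 ms, while 10 B-frames already give 440 ms, 
which matters for interactive use.)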

>>- cbr (varying from 20% of the actually required bandwidth to 100% of 
>>the required bandwidth in steps of 10%): as expected, nice results and 
>>almost no influence on bandwidth. One question here: is the target 
>>bandwidth reached by varying the quantisation parameter? I did not 
>>find an answer in previous posts on how this is implemented (I'd like 
>>to read something about it).
>>    
>>
>
>Yes, the requested bitrate is achieved by varying the quantizer. All
>encoders work this way.
>
>  
>
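Thanks, that answers the question. For our report I'll describe it 
roughly along the lines of the toy loop below (just the feedback idea, 
certainly not how x264's rate control is actually implemented; the 
function and the names are mine):

    /* Toy one-pass rate control: nudge QP so the measured per-frame size
     * tracks the target.  Purely illustrative -- not x264's ratecontrol. */
    int adjust_qp( int qp, double bits_this_frame, double target_bits )
    {
        if( bits_this_frame > 1.10 * target_bits && qp < 51 )
            qp++;   /* over budget  -> coarser quantiser, fewer bits   */
        else if( bits_this_frame < 0.90 * target_bits && qp > 0 )
            qp--;   /* under budget -> finer quantiser, better quality */
        return qp;  /* H.264 QP range is 0..51 */
    }
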
>>Are there other parameters that the community considers particularly 
>>interesting to test? We expect the remaining parameters to have only a 
>>minor influence (e.g. varying the macroblock partition sizes in I-, P- 
>>and B-frames). One important note: we only wanted to vary parameters 
>>that still give acceptable (subjective!) quality. Of course, with only 
>>20% of the required bitrate the quality is not acceptable, but in 
>>general quality is not our main concern; we focus here on bandwidth 
>>and delay.
>>
>>Any comments and/or advice are very much appreciated!
>>    
>>
>
>A general comment: I don't think you are familiar with general principles
>of video encoding (not complaining or anything, just saying this is my
>impression). I'm also not completely sure what you mean by bandwidth or
>delay. Related to the bandwidth issue: a useful test of encoding parameters
>usually has to take account of both bitrate and distortion in order to
>tell you anything meaningful. A simple example that should help make this
>apparent: suppose option1 decreases bitrate (at fixed quantizer) by 5%,
>and option2 also decreases bitrate by 5%. Option1 gives slightly
>worse-looking output while option2 gives slightly better looking output.
>See why you can't look at bitrate only? Most tests either use fixed
>quantizers, and report BOTH rate and distortion (usually PSNR), or they
>use 2-pass on all encodes at the same bitrates, and report only PSNR.
>
>  
>

I know that in real video-encoder comparison scenarios an objective 
(and/or subjective) quality measurement is always taken into account 
next to the bitrate. However, our purpose is not really to evaluate a 
video codec; that has already been done numerous times in earlier tests 
(as referenced on your website), and the reason we chose the x264 codec 
is that it is currently the best available codec. We use this codec to 
stream a desktop in different scenarios to a thin-client device. What is 
important for us is the total end-to-end delay (a relative comparison 
across all parameters), to guarantee good user interaction, and how we 
can influence the bandwidth the most, trading off bandwidth against 
delay without a serious degradation in quality. To put it even more 
strongly: if quality drops 5% but the drop isn't noticeable, we actually 
do not care (this might sound strange :-)). I only give this explanation 
to make my point clear. So it isn't an evaluation of this codec per se, 
and that's why we do not measure quality (it would be difficult to do in 
the first place). What matters to us is which parameters have the most 
influence in which scenario (e.g. a large GOP-size influence in 
low-motion sequences such as word processing).

>A few other comments: you might want to try the different ME algorithms
>(dia, hex, umh, esa). Hex is the default and iirc isn't affected by
>me_range. UMH is possibly a more useful compromise between speed and
>quality, and it can be tuned with me_range. ESA is too slow for
>practical use.
>  
>
These have been tested, but the effect is only minor.
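(For completeness, this is roughly the sweep we ran; the bitrate, 
resolution and file names below are only placeholders, and the option 
spellings are as in our build:

    x264 --me dia              --bitrate 512 -o out.264 desktop.yuv 1024x768
    x264 --me hex              --bitrate 512 -o out.264 desktop.yuv 1024x768
    x264 --me umh --merange 16 --bitrate 512 -o out.264 desktop.yuv 1024x768
    x264 --me umh --merange 32 --bitrate 512 -o out.264 desktop.yuv 1024x768
    x264 --me esa --merange 16 --bitrate 512 -o out.264 desktop.yuv 1024x768

)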

>You might also consider trying different numbers of reference frames;
>this can have a large effect. Also, try the different "levels" of
>trellis. Also try different partition analysis flags, though I think
>intra partition analysis flags usually have very little effect on speed.
>
>  
>
Indeed, we'll also evaluate the different partition analysis flags. Can 
you explain what you mean by trellis? Thanks.
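(So far the list for this round is the number of reference frames via 
--ref and the partition analysis flags via --partitions; once I 
understand what trellis does we'll add its option to the sweep as well. 
I took the option names from the CLI help, so correct me if I have them 
wrong.)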

>keyint isn't really for improving quality; it's used to trade off
>seekability against quality. E.g. using keyint=12 in a 24fps video makes
>it possible to seek with 0.5 second precision, but it doesn't achieve
>very efficient coding. So I don't think there's very much point in
>testing different values of it. At larger values, the rate-distortion
>effects of keyint are overwhelmed by the characteristics of the source
>video (how often scene changes occur).
>
>  
>
Indeed, but once again: keyint is especially important for us (correct 
me if I'm wrong) to recover quickly from losses; that's the reason we 
test it, not for a quality improvement.
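(Concretely, assuming 25 fps desktop capture: with keyint=25 a loss is 
repaired at the next IDR frame after at most one second, at the cost of 
one intra frame per second; with keyint=250 the worst-case recovery time 
grows to ten seconds. It is that recovery-time versus bitrate trade-off 
we want to chart, not coding efficiency.)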


Best regards,
Davy.

-- 
This is the x264-devel mailing-list
To unsubscribe, go to: http://developers.videolan.org/lists.html


