[x265] New entropy coding - faster than Huffman, compression rate like arithmetic
Jarosław Duda
dudaj at purdue.edu
Mon Dec 30 05:35:49 CET 2013
On 12/29/2013 11:12 PM, chen wrote:
> The idea seems like Range Coder.
It can encode large alphabets like Range Coding, but is a few times
faster - a single decoding step is, for example, just:
nbBits = decodeTable[state].nbBits;
symbol = decodeTable[state].symbol;
state = decodeTable[state].newState | (bitStream & mask[nbBits]);
bitStream = bitStream >> nbBits;
where mask[nbBits] = (1 << nbBits) - 1.
So there is no branching and no multiplication - just pure stream decoding:
a single table lookup and a few binary operations per symbol.
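
To make this concrete, here is a minimal sketch of a decode loop built
around such a table (the struct layout, field names and the 64-bit
LSB-first bit buffer are illustrative assumptions; refilling the buffer
from memory is omitted to keep the step visible):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint16_t newState;   /* base of the next state; its low nbBits are zero  */
    uint8_t  symbol;     /* symbol emitted when the decoder is in this state */
    uint8_t  nbBits;     /* bits to pull from the stream for this state      */
} tANSDecodeEntry;       /* 4 bytes per table entry */

static void tans_decode(const tANSDecodeEntry *decodeTable,
                        uint32_t state, uint64_t bitStream,
                        uint8_t *out, size_t nSymbols)
{
    for (size_t i = 0; i < nSymbols; ++i) {
        uint32_t nbBits = decodeTable[state].nbBits;
        out[i] = decodeTable[state].symbol;
        state = decodeTable[state].newState | (bitStream & ((1u << nbBits) - 1));
        bitStream >>= nbBits;   /* a real decoder would refill the buffer here */
    }
}

Per symbol this is one table lookup plus a few shifts and masks, which is
where the speed advantage over a range coder comes from.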
Probabilities of symbols are approximated by newState/oldState - by
rational numbers very close to the intended probabilities - so we get
compression rates like in arithmetic coding.
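
For reference, a sketch of how such a decode table can be filled from
integer symbol frequencies that sum to the table size, in the FSE style
(the spread step, names and table size are illustrative assumptions; the
entry layout is the tANSDecodeEntry sketched above). A symbol s then
occupies freq[s] of the TABLE_SIZE states, i.e. it is coded as if its
probability were the rational freq[s]/TABLE_SIZE:

#include <stdint.h>

#define TABLE_LOG  10
#define TABLE_SIZE (1u << TABLE_LOG)

/* freq[] must sum exactly to TABLE_SIZE; alphabetSize <= 256 */
static void tans_build_decode_table(tANSDecodeEntry *decodeTable,
                                    const uint32_t *freq, int alphabetSize)
{
    /* 1. spread each symbol over freq[s] table positions (an odd stride
          visits every position exactly once over TABLE_SIZE steps)        */
    uint32_t pos = 0, step = (TABLE_SIZE >> 1) + (TABLE_SIZE >> 3) + 3;
    for (int s = 0; s < alphabetSize; ++s)
        for (uint32_t i = 0; i < freq[s]; ++i) {
            decodeTable[pos].symbol = (uint8_t)s;
            pos = (pos + step) & (TABLE_SIZE - 1);
        }

    /* 2. per state: x runs over [freq[s], 2*freq[s]); shifting it up into
          [TABLE_SIZE, 2*TABLE_SIZE) gives nbBits and the next-state base  */
    uint32_t next[256];
    for (int s = 0; s < alphabetSize; ++s)
        next[s] = freq[s];
    for (uint32_t state = 0; state < TABLE_SIZE; ++state) {
        uint32_t x = next[decodeTable[state].symbol]++;
        uint32_t nbBits = 0;
        while ((x << nbBits) < TABLE_SIZE)
            ++nbBits;
        decodeTable[state].nbBits   = (uint8_t)nbBits;
        decodeTable[state].newState = (uint16_t)((x << nbBits) - TABLE_SIZE);
    }
}

A symbol holding freq[s] out of TABLE_SIZE states then costs roughly
-log2(freq[s]/TABLE_SIZE) bits, which is the rational approximation of its
probability mentioned above.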
The cost is that for every probability distribution (context) we need to
store such a coding table - say 1-16 kB for a 256-symbol alphabet.
So for image/video coding, I thought about grouping a few positions in
the matrix of quantized DCT coefficients: e.g. if the 1st position has 10
possibilities and the 2nd and 3rd have 5 each, we group these 3 choices
into a 250-symbol alphabet: the choice (c1,c2,c3) corresponds to the
symbol 25*c1 + 5*c2 + c3.
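
A minimal sketch of that packing and its inverse (purely illustrative;
c1 is in [0,10), c2 and c3 are in [0,5)):

/* pack three coefficient choices into one symbol of the 250-symbol alphabet */
static inline unsigned pack3(unsigned c1, unsigned c2, unsigned c3)
{
    return 25*c1 + 5*c2 + c3;
}

/* inverse, for the decoder side after the ANS step has returned 'symbol' */
static inline void unpack3(unsigned symbol,
                           unsigned *c1, unsigned *c2, unsigned *c3)
{
    *c1 = symbol / 25;
    *c2 = (symbol / 5) % 5;
    *c3 = symbol % 5;
}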
We fix an ANS table for each such grouping, built for some optimized
probability distribution - for free it can additionally capture
correlations between these positions.
If the total number of such cases (different ANS tables we could use for
fixed coder parameters) is smaller than, say, a thousand, all of them
could fit into 1 MB (cache).
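
A rough check of that estimate, assuming the 4-byte entry layout sketched
above and the smallest (256-state) tables, i.e. the low end of the
1-16 kB range:

/* 1000 contexts x 256 states x 4 bytes per entry ~= 1 MB of decode tables */
enum {
    ENTRY_BYTES = 4,                               /* sizeof(tANSDecodeEntry)    */
    STATES      = 256,                             /* minimal table, 256 symbols */
    CONTEXTS    = 1000,
    TOTAL_BYTES = CONTEXTS * STATES * ENTRY_BYTES  /* = 1024000 bytes            */
};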