I recently reworked how zstd builds its Huffman decoding tables (not decoding itself) to avoid unpredictable branches
and speed the table building up by about 2x [0]. This is insignificant for large decompressions, but if you're decompressing only a few KB, the table-building time can dominate the actual decompression.
It sort of goes to show that while Huffman codes have been around for ages, implementations can still improve, especially as the hardware we use changes.
Had classes from David Huffman while at UC Santa Cruz. One of my favorite professors, and he did not like it when people constantly brought up Huffman coding.
Spent several hours talking about various topics, but one of his favorite areas of exploration when I was around campus was paper folding.
Do you know how he makes such precise curved folds? I'm no expert, but love me some origami and I can't believe that I'd be able to get folds to turn out like that. Anything I would attempt would invariably have little sub-creases instead of a smooth fold.
My current project is a Huffman coder written in Zig; it's been a lot of fun to build. The Huffman tree generation is simple, but there are a lot of different nuances and variations to account for.
For example, a "canonical Huffman code" can save you space encoding the table through some clever trickery. In short, you can encode the table by counting the number of bytes that use each bit-count, and the bytes used in the file. You don't need to store the patterns at all, since in the canonical algorithm the patterns are regenerated from the bit-lengths. [1]
Right now I'm trying to implement the package-merge algorithm,[2] which will allow me to create the table without building an inefficient fully-allocated tree, and more importantly will allow me to limit the lengths of my codes (the maximum code length is n-1, where `n` is the size of your alphabet. Working with bytes and using 255-bit codes is obnoxious). Unfortunately all explanations of the algorithm I've found are very academic and mathematical, so I'm having trouble working it out.
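For anyone else wrestling with it, here's a rough Python sketch of the coin-collector view of package-merge as I understand it (the names and data layout are mine, purely illustrative): every symbol contributes one "coin" per level, adjacent items get paired into packages level by level, and a symbol's final code length is the number of times it appears inside the kept top-level packages.

    def package_merge(freqs, max_len):
        # Length-limited Huffman code lengths via package-merge (sketch).
        # freqs: {symbol: frequency}, all > 0; requires 2**max_len >= len(freqs).
        symbols = list(freqs)
        n = len(symbols)
        if n == 1:
            return {symbols[0]: 1}
        assert (1 << max_len) >= n, "max_len too small for this alphabet"

        # Each item is (weight, tuple of symbols it contains).
        leaves = [(freqs[s], (s,)) for s in symbols]
        packages = []
        for _ in range(max_len):               # walk levels from deepest up to level 1
            items = sorted(leaves + packages, key=lambda it: it[0])
            # pair adjacent items into packages; an unpaired last item is dropped
            packages = [
                (items[i][0] + items[i + 1][0], items[i][1] + items[i + 1][1])
                for i in range(0, len(items) - 1, 2)
            ]

        # keep the n-1 cheapest packages at the top level; every time a symbol
        # appears inside a kept package, its code length grows by one
        lengths = dict.fromkeys(symbols, 0)
        for _, syms in packages[: n - 1]:
            for s in syms:
                lengths[s] += 1
        return lengths

Setting max_len high enough (say 255 for a byte alphabet) should reproduce plain Huffman lengths, while max_len=15 gives you DEFLATE-style limited codes.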
Some of you might be interested in the short video Tom Scott made about Huffman coding.[3]
Yes, but you have to consider how to describe code lengths from 1 to 255. For example, DEFLATE only allows Huffman codes to be 0 to 15 bits long, which simplifies the number of possibilities that the table encoder needs to handle. http://www.zlib.org/rfc-deflate.html#dyn
One of the fastest Huffman compression libraries is FPC: https://github.com/algorithm314/FPC/ It even has a better compression ratio than some AC (arithmetic coding) implementations, better than ZSTD's.
Gah! Another explanation that uses "Huffman Trees"! Nobody uses that; we all use Canonical Huffman, where all you have are the counts of symbols per code length, letting you efficiently build tables. Yes, tables are used instead of trees. The trees are a distraction.
Whether or not you canonicalize your prefix code is orthogonal to whether you think of it as a tree or not. (And any prefix code can be viewed as a tree.) In fact the very article you linked says:
> The advantage of a canonical Huffman tree is that it can be encoded in fewer bits than an arbitrary tree.
To take the example from the Wikipedia article you linked, canonicalizing
B = 0
A = 11
C = 101
D = 100
into
B = 0
A = 10
C = 110
D = 111
is conceptually the same as canonicalizing the tree (treating "0" as "left" and "1" as "right"):
        ┌------┐
  ┌-----┤(root)├-----┐
  │     └------┘     │
┌-┴-┐              ┌-┴-┐
│ B │            ┌-┤   ├-┐
└---┘            │ └---┘ │
               ┌-┴-┐   ┌-┴-┐
             ┌-┤   ├-┐ │ A │
             │ └---┘ │ └---┘
           ┌-┴-┐   ┌-┴-┐
           │ D │   │ C │
           └---┘   └---┘
into
        ┌------┐
  ┌-----┤(root)├-----┐
  │     └------┘     │
┌-┴-┐              ┌-┴-┐
│ B │            ┌-┤   ├-┐
└---┘            │ └---┘ │
               ┌-┴-┐   ┌-┴-┐
               │ A │ ┌-┤   ├-┐
               └---┘ │ └---┘ │
                   ┌-┴-┐   ┌-┴-┐
                   │ C │   │ D │
                   └---┘   └---┘
So the trees aren't merely a "distraction" IMO: apart from being useful conceptually (e.g. in proving the optimality of the Huffman coding—this is how Knuth does it in TAOCP Vol 1, 2.3.4.5), certain applications of Huffman's algorithm (other than compression) also have the tree structure naturally arise (Knuth gives the example of choosing the optimal order for pairwise merging of N sorted lists of different lengths).
Sure, after using trees for understanding, you don't need to actually represent a tree in your data structures / source code / encoding, but that's another matter.
Additionally, after posting the comment to HN from desktop, I happened to notice within the 2-hour edit window that it looked awful on mobile (as does the comment I copied from: probably a bug with the default font stack used by Chrome on Android not using monospace versions of the box-drawing characters), so I edited to replace "─" with an ASCII "-". It looks less perfect on desktop, but less poorly aligned on mobile, so is a reasonable compromise.
I'm glad the resources on the basic trees exist, since that's what got me interested. But I would love to see more resources on canonical Huffman and especially the package-merge algorithm!
The former was confusing at first but mind-blowing when it clicked for me, and the latter looks awesome but its implementation is incomprehensible to me. I guess I'm stuck allocating trees until I can figure it out!
Canonical Huffman can be thought of as a pre-determined tree shape that's easy to reconstruct. First, decide your bit lengths from your probabilities (the most probable symbols should get the shortest code lengths, so choose wisely), and then add 1s to the start of each longer code to keep the whole set unambiguous.
If we have three symbols, A, B, and C, and we assign bit lengths of A=1, B=2, C=2 (meaning A is the most probable), then we count:
A = 0
B = 10
C = 11
For code lengths A=1, B=2, C=4, D=4, E=4, we have:
A = 0
B = 10
C = 1100
D = 1101
E = 1110
Note that all we need to send is the bit lengths (1, 2, 4, 4, 4) to the other side, and we have an algorithm to assign the bits. Though even (1, 2, 4, 4, 4) is more information than we need; we really just have to send the number of symbols at each length: (1, 1, 0, 3).
Much faster, smaller, and generally better than sending over a whole tree.
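For concreteness, here's a small Python sketch of that assignment rule (sort symbols by length, count upward, and shift left whenever the length grows); the function name is just for illustration:

    def canonical_codes(lengths):
        # lengths: {symbol: bit length} -> {symbol: code as a bit string}
        code, prev_len, codes = 0, 0, {}
        for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
            code <<= length - prev_len        # append zeros when codes get longer
            codes[sym] = format(code, "0{}b".format(length))
            code += 1
            prev_len = length
        return codes

    print(canonical_codes({"A": 1, "B": 2, "C": 4, "D": 4, "E": 4}))
    # {'A': '0', 'B': '10', 'C': '1100', 'D': '1101', 'E': '1110'}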
You can even send the code lengths with at worst one bit per symbol, by sending the number of internal nodes at each level of a complete binary tree structure. This can be a concatenation of unary numbers, and there are n-1 internal nodes in a complete binary tree with n leaves.
Your example isn't a complete/perfect tree: one of the length-4 symbols could be length 3, giving the level sequence 1,1,1,2. Converted to internal node counts (excluding the root) this is 1,1,1, and could be encoded 0,0,0 (commas for illustration only). That's a little boring; another example is:
A = 00
B = 01
C = 10
D = 110
E = 111
level sequence 0,3,2, internal node counts 2,1, encoded as 10,0.
I'm not sure if this is well known, I thought I'd discovered it but there's an equivalent coding in Narimani and Khosravifard 2008, "The supertree of the compact codes."
A. You can avoid encoding the number of leaves by ending the string with impossible sequence 0,11, that is 1 internal node branching into at least 3 at the next level.
B. Or, you can use this as a 1-to-1 mapping between binary strings and canonical trees. If you read 2m-1 1s after a row with m internal nodes, you know this row has 2m internal nodes, so the final 0 isn't needed (e.g. 2,1 would just be 1,0 instead of 10,0). This can't be combined with A since there are no longer impossible sequences, but at most (on a tree with all symbols the same length) it only saves lg(n) bits, so if you want to save bits you're better off with A.
It's so odd that this crucial detail is always omitted from this explanation.
If you're encoding a byte stream, the receiver already knows what the set of symbols is, although not the order. Is it really more efficient to send the ordered list of symbols plus the histogram of code lengths than it is to send a length per symbol, with an implicit list of symbols? Surely not.
But DEFLATE sends the list of 288 code lengths, with all the symbols being implicit.
It also compresses that list with another layer of repetition encoding and huffman encoding.
For example, if you were encoding DEFLATE's default huffman table, you would first write it like (8, repeat 143, 9, repeat 111, 7, repeat 23, 8, repeat 7), and then you would compress that with a separate tiny huffman table.
The tiny huffman used on code lengths is itself encoded as a series of 3 bit code lengths, taking up 5-ish bytes, with the order of symbols predefined in the spec.
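Roughly, that repetition layer maps runs in the code-length list onto the code-length alphabet from RFC 1951 (0-15 are literal lengths, 16 repeats the previous length 3-6 times, 17 and 18 are runs of zeros). Here's a Python sketch of just that step, with the output format made up for illustration:

    def rle_code_lengths(lengths):
        # Emit (symbol, extra) pairs in DEFLATE's code-length alphabet:
        #   0-15 = literal code length, 16 = repeat previous length 3-6 times,
        #   17 = run of 3-10 zeros, 18 = run of 11-138 zeros.
        # "extra" is the repeat count carried in extra bits (None for literals).
        out = []
        i = 0
        while i < len(lengths):
            run = 1
            while i + run < len(lengths) and lengths[i + run] == lengths[i]:
                run += 1
            cur = lengths[i]
            n = run
            if cur == 0:
                while n >= 11:
                    take = min(n, 138)
                    out.append((18, take - 11))
                    n -= take
                while n >= 3:
                    take = min(n, 10)
                    out.append((17, take - 3))
                    n -= take
            else:
                out.append((cur, None))                # first one is sent literally
                n -= 1
                while n >= 3:
                    take = min(n, 6)
                    out.append((16, take - 3))
                    n -= take
            out.extend((cur, None) for _ in range(n))  # short leftovers stay literal
            i += run
        return out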
As long as the list of symbols is small, then yes, because the symbols are a subset of the entire character set. I imagine some implementations base-x encode the byte string first, then indicate the original and the base-x encoding instead of sending the symbol list.
Thanks! I've got the canonical stuff down pat already in my project, but I am still allocating the tree to get the bit-lengths in the first place. Once I have the package-merge algorithm up and running, the code will never allocate a tree at all for construction, plus I'll have the added benefit of length-limited codes.
Incidentally, table-based canonical Huffman works best when the "root" of the Huffman codes is stored in the more significant bits, as that saves the algorithm from having to do a bit-string reversal, and the codes remain in order in the table.
...but I believe DEFLATE goes the exact opposite way, for reasons unknown.
I believe you're getting confused. There's no bit string reversal in DEFLATE; it just changes your shift slightly. Both MSB-first and LSB-first have their advantages and disadvantages.
I meant that with the bit ordering that DEFLATE has (the root in the LSB), you can't simply add/subtract 1 to each code to get to the next one, because the carry goes the wrong way.
With a normal (ascending) canonical code, e.g. "2 codes of length 3, 2 codes of length 4" becomes
000
001
0100
0101
when read into a register. With DEFLATE, the same codes would look like
000
100
0010
1010
so you need to pre-rotate the bits in your lookup table to match.
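One way to handle it is to bit-reverse each canonical code when filling the decode table; a tiny helper along these lines (illustrative only):

    def reverse_bits(code, length):
        # Mirror a `length`-bit code, e.g. 0b0100 (length 4) -> 0b0010,
        # so an MSB-first canonical code can index an LSB-first bit reader.
        out = 0
        for _ in range(length):
            out = (out << 1) | (code & 1)
            code >>= 1
        return out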
Wonderful summary and very well explained! Those graphics could be in a CS textbook :) A bit more (or perhaps another article) on how the tree can be encoded would be useful, since the total size of your compressed example would need to include the size of the tree. Also: JPEGs have a Huffman tables region in their binary; I haven't dug too deeply, but it seems like they use Huffman coding too!
Huffman coding is the bomb with the kids. Any kind of encoding / cipher stuff is well received, but the application of a binary tree makes HC even cooler. I love teaching this part of the syllabus so much.
Shameless plug: A couple of years ago I wrote an implementation of the Huffman coding algorithm as well while studying it, along with unit tests. I was also interested in practicing OOP. You can find the result in the link below.
Silly example. It compresses the phrase "do or do not", which has 6 symbols (d, o, r, n, t, space), builds a Huffman tree just for these, and then claims significant compression by assuming 8 bits per character for the uncompressed case.
Wouldn't any compression algorithm be a silly example when using such a small amount of data, as the overhead to communicate information about the compression would take more data than was saved, and potentially more data than the original message?
I think it still suffices as an example because it would be easy to infer how this scales up to an entire book. Finer details are left out, but is that the sort of detail that should be present in a very short introductory article?
Not communicating enough info to rebuild the tree is a separate issue, but it is mentioned in the article so the reader is not left guessing. In the example, however, we have a set of 6 symbols that would require 3 bits each to represent uncompressed, but 8 bits are assumed instead. It invites fairness questions, which would distract a reader who has not been exposed to the concept before. This is an otherwise nice intro to the topic though.
My intention was to pick an example that produced a small tree with only a few leaf nodes (so that the diagram was easy to follow), but still contained some duplication.
My hope was that somebody new to the concept could then infer the results for larger inputs.
I did not intend to imply that this would be a valid use case for building a Huffman tree in practice.
What's silly about that? If I were just getting started with this kind of thing, I think this would've been a great post for me. As I haven't done this since an intro class in college, it was actually a nice quick refresher. 6 symbols is easy to keep in the head all at once.
Learning about Huffman Encoding in school was my first exposure to these sort of algorithms. I was an electrical engineering major and had little exposure to computer science at the time.
It captivated my interest immediately - it was such a simple and effective approach, and it demystified how compression algorithms worked.
I really like this overview. It's not meant to be a full discourse, more just an intro for newcomers. And I think it gets the basic idea across very effectively.
Probably the first non-trivial data structure/algorithm I had implemented (previously I was making games, where you need no algorithm more complex than rectangle intersection)
It works the *exact* same way. You process the input one byte at a time, build a histogram, construct the Huffman Tree and encode the input.
Why should it work differently? Text files are just regular files where the bytes only use a specific subset of the possible values they could have.
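A bare-bones byte-level version is only a few lines. Here's a Python sketch that stops at code lengths (you'd feed them into a canonical coder to actually emit bits; the function name is just illustrative):

    import heapq
    from collections import Counter

    def byte_code_lengths(data):
        # Histogram the bytes, then repeatedly merge the two lightest groups;
        # every merge pushes the symbols in those groups one level deeper.
        freq = Counter(data)
        if len(freq) == 1:                     # degenerate case: one distinct byte
            return {next(iter(freq)): 1}
        heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            w1, _, d1 = heapq.heappop(heap)
            w2, _, d2 = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
            heapq.heappush(heap, (w1 + w2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]                      # {byte value: Huffman code length}

    print(byte_code_lengths(b"do or do not"))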
If you do have some knowledge about the data you are encoding, you can be a little bit smarter about it: e.g. for the text section of an executable, you might work on individual instructions instead of bytes, maybe use common, prepared Huffman Trees, so you don't have to encode the tree itself.
On a side note: IIRC the Intel Management Engine does that using a proprietary Huffman tree, baked into the hardware itself, as an obfuscation technique[1].
To circle back to your question: PNG simply feeds the pixel data into the zlib deflate() function as-is.
You ask why it should work differently but then give a good reason why it should work differently: sometimes splitting by bytes is not the best unit.
And it actually does often work differently for PNG! PNG has a handful of preprocessing options for the pixels. So in filter mode 2, for example, deflate is encoding the difference between each pixel and the pixel above it. More or less.
JPEG uses Huffman as the compression stage. Here's[1] a brief but nice overview, here's[2] a more in-depth one.
Some encoders just use a precomputed Huffman table, but you can make an optimized one as discussed here[3].
Huffman is not the most optimal compression that can be used, for example Dropbox's Lepton[4] saves an additional 20% by replacing the Huffman stage with something better.
However, since it's a lossless stage this can be done transparently, which is nice.
The main key is that you can look at any file as a string of bits! And apply Huffman coding at whatever granularity you like. For text files, you're essentially applying it at the byte level (since each character is a byte (in ASCII, anyways)). For images, you might have one byte per colour channel, RGB. Then you can apply Huffman coding at the byte level, or even at the 3 byte level to operate on entire colours as opposed to channels.
Huffman coding generally works on symbols, where a symbol can be represented by any (possibly variable-length!) string of bits. I recall reading about a compression scheme (lzw?) where the symbol table had an entry for each raw byte value, plus the encoding instructions (e.g. lookback x bytes for y bytes).
That's pretty much all of them except LZW (which is an LZ78-alike, rather than LZ77). gzip has a number of Huffman tables: for literals/lengths, for distances, and for building the other Huffman tables at runtime... Yes, the instructions for building the Huffman tables used for decompression are themselves compressed using... more Huffman tables (HCLEN)!
Oh yeah, might have been remembering gzip. All I remember is that it was one of the old/common ones. Don't know why my mind jumped to lzw instead of the obvious gzip haha.
It's the opposite: It's almost never used on bytes, because that just doesn't give you a lot of compression.
It is generally used as the final stage of some other compression algorithm, and operates on symbols generated by that algorithm. Often, this is some variation on LZ77, and the symbols are something like "bytes 0-255" in addition to various symbols that denote a match in previous data of some length and at some offset.
Indeed, this is an example where each English word in a book gets a unique symbol for the purposes of Huffman coding. Note that the Huffman output is in a base-52 alphabet (abc...xyzABC...XYZ) instead of the usual binary.
Huffman coding is not used directly by PNG. PNG instead uses the regular DEFLATE algorithm from Zip (and Gzip), which uses Huffman coding as its last stage, to output the symbols created by its LZ77-based compression algorithm.
[0] https://github.com/facebook/zstd/pull/2271