What Is Huffman Coding? (baseclass.io)
282 points by davethedevguy on Feb 22, 2021 | 69 comments



I recently reworked how zstd builds its Huffman decoding tables (not decoding itself) to avoid unpredictable branches and speed up table building by about 2x [0]. This is insignificant for large decompressions, but if you're decompressing only a few KB, the table building time can dominate the actual decompression.

It sort of goes to show that while Huffman codes have been around for ages, implementations can still improve, especially as the hardware we use changes.

[0] https://github.com/facebook/zstd/pull/2271


Nice.


Had classes from David Huffman while at UC Santa Cruz. He was one of my favorite professors, and he did not like it when people constantly brought up Huffman coding.

Spent several hours talking about various topics, but one of his favorite areas of exploration when I was around campus was paper folding.

A couple of links with examples:

- https://www.cise.ufl.edu/~manuel/huffman/index.html

- https://collections.mitmuseum.org/collection/david-a-huffman...


Do you know how he makes such precise curved folds? I'm no expert, but love me some origami and I can't believe that I'd be able to get folds to turn out like that. Anything I would attempt would invariably have little sub-creases instead of a smooth fold.


Normally you would use a creaser, which is like a dull knife that is pressed down into the paper, perhaps over a slightly giving surface.


Images don't load from the second link for me. Sad museum? :/ Chrome on Android btw.


From Firefox on PC it works fine :)


My Firefox says, for media-internal.mitmuseum.org (where the images are hosted):

The certificate is only valid for the following names: *.s3.amazonaws.com, s3.amazonaws.com

Let's hope you happened to get served a valid cert, and not that the cert validation is disabled or broken in your Firefox.


Yep, Firefox is telling me that some parts of this page are not secure, such as images. I guess you may have more strict security settings.


My current project is a Huffman coder built in Zig, and it's been a lot of fun to build. The Huffman tree generation is simple, but there are a lot of different nuances and variations to account for.

For example, a "canonical Huffman code" can save you space encoding the table through some clever trickery. In short, you can encode the table by counting the number of bytes that use each bit-count, and the bytes used in the file. You don't need to store the patterns at all, since in the canonical algorithm the patterns are regenerated from the bit-lengths. [1]

Right now I'm trying to implement the package-merge algorithm,[2] which will allow me to create the table without building an inefficient fully-allocated tree and, more importantly, will let me limit the lengths of my codes (the maximum code length is n-1, where `n` is the size of your alphabet; working with bytes, 255-bit codes are obnoxious). Unfortunately, all the explanations of the algorithm I've found are very academic and mathematical, so I'm having trouble working it out.
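
For what it's worth, here's my rough understanding of the core loop as a Python sketch (untested against my Zig code, so treat it as a sketch rather than a reference): repeatedly "package" adjacent pairs of the cheapest items, merge the packages back in with the original leaves, and a symbol's final code length is the number of selected items it appears in.

    def package_merge(freqs, max_len):
        # Length-limited Huffman code lengths via package-merge (rough sketch).
        # Assumes at least two symbols and 2**max_len >= len(freqs).
        leaves = sorted((w, (sym,)) for sym, w in freqs.items())
        items = list(leaves)
        for _ in range(max_len - 1):
            # "Package": pair up adjacent items; an unpaired largest item is dropped.
            packaged = [(a[0] + b[0], a[1] + b[1])
                        for a, b in zip(items[::2], items[1::2])]
            # "Merge": packages compete with fresh leaves, ordered by weight.
            items = sorted(leaves + packaged)
        # The 2n-2 cheapest items each add one bit to every symbol they contain.
        lengths = dict.fromkeys(freqs, 0)
        for _, syms in items[:2 * len(freqs) - 2]:
            for sym in syms:
                lengths[sym] += 1
        return lengths

    print(package_merge({"A": 1, "B": 1, "C": 2}, 2))  # {'A': 2, 'B': 2, 'C': 1}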

Some of you might be interested in the short video Tom Scott made about Huffman coding.[3]

1. https://en.wikipedia.org/wiki/Canonical_Huffman_code#Encodin...

2. https://en.wikipedia.org/wiki/Package-merge_algorithm

3. https://www.youtube.com/watch?v=JsTptu56GM8


Aren't 255-bit codes the optimal choice for a sufficiently skewed input distribution?


Yes, but then you have to consider how to describe code lengths from 1 to 255. For example, DEFLATE only allows Huffman codes to be 0 to 15 bits long, which reduces the number of possibilities that the table encoder needs to handle. http://www.zlib.org/rfc-deflate.html#dyn


One of the fastest Huffman compression libraries is FPC: https://github.com/algorithm314/FPC/ It even achieves a compression ratio better than some arithmetic coding (AC) implementations. Better than zstd's, too.


Gah! Another explanation that uses "Huffman trees"! Nobody uses those; we all use canonical Huffman, where all you have are the number of symbols per code length, letting you efficiently build tables. Yes, tables are used instead of trees. The trees are a distraction.

https://en.wikipedia.org/wiki/Canonical_Huffman_code


Whether or not you canonicalize your prefix code is orthogonal to whether you think of it as a tree or not. (And any prefix code can be viewed as a tree.) In fact the very article you linked says:

> The advantage of a canonical Huffman tree is that it can be encoded in fewer bits than an arbitrary tree.

To take the example from the Wikipedia article you linked, canonicalizing

    B = 0
    A = 11
    C = 101
    D = 100
into

    B = 0
    A = 10
    C = 110
    D = 111
is conceptually the same as canonicalizing the tree (treating "0" as "left" and "1" as "right"):

            ┌------┐
      ┌-----┤(root)├-----┐
      │     └------┘     │
    ┌-┴-┐              ┌-┴-┐
    │ B │            ┌-┤   ├-┐
    └---┘            │ └---┘ │
                   ┌-┴-┐   ┌-┴-┐
                 ┌-┤   ├-┐ │ A │
                 │ └---┘ │ └---┘
               ┌-┴-┐   ┌-┴-┐
               │ D │   │ C │
               └---┘   └---┘
into

            ┌------┐
      ┌-----┤(root)├-----┐
      │     └------┘     │
    ┌-┴-┐              ┌-┴-┐
    │ B │            ┌-┤   ├-┐
    └---┘            │ └---┘ │
                   ┌-┴-┐   ┌-┴-┐    
                   │ A │ ┌-┤   ├-┐  
                   └---┘ │ └---┘ │  
                       ┌-┴-┐   ┌-┴-┐
                       │ C │   │ D │
                       └---┘   └---┘
So the trees aren't merely a "distraction" IMO: apart from being useful conceptually (e.g. in proving the optimality of the Huffman coding—this is how Knuth does it in TAOCP Vol 1, 2.3.4.5), certain applications of Huffman's algorithm (other than compression) also have the tree structure naturally arise (Knuth gives the example of choosing the optimal order for pairwise merging of N sorted lists of different lengths).

Sure, after using trees for understanding, you don't need to actually represent a tree in your data structures / source code / encoding, but that's another matter.


What did you use to render these beautiful trees?


To retrace my steps: I searched on google for [ascii tree generator] and, alongside many results about generating output for directory trees / folder structures (like that of tree: http://mama.indstate.edu/users/ice/tree/), I found this "Show HN" submission: https://news.ycombinator.com/item?id=21042390 . I installed and tried the linked project (https://github.com/spandanb/ascii_tree) and it kind of works, though I had several issues (posted later at https://github.com/spandanb/ascii_tree/issues/3). So I gave up and just copied the tree from that thread's top comment, by user kps (https://news.ycombinator.com/item?id=21043091) and manually edited the tree in Emacs to add new nodes by copying things around.

Additionally, after posting the comment to HN from desktop, I happened to notice within the 2-hour edit window that it looked awful on mobile (as does the comment I copied from: probably a bug with the default font stack used by Chrome on Android not using monospace versions of the box-drawing characters), so I edited to replace "─" with an ASCII "-". It looks less perfect on desktop, but less poorly aligned on mobile, so is a reasonable compromise.


Thanks for explaining the steps you took. I'm always looking out for ways to approach unknown unknowns and this is a great help.


Agreed! I’ve not seen ascii art that beautiful on HN before.


Moffat and Turpin's 1997 paper On The Implementation of Minimum Redundancy Prefix Codes contains all the usual tricks and then some: https://github.com/tpn/pdfs/blob/master/On%20the%20Implement...


See also Charles Bloom's rants [1] describing some of the less known ideas from the paper.

[1] https://cbloomrants.blogspot.com/2010/08/08-12-10-lost-huffm...


I'm glad the resources on the basic trees exist, since that's what got me interested. But I would love to see more resources on canonical Huffman and especially the package-merge algorithm!

The former was confusing at first but mind-blowing when it clicked for me, and the latter looks awesome but the implementation is incomprehensible to me. I guess I'm stuck allocating trees until I can figure it out!


Canonical Huffman can be thought of as a pre-determined tree shape that's easy to reconstruct. First, decide your bit lengths from your probabilities (the most probable symbols should get the shortest code lengths, so choose wisely), and then add 1s to the start of each longer code to make the set unambiguous.

If we have three symbols, A, B, and C, and we assign bit lengths of A=1, B=2, C=2 (meaning A is the most probable), then we count:

    A = 0
    B = 10
    C = 11
For code lengths A=1, B=2, C=4, D=4, E=4, we have:

    A = 0
    B = 10
    C = 1100
    D = 1101
    E = 1110
Note that all we need to send to the other side is the bit lengths (1, 2, 4, 4, 4), and we have an algorithm to assign the bits. Even (1, 2, 4, 4, 4) is actually more information than needed: we just have to send the number of symbols at each code length: (1, 1, 0, 3)

Much faster, smaller, and generally better than sending over a whole tree.
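
Here's a rough Python sketch of that reconstruction from bit lengths, matching the second example above (this is essentially the algorithm RFC 1951 spells out for DEFLATE, modulo how symbols of equal length are ordered):

    def canonical_codes(lengths):
        # Assign canonical codes: walk symbols by (length, symbol); each code is
        # the previous code plus one, shifted left whenever the length grows.
        code, prev_len, codes = 0, 0, {}
        for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
            code <<= length - prev_len      # append zeros for the longer length
            codes[sym] = format(code, "0{}b".format(length))
            code += 1
            prev_len = length
        return codes

    print(canonical_codes({"A": 1, "B": 2, "C": 4, "D": 4, "E": 4}))
    # {'A': '0', 'B': '10', 'C': '1100', 'D': '1101', 'E': '1110'}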


You can even send the code lengths with at worst one bit per symbol, by sending the number of internal nodes at each level of a complete binary tree structure. This can be a concatenation of unary numbers, and there are n-1 internal nodes for a complete binary tree with n leaves.

Your example isn't a complete/perfect tree: one of the length-4 symbols could be length 3, for level sequence 1,1,1,2. Converted to internal node counts (excluding the root) this is 1,1,1, and could be encoded 0,0,0 (commas for illustration only). That's a little boring, so another example is:

    A = 00
    B = 01
    C = 10
    D = 110
    E = 111
level sequence 0,3,2, internal node counts 2,1, encoded as 10,0.

I'm not sure if this is well known, I thought I'd discovered it but there's an equivalent coding in Narimani and Khosravifard 2008, "The supertree of the compact codes."

A. You can avoid encoding the number of leaves by ending the string with the impossible sequence 0,11 (that is, 1 internal node branching into at least 3 nodes at the next level).

B. Or, you can use this as a 1-to-1 mapping between binary strings and canonical trees. If you read 2m-1 1s after a row with m internal nodes, you know this row has 2m so the final 0 isn't needed. (e.g. 2,1 would just be 1,0 instead of 10,0). This can't be combined with A since there are no longer impossible sequences, but at most (on a tree with all symbols the same length) it only saves lg(n) bits, so if you want to save bits you're better off with A.


Not sure I understand here - how would (1,1,0,3) disambiguate between different streams of data with bit lengths of (1,2,4,4,4) and (1,4,2,4,4)?


If you're sending just the counts then that would be in addition to an ordered list of symbols.


It's so odd that this crucial detail is always omitted from this explanation.

If you're encoding a byte stream, the receiver already knows what the set of symbols is, although not the order. Is it really more efficient to send the ordered list of symbols plus the histogram of code lengths than it is to send a length per symbol, with an implicit list of symbols? Surely not.


Like most things in compression, it depends.

But DEFLATE sends the list of 288 code lengths, with all the symbols being implicit.

It also compresses that list with another layer of repetition encoding and Huffman coding.

For example, if you were encoding DEFLATE's default Huffman table, you would first write it as (8, repeat 143, 9, repeat 111, 7, repeat 23, 8, repeat 7), and then you would compress that with a separate tiny Huffman table.

The tiny Huffman table used on the code lengths is itself encoded as a series of 3-bit code lengths, taking up 5-ish bytes, with the order of symbols predefined in the spec.
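
To make that first layer concrete, a toy Python sketch (simplified: real DEFLATE splits the run counts into its symbols 16/17/18 with extra bits, which I'm glossing over here):

    from itertools import groupby

    # DEFLATE's fixed literal/length code lengths (RFC 1951, section 3.2.6):
    # symbols 0-143 get 8 bits, 144-255 get 9, 256-279 get 7, 280-287 get 8.
    fixed = [8] * 144 + [9] * 112 + [7] * 24 + [8] * 8

    # Collapse runs into (length, count) pairs; DEFLATE then Huffman-codes
    # this stream with the tiny table described above.
    runs = [(length, sum(1 for _ in grp)) for length, grp in groupby(fixed)]
    print(runs)  # [(8, 144), (9, 112), (7, 24), (8, 8)]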


As long as the list of symbols is small, then yes, because the symbols are a subset of the entire character set. I imagine some implementations base-x encode the byte string first, then indicate the original and base-x encoding instead of sending the symbol list.


Thanks! I've got the canonical stuff down pat already in my project, but I am still allocating the tree to get the bit-lengths in the first place. Once I have the package-merge algorithm up and running, the code will never allocate a tree at all for construction, plus I'll have the added benefit of length-limited codes.


Incidentally, table-based canonical Huffman works best when the "root" of the Huffman codes is stored in the more significant bits, as that saves the algorithm from having to do a bit-string reversal, and the codes remain in order in the table.

...but I believe DEFLATE goes the exact opposite way, for reasons unknown.


I believe you're getting confused. There's no bit string reversal in DEFLATE; it just changes your shift slightly. Both MSB-first and LSB-first have their advantages and disadvantages.

As always, ryg has great posts on this stuff: https://fgiesen.wordpress.com/2018/02/19/reading-bits-in-far...


I meant that with the bit ordering that DEFLATE has (the root in the LSB), you can't simply add/subtract 1 to each code to get to the next one, because the carry goes the wrong way.

With a normal (ascending) canonical code, e.g. "2 codes of length 3, 2 codes of length 4" becomes

    000
    001
    0100
    0101
when read into a register. With DEFLATE, the same codes would look like

    000
    100
    0010
    1010
so you need to bit-reverse the codes in your lookup table to match.
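
For illustration, a small Python helper that does that per-code bit reversal (it maps the 0100 above to 0010):

    def bit_reverse(code, length):
        # Reverse the bit order of a length-bit code (DEFLATE stores codes
        # starting from the LSB, so table-based decoders reverse them once).
        out = 0
        for _ in range(length):
            out = (out << 1) | (code & 1)
            code >>= 1
        return out

    print(format(bit_reverse(0b0100, 4), "04b"))  # 0010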


"There's no bit string reversal in DEFALTE"

The DEFLATE RFC describes it as such.

"Huffman codes in bit-reversed order (i.e., with the first bit of the code in the relative LSB position)."


Wonderful summary and very well explained! Those graphics could be in a CS textbook :) A bit more (or perhaps another article) on how the tree can be encoded would be useful, since the total size of your compressed example would need to include the size of the tree. Also: JPEGs have a Huffman-tables region in their binary; I haven't dug too deeply, but it seems they also use Huffman coding!


As long as we're on this topic, might as well pivot to information theory. https://www.youtube.com/playlist?list=PLruBu5BI5n4aFpG32iMbd...


Thanks for the link! I started watching and this looks like an interesting lecture series. (The book looks interesting too.)


Huffman coding is the bomb with the kids. Any kind of encoding / cipher stuff is well received, but the application of a binary tree makes HC even more cool. I love teaching this part of the syllabus so much.


> This is 29 bits instead of 96, with no data loss. Great!

What's the reasoning for leaving out the tree structure needed for decoding in this argument?


It's discussed later in the article, which talks about the fact that both sides need to have the same tree, and ways to accomplish that.


Shameless plug: a couple of years ago, while studying the Huffman coding algorithm, I wrote an implementation of it along with unit tests. I was also interested in practicing OOP. You can find the result at the link below.

- https://github.com/TCGV/HuffmanCoding


Good stuff. Thanks for sharing.


Silly example. It compresses the phrase "do or do not", which has 6 symbols (d, o, r, n, t, space), builds a Huffman tree just for these, and then assumes significant compression by using 8 bits per character for the uncompressed case.


Wouldn't any compression algorithm be a silly example on such a small amount of data, since the overhead of communicating information about the compression would take more data than was saved, and potentially more data than the original message?

I think it still suffices as an example because it would be easy to infer how this scales up to an entire book. Finer details are left out, but is that the sort of detail that should be present in a very short introductory article?


Not communicating enough info to rebuild the tree is a separate issue, but it is mentioned in the article, so the reader is not left guessing. In the example, however, we have a set of 6 symbols that would require only 3 bits each to represent uncompressed, but we instead assume 8 bits. It invites fairness questions, which would distract a reader who has not been exposed to the concept before. This is an otherwise nice intro to the topic, though.


I take your point.

My intention was to pick an example that produced a small tree with only a few leaf nodes (so that the diagram was easy to follow), but still contained some duplication.

My hope was that somebody new to the concept could then infer the results for larger inputs.

I did not intend to imply that this would be a valid use case for building a Huffman tree in practice.


It's a nice article, don't get me wrong. Maybe it would make sense to establish a more fair baseline.


What's silly about that? If I were just getting started with this kind of thing, I think this would've been a great post for me. As I haven't done this since an intro class in college, it was actually a nice quick refresher. 6 symbols is easy to keep in the head all at once.


Learning about Huffman Encoding in school was my first exposure to these sort of algorithms. I was an electrical engineering major and had little exposure to computer science at the time.

It captivated my interest immediately - it was such a simple and effective approach, and it demystified how compression algorithms worked.

I really like this overview. It's not meant to be a full discourse, more just an intro for newcomers. And I think it gets the basic idea across very effectively.


I implemented Huffman Coding when I was 14 years old: http://benibela.de/sources_en.html#huffman.zip

Probably the first non-trivial data structure/algorithm I had implemented (previously I was making games, where you need no algorithm more complex than rectangle intersection)


I am curious how this works for file types other than text files: what are the contents of each node in the tree for, say, a PNG file?


It works the *exact* same way. You process the input one byte at a time, build a histogram, construct the Huffman tree, and encode the input.
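
A minimal sketch of that pipeline in Python, going from arbitrary bytes to a code length per byte value (toy code: it skips the actual bit output, and ties may be broken differently than a real encoder would):

    import heapq
    from collections import Counter

    def huffman_lengths(data):
        # Histogram, then repeatedly merge the two lightest subtrees; instead
        # of building tree nodes, track each symbol's depth-so-far directly.
        freq = Counter(data)
        if len(freq) <= 1:                    # degenerate empty/one-symbol input
            return {sym: 1 for sym in freq}
        heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)                  # keeps tuple comparison off the dicts
        while len(heap) > 1:
            w1, _, d1 = heapq.heappop(heap)
            w2, _, d2 = heapq.heappop(heap)
            deeper = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level down
            heapq.heappush(heap, (w1 + w2, tiebreak, deeper))
            tiebreak += 1
        return heap[0][2]

    # 'o' and ' ' get 2-bit codes, the rest 3 bits: 29 bits total, as in the article.
    print(huffman_lengths(b"do or do not"))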

Why should it work differently? Text files are just regular files whose bytes only use a specific subset of the possible values they could have.

If you do have some knowledge about the data you are encoding, you can be a little bit smarter about it: e.g. for the text section of an executable, you might work on individual instructions instead of bytes, maybe use common, prepared Huffman Trees, so you don't have to encode the tree itself.

On a side note: IIRC the Intel Management Engine does this with a proprietary Huffman tree baked into the hardware itself, as an obfuscation technique[1].

To circle back to your question: PNG simply feeds the pixel data into the zlib deflate() function as-is.

[1] https://en.wikipedia.org/wiki/Intel_Management_Engine#Design


You ask why it should work differently but then give a good reason why it should work differently: sometimes splitting by bytes is not the best unit.

And it actually does often work differently for PNG! PNG has a handful of preprocessing options for the pixels. So in filter mode 2, for example, deflate is encoding the difference between each pixel and the pixel above it. More or less.


But you still apply Huffman coding to the bytes after filtering.

I think this is true even when using 16 bits per sample, but I'm not sure.

From what I remember, in gzip, Huffman coding is applied to the offsets of repeated blocks, which are not single bytes.


JPEG uses Huffman as the compression stage. Here's[1] a brief but nice overview, here's[2] a more in-depth one.

Some encoders just use a precomputed Huffman table, but you can make an optimized one as discussed here[3].

Huffman is not the most efficient compression available; for example, Dropbox's Lepton[4] saves an additional 20% by replacing the Huffman stage with something better.

However, since it's a lossless stage this can be done transparently, which is nice.

[1]: http://www.robertstocker.co.uk/jpeg/jpeg_new_11.htm

[2]: https://www.impulseadventure.com/photo/jpeg-huffman-coding.h...

[3]: https://www.impulseadventure.com/photo/optimized-jpeg.html

[4]: https://dropbox.tech/infrastructure/lepton-image-compression...


The key point is that you can look at any file as a string of bits, and apply Huffman coding at whatever granularity you like. For text files, you're essentially applying it at the byte level (since each character is a byte, in ASCII anyway). For images, you might have one byte per colour channel, RGB. Then you can apply Huffman coding at the byte level, or even at the 3-byte level to operate on entire colours as opposed to channels.
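
For instance, the histogram step at a 3-byte granularity is just (toy Python sketch, with a made-up 3-pixel RGB image):

    from collections import Counter

    pixels = b"\xff\x00\x00\xff\x00\x00\x00\xff\x00"   # red, red, green
    print(Counter(pixels[i:i + 3] for i in range(0, len(pixels), 3)))
    # Counter({b'\xff\x00\x00': 2, b'\x00\xff\x00': 1})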


Huffman coding generally works on symbols, where a symbol can be represented by any (possibly variable-length!) string of bits. I recall reading about a compression scheme (lzw?) where the symbol table had an entry for each raw byte value, plus the encoding instructions (e.g. lookback x bytes for y bytes).


That's pretty much all of them except LZW (which is an LZ78-alike, rather than LZ77). gzip has a number of Huffman tables for literals, distance, and for building Huffman tables at runtime... Yes, the instructions for building the Huffman tables used for decompression are themselves compressed using... more Huffman tables (HCLEN)!

Anything to save a few bits...


Oh yeah, might have been remembering gzip. All I remember is that it was one of the old/common ones. Don't know why my mind jumped to lzw instead of the obvious gzip haha.


It's generally used on bytes, but as others have said, it can be any symbol. Even on complete words!

https://www.nayuki.io/page/huffman-coding-english-words


It's the opposite: it's almost never used on bytes, because that just doesn't give you a lot of compression.

It is generally used as the final stage of some other compression algorithm, and operates on symbols generated by that algorithm. Often, this is some variation on LZ77, and the symbols are something like "bytes 0-255" in addition to various symbols that denote a match in previous data of some length and at some offset.


Indeed, this is an example where each English word in a book gets a unique symbol for the purposes of Huffman coding. Note that the Huffman output is in base 52 (abc...xyzABC...XYZ) alphabet instead of the usual binary.


One way could be to split the image file into RGB or CMYK channels, and then compress the relative brightness of pixels per channel.


Huffman coding is not used directly by PNG. PNG instead uses the regular DEFLATE algorithm from Zip (and Gzip), which uses Huffman coding as its last stage, to output the symbols created by its LZ77-based compression algorithm.


To grok a concept it helps to actually do it. You can take on a coding challenge for Huffman Encoding at Codewars:

https://www.codewars.com/kata/54cf7f926b85dcc4e2000d9d


Am I the only one who clicked on this thinking it was about Steve 'Spez' Huffman's current side project?


It's a compression technique used in stuff like PNG and gzip, saved you a click.


It's an optimal prefix code. If you already know what that sentence means then you can save the click.


The historical significance of it is a hell of a lot more than that



