Tool to compress / decompress with Huffman coding. Huffman coding is a lossless data compression algorithm that uses a binary tree and variable-length codes based on each character's probability of appearance.
Huffman Coding - dCode
The Huffman code uses the frequency of appearance of letters in the text: the characters are counted and sorted from the most frequent to the least frequent.
Example: The message DCODEMESSAGE contains 3 times the letter E, 2 times the letters D and S, and once each the letters A, C, G, M and O.
The Huffman algorithm creates a tree whose leaves are the letters found, each with, as value (or weight), its number of occurrences in the message. To create this tree, take the 2 lightest nodes (smallest weights) and attach them to a new node whose weight is the sum of the two. Repeat the process until only one node remains; it becomes the root (and its weight is the total number of letters in the message).
The binary code of each character is then obtained by walking the tree from the root down to its leaf and noting the branch taken (0 or 1) at each node.
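The two steps above (merging the lightest nodes, then reading off the paths) can be sketched in Python. This is a minimal illustration of the algorithm, not dCode's implementation; the function and variable names are invented for the example.

```python
import heapq
from collections import Counter

def build_codes(message):
    """Build a Huffman code table for the characters of message (sketch)."""
    freq = Counter(message)
    # Heap entries: (weight, tie-breaker, tree); a tree is a character
    # (leaf) or a (left, right) pair (internal node).
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)  # the two lightest nodes
        w2, _, t2 = heapq.heappop(heap)
        # Hook them to a new node whose weight is the sum of the two
        heapq.heappush(heap, (w1 + w2, i, (t1, t2)))
        i += 1
    _, _, root = heap[0]  # the last node is the root

    # Walk from the root to each leaf, noting 0 (left) or 1 (right)
    codes = {}
    def walk(node, path):
        if isinstance(node, tuple):
            walk(node[0], path + "0")
            walk(node[1], path + "1")
        else:
            codes[node] = path or "0"  # edge case: single-character message
    walk(root, "")
    return codes

codes = build_codes("DCODEMESSAGE")
print(sorted(codes.items()))
```

Ties between equal weights can be broken differently by other implementations, so the exact codes may vary, but the code lengths always give an optimal (prefix-free) encoding: the most frequent letter, E, never gets a longer code than a letter appearing once.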
Decryption of a Huffman code requires knowledge of the matching tree or dictionary (characters <-> binary codes).
To decrypt, walk the tree from the root towards the leaves (usually top to bottom) until reaching an existing leaf (or a known value in the dictionary).
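With the dictionary in hand, this walk amounts to accumulating bits until the buffer matches a known code. A short sketch, using a hypothetical dictionary (the article only gives D=00 and C=100; the other entries here are invented for the example):

```python
def decode(bits, codes):
    """Decode a bit string with a known dictionary (character <-> binary code)."""
    inverse = {code: ch for ch, code in codes.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:        # a known value in the dictionary: a leaf
            out.append(inverse[buf])
            buf = ""              # start again from the root
    return "".join(out)

# Hypothetical dictionary; O and E codes are assumptions for illustration
table = {"D": "00", "C": "100", "O": "101", "E": "01"}
print(decode("001001010001", table))  # prints DCODE
```

Because Huffman codes are prefix-free (no code is the prefix of another), the first match found is always the right one, so no backtracking is needed.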
Example: Decode the message 00100010010111001111: searching for 0 gives no match, continue with 00, which is the code of the letter D, then 1 (does not exist), then 10 (does not exist), then 100 (code for C), etc.
The plain message is 'DCODEMOI'.
By applying the Huffman coding algorithm, the most frequent characters (those with the greatest number of occurrences) are coded with the shortest binary words; the space used to code them is thus minimal, which improves the compression.
The compression ratio often exceeds 50%.
By making assumptions about the length of the message and the size of the binary words, it is possible to search for the probable list of code words used by Huffman.
Each code word must then be matched with the right letter, which is a second difficulty for decryption and usually requires automated methods.
There are variants of Huffman coding in the way the tree / dictionary is created.
The dictionary can be static: each character / byte has a predefined code that is known or published in advance (so it does not need to be transmitted).
The dictionary can be semi-adaptive: the content is analyzed to calculate the frequency of each character, and an optimized tree is built for encoding (it must then be transmitted for decoding). This is the version implemented on dCode.
The dictionary can be adaptive: starting from a known tree (published in advance and therefore not transmitted), it is modified during compression and optimized as the data is read. The computation time is much longer but often gives a better compression ratio.
Huffman coding was published in 1952 by David Albert Huffman.