I just came up with the following two Huffman-based methods. I think the first method is like adaptive Huffman coding, but I'm not sure. Perhaps these methods already exist; I just wanted to share them in case they're useful.
Method 1
From the decompressor's point of view: store the characters and their frequencies in the compressed file, then build a Huffman tree from those statistics. Read bits from the data until they match a code and output the corresponding character. Then decrease that character's frequency and rebuild the tree. Repeat. This way the codes get shorter and shorter the further you get into the data, because the remaining counts keep shrinking.
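To make this concrete, here's a rough Python sketch of Method 1 as I understand it. The function names and the tie-breaking inside the tree builder are my own choices, not part of the method as described, and a real implementation would pack bits instead of working with '0'/'1' strings:

```python
import heapq
from collections import Counter

def build_codes(freqs):
    """Build a Huffman code table {symbol: bitstring} from a frequency dict."""
    if len(freqs) == 1:
        # A single remaining symbol still needs a 1-bit code.
        return {next(iter(freqs)): "0"}
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def encode(text):
    """Mirror of the decoder: re-encode with a tree rebuilt after each symbol."""
    freqs = dict(Counter(text))   # the stats stored in the compressed file
    counts = dict(freqs)
    bits = []
    for ch in text:
        codes = build_codes({s: f for s, f in counts.items() if f > 0})
        bits += codes[ch]
        counts[ch] -= 1           # codes shrink as counts run out
    return bits, freqs

def decode(bits, freqs):
    """Method 1: decode one symbol, decrement its count, rebuild, repeat."""
    freqs = dict(freqs)
    out = []
    while any(f > 0 for f in freqs.values()):
        codes = build_codes({s: f for s, f in freqs.items() if f > 0})
        buf = ""
        # Huffman codes are prefix-free, so the first match is the right one.
        while buf not in codes.values():
            buf += bits.pop(0)
        sym = next(s for s, c in codes.items() if c == buf)
        out.append(sym)
        freqs[sym] -= 1
    return "".join(out)
```

Encoder and decoder rebuild the table from identical counts at every step, so they always agree on the codes.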
Method 2
This method doesn't require storing the characters and their frequencies in the file; you only need to store the size of the decompressed file, or perhaps use some 'End' node or something (although I think the file would end up bigger with that extra node). The codes I show below may not be optimal, but that doesn't really matter right now; this is just to give you the idea, and I'm sure the whole thing could be tweaked :) I'll use the string 'ABRACADABRA' as an example. Also, the node 'New' always has a higher importance than other frequency-1 characters (unless something like 128+ characters have a frequency of 1, in which case it no longer ranks higher; and when 255 characters exist in the table you can replace 'New' with the last possible character).
Since there are no characters in the table at the start, grab the first character 'A' and make the table:
Char Freq Code
---------------
New 1 0
A 1 1
Since 'A' is the very first character just output it as 8 bits (01000001).
The next character is 'B' which isn't in the table yet, so output the node 'New' with code 0 and then output 8 bits for the character 'B' (01000010) and update the table:
Char Freq Code
---------------
New 1 0
A 1 10
B 1 11
The next character is 'R' which isn't known yet either so output 'New' with code 0 and output 8 bits for 'R' (01010010) and update the table:
Char Freq Code
---------------
New 1 00
A 1 01
B 1 10
R 1 11
The next character is 'A', which is already present in the table, so output 01 and update the table:
Char Freq Code
---------------
A 2 0
New 1 10
B 1 110
R 1 111
Next is 'C' which isn't known so output 'New' as 10 and output 8 bits for 'C' (01000011) and update the table:
Char Freq Code
---------------
A 2 0
New 1 100
B 1 101
C 1 110
R 1 111
Next is 'A' which is known and so output 0, the table stays the same except that the frequency of 'A' is increased.
Next is 'D' which isn't known so output 100 and output 8 bits for 'D' (01000100) and update the table:
Char Freq Code
---------------
A 3 0
New 1 10
B 1 1100
C 1 1101
D 1 1110
R 1 1111
Next one is 'A' again so output 0, table stays the same except for A's frequency.
Next is 'B' so output 1100 and update the table:
Char Freq Code
---------------
A 4 0
B 2 10
New 1 1100
C 1 1101
D 1 1110
R 1 1111
Next is 'R' so output 1111 and update the table:
Char Freq Code
---------------
A 4 0
B 2 100
R 2 101
New 1 110
C 1 1110
D 1 1111
Next is the final character 'A' so output 0.
The final output will be (minus the filesize/End Of File data):
01000001 0 01000010 0 01010010 01 10 01000011 0 100 01000100 0 1100 1111 0 = 60 bits (Original string 'ABRACADABRA' is 8 x 11 = 88 bits)
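The walk-through above can be sketched in Python roughly like this. The names are mine, and the tie-breaking is a simplified version of the 'New has higher importance' rule, so it won't necessarily reproduce the exact codes in the tables above, but the structure (escape code for 'New', then a raw 8-bit literal) is the same:

```python
import heapq

NEW = "<NEW>"  # escape symbol: "the next 8 bits are a literal character"

def build_codes(freqs):
    """Rebuild the Huffman table from scratch. Among equal frequencies,
    NEW is merged last so it tends to keep a short code (a simplified
    take on the 'higher importance' rule)."""
    if len(freqs) == 1:
        return {next(iter(freqs)): "0"}
    # Sort so NEW comes after the ordinary symbols; the index breaks heap ties.
    items = sorted(freqs.items(), key=lambda kv: (kv[0] == NEW, kv[0]))
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(items)]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def encode(text):
    counts = {NEW: 1}                         # the table starts with only 'New'
    bits = ""
    for ch in text:
        if ch in counts:
            bits += build_codes(counts)[ch]
            counts[ch] += 1
        elif len(counts) == 1:
            bits += format(ord(ch), "08b")    # very first character: raw byte only
            counts[ch] = 1
        else:
            # Unknown character: emit NEW's code, then the raw byte.
            bits += build_codes(counts)[NEW] + format(ord(ch), "08b")
            counts[ch] = 1
    return bits

def decode(bits, length):                     # length = stored decompressed size
    counts = {NEW: 1}
    out, pos = [], 0
    while len(out) < length:
        if len(counts) == 1:                  # very first character is a raw byte
            ch, pos = chr(int(bits[pos:pos + 8], 2)), pos + 8
        else:
            codes = {c: s for s, c in build_codes(counts).items()}
            buf = ""
            while buf not in codes:           # prefix-free: first match is right
                buf += bits[pos]; pos += 1
            sym = codes[buf]
            if sym == NEW:                    # a raw literal follows
                ch, pos = chr(int(bits[pos:pos + 8], 2)), pos + 8
            else:
                ch = sym
        counts[ch] = counts.get(ch, 0) + 1
        out.append(ch)
    return "".join(out)
```

Because both sides rebuild the table deterministically from the same counts before every symbol, no frequency data needs to be transmitted; only the decompressed length is stored.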