
Thread: NNCP: Lossless Data Compression with Neural Networks

  1. #1
    Member pothos2
    Join Date
    Dec 2014
    Location
    Berlin
    Posts
    29
    Thanks
    36
    Thanked 26 Times in 12 Posts

    NNCP: Lossless Data Compression with Neural Networks

    Similar to CMIX or the many paq variants, but it seems to be simpler and more efficient:
    https://bellard.org/nncp/

  2. Thanks (10):

    byronknoll (7th April 2019),Cyan (7th April 2019),Darek (7th April 2019),encode (20th April 2019),Hakan Abbas (7th April 2019),hexagone (6th April 2019),Mauro Vezzosi (6th April 2019),schnaader (6th April 2019),Shelwien (6th April 2019),xinix (7th April 2019)

  3. #2
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    I managed to compile it on Windows, but there seem to be issues with __fastcall calling-convention differences
    (the full source is not available - there are binary libs compiled on Linux).
    But at least the preprocessor should work - it does something interesting too.

    Code:
                     enwik8      enwik9
    NNCP             16,924,569  128,292,351
    CMIX (v17)       14,877,373  116,394,271
    durilca          16,209,167  127,377,411
    lstm-compress v3 20,318,653  173,874,407
    Did anybody test lstm-compress with DRT-processed enwik?
    Attached Files

  4. Thanks:

    xinix (7th April 2019)

  5. #3
    Member byronknoll
    Join Date
    Mar 2011
    Location
    USA
    Posts
    229
    Thanks
    109
    Thanked 107 Times in 66 Posts
    The NNCP paper is great! It contains a bunch of interesting research:

    - There are several differences between the LSTM architecture NNCP uses and the one in cmix/lstm-compress. I will try experimenting with some of those ideas in cmix.
    - The compression speed is really impressive: 10x faster than lstm-compress with a comparable compression rate. It is unfortunate that the LibNC library is closed source. I tried hard to optimize the LSTM speed in cmix. With faster speed, the model size can be increased (which improves compression rate).
    - It is interesting to see results from a transformer model. I am disappointed to see that it performs worse than LSTM! Despite the worse performance, I think it is still worth integrating into paq8/cmix. Since the architecture is so different from LSTM, the predictions from the two models would combine well together (see the sketch after this list).
    - I am surprised to see that the large models didn't use the DRT preprocessing from phda/cmix. I am planning to investigate the NNCP preprocessor - hopefully some of the ideas there can be used to improve the cmix preprocessor.
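    On the mixing point, here is a minimal sketch of how two bit predictions are typically combined in the paq/cmix family - logistic mixing with online-trained weights. The names and the learning rate are illustrative, not taken from cmix:
    Code:
    #include <math.h>

    /* Mix two bit probabilities in the logistic domain; the weights
       adapt toward whichever model has been predicting better. */
    static double stretch(double p) { return log(p / (1.0 - p)); }
    static double squash(double x)  { return 1.0 / (1.0 + exp(-x)); }

    typedef struct { double w[2]; } Mixer;

    double mixer_predict(const Mixer *m, double p_lstm, double p_tf)
    {
        return squash(m->w[0] * stretch(p_lstm) + m->w[1] * stretch(p_tf));
    }

    /* After the actual bit is known, move each weight along the gradient
       of the coding cost: error times that model's stretched input. */
    void mixer_update(Mixer *m, double p_lstm, double p_tf,
                      double p_mixed, int bit, double lr)
    {
        double err = bit - p_mixed;
        m->w[0] += lr * err * stretch(p_lstm);
        m->w[1] += lr * err * stretch(p_tf);
    }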

  6. Thanks (3):

    Bulat Ziganshin (7th April 2019),jibz (7th April 2019),Shelwien (7th April 2019)

  7. #4
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    I'm still trying to run nncp on Windows, by adding sysv_abi wrappers for the functions used by the binary library.
    It has mostly stopped outright crashing; now it creates threads in an infinite loop instead.
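    For reference, a minimal sketch of the wrapper idea, assuming gcc/mingw-w64 (the function name and signature are hypothetical - the real libnc entry points differ):
    Code:
    /* A symbol from the Linux-built libnc.a expects the System V AMD64
       calling convention, so declare it with that ABI explicitly. */
    extern void *nc_context_new(void) __attribute__((sysv_abi));

    /* Plain Win64-ABI wrapper that the rest of the program calls. */
    void *nc_context_new_win(void)
    {
        return nc_context_new();
    }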

    > The compression speed is really impressive

    It seems to contain MT and vector optimizations.
    I think he's trying to use this project to advertise his NN library, so seeing the source is unlikely.

    > I am surprised to see that the large models didn't use the DRT preprocessing from phda/cmix

    His preprocessor performs some kind of iterative dictionary optimization.
    I tried running it like this: preprocess.exe c lpqdict0.dic enwik_text2 enwik_text2_nnt 44814 1
    It took quite a while, but it unexpectedly overwrote the dictionary file and then tried to save something to /tmp/word1.txt, which failed because Windows has no /tmp.

  8. Thanks (4):

    Bulat Ziganshin (7th April 2019),Cyan (7th April 2019),Darek (7th April 2019),xinix (7th April 2019)

  9. #5
    Member byronknoll
    Join Date
    Mar 2011
    Location
    USA
    Posts
    229
    Thanks
    109
    Thanked 107 Times in 66 Posts
    Quote Originally Posted by Shelwien
    Did anybody test lstm-compress with DRT-processed enwik?
    lstm-compress uses the same DRT preprocessing as cmix. lstm-compress v3 uses the dictionary from phda.

  10. Thanks:

    Shelwien (7th April 2019)

  11. #6
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    http://nishi.dreamhosters.com/u/nncp_v0.rar
    Kinda works, but there are memory leaks (it crashes if I do anything in free()), and I suspect it might require a CPU with AVX512 :)
    At least it has a line "Your CPU does not support AVX2+FMA - exiting".

    Btw, it seems to have problems with entropy coding? (see attached picture)
    cdm can compress my test output from 1597 to 1571 bytes.
    Can somebody reproduce the 16.9M result and upload the compressed file somewhere?
    Attached screenshot: nncp0.jpg

  12. Thanks:

    xinix (7th April 2019)

  13. #7
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    229
    Thanks
    109
    Thanked 107 Times in 66 Posts
    I am having trouble compiling it in Ubuntu:

    Code:
    gcc  -o nncp nncp.o cp_utils.o arith.o libnc.a -lm -lpthread
    /usr/bin/ld: libnc.a(libnc.o): relocation R_X86_64_32S against symbol `tab_mask32' can not be used when making a PIE object; recompile with -fPIC
    /usr/bin/ld: libnc.a(job.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
    /usr/bin/ld: final link failed: Nonrepresentable section on output
    collect2: error: ld returned 1 exit status
    Makefile:38: recipe for target 'nncp' failed
    I tried recompiling with -fPIC, but got the same error message. Let me know if someone figures out how to compile in Linux.
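    A likely cause: the prebuilt libnc.a itself was compiled without -fPIC, so recompiling your own objects doesn't change anything. One workaround that usually resolves this class of linker error (untested here) is to disable PIE for the final link, e.g. gcc -no-pie -o nncp nncp.o cp_utils.o arith.o libnc.a -lm -lpthread.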

  14. #8
    Member Darek
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    962
    Thanks
    573
    Thanked 397 Times in 295 Posts
    Quote Originally Posted by Shelwien
    Did anybody test lstm-compress with DRT-processed enwik?
    My enwik8.drt test scores - with dictionary:
    60824424 bytes -> 20463628 bytes in 18667.75 s.

    and without dictionary:
    60824424 bytes -> 20474425 bytes in 18304.31 s.
    Last edited by Darek; 7th April 2019 at 16:49.

  15. Thanks:

    Shelwien (7th April 2019)

  16. #9
    Member Mauro Vezzosi
    Join Date
    Sep 2015
    Location
    Italy
    Posts
    249
    Thanks
    108
    Thanked 146 Times in 107 Posts
    nncp.pdf, page 3: NNCP small uses 3 layers x 90 cells like lstm-compress, but it needs 12.1 MB instead of lstm-compress's 6864 KB / 6452 KB (depending on the computer I use) to compress ENWIK8 (the LTCB page lists 9 MB).

    > Kinda works, but there are memory leaks (it crashes if I do anything in free()), and I suspect it might require a CPU with AVX512 :)
    > At least it has a line "Your CPU does not support AVX2+FMA - exiting".
    nncp.pdf, page 5: "On x86 CPUs, the AVX2 + FMA instructions are mandatory in order to get correct performance.".

  17. #10
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    523
    Thanks
    198
    Thanked 750 Times in 304 Posts
    Quote Originally Posted by pothos2
    Similar to CMIX or the many paq variants, but it seems to be simpler and more efficient:
    It's just a properly performance-optimized LSTM model, so yes, similar to the one in cmix and more efficient in terms of speed, but a lot less efficient than paq. From the paper:
    Regarding the speed, our results show that our implementation is much faster than lstm-compress[8] with a similar model and gain. The speed improvement comes from the more optimized matrix multiplication implementation and the larger batch size (16 instead of 1).
    The only thing of interest here would be the LibNC library, which could help either speed up the LSTM model in cmix a lot, or allow us to run a much higher-complexity model in about the same runtime. But since the library is closed source, and when Fabrice doesn't open-source something it usually means he has a commercial interest in it, I'm not holding my breath.
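    On the batch-size point, a rough illustration (plain C, not LibNC) of why batching helps: with a batch of 16, each LSTM weight is loaded once and reused for 16 streams, so the weight matrix is streamed through memory once per time step instead of once per stream, and the inner loop vectorizes cleanly with AVX2/FMA:
    Code:
    #define H 352  /* hidden size of the large model */
    #define B 16   /* batch size */

    /* y = W*x for B independent streams at once. */
    void matmul_batched(const float W[H][H], const float x[H][B], float y[H][B])
    {
        for (int i = 0; i < H; i++) {
            for (int b = 0; b < B; b++)
                y[i][b] = 0.0f;
            for (int j = 0; j < H; j++) {
                const float w = W[i][j];    /* loaded once...              */
                for (int b = 0; b < B; b++)
                    y[i][b] += w * x[j][b]; /* ...reused for all B streams */
            }
        }
    }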

    Quote Originally Posted by byronknoll
    - The compression speed is really impressive: 10x faster than lstm-compress with a comparable compression rate. It is unfortunate that the LibNC library is closed source. I tried hard to optimize the LSTM speed in cmix. With faster speed, the model size can be increased (which improves compression rate).
    See above. Also, from the paper:
    On x86 CPUs, the AVX2 + FMA instructions are mandatory in order to get correct performance.
    So, at least for now, the massive speedup is only possible on a very small subset of processors.

    Quote Originally Posted by byronknoll
    - It is interesting to see results from a transformer model. I am disappointed to see that it performs worse than LSTM! Despite the worse performance, I think it is still worth integrating into paq8/cmix. Since the architecture is so different from LSTM, the predictions from the two models would combine well together.
    As I've stated before, in my opinion, both architectures are horribly inefficient in terms of processing needs. Here, the large LSTM model is almost as slow as cmix and can't even beat paq8hp12any on enwik8. They're only efficient in terms of memory usage per compression gain.

  18. #11
    Programmer Bulat Ziganshin
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    744
    Thanked 668 Times in 361 Posts
    The most exciting outcome from this thread for me is that Eugene managed to use a Linux .so library in a Windows executable.

  19. #12
    Member SolidComp
    Join Date
    Jun 2015
    Location
    USA
    Posts
    238
    Thanks
    95
    Thanked 47 Times in 31 Posts
    Man, Fabrice is some kind of polymath genius. He never stops creating.

  20. #13
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    Compiled on Debian without any problems, ELF binaries here: http://nishi.dreamhosters.com/u/nncp1.rar

    As for the compiling problems, it might help to
    1) extract the .o files from libnc.a (with 7-zip)
    2) process them with Agner Fog's objconv: https://www.agner.org/optimize/objconv.zip
    objconv.exe -fcoff64 libnc.o libnc_c.o
    or, with warning 2036 suppressed:
    objconv.exe -ew2036 -fcoff64 libnc.o libnc_c.o
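    3) presumably, link the converted COFF objects in place of libnc.a using mingw-w64 gcc; note that objconv changes the object-file format, not the calling convention of the compiled code, so sysv_abi wrappers like the ones mentioned earlier may still be needed.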
    Attached screenshot: nncp1.png

  21. #14
    Member Mauro Vezzosi
    Join Date
    Sep 2015
    Location
    Italy
    Posts
    249
    Thanks
    108
    Thanked 146 Times in 107 Posts
    Since 2019/04/13 there is an official version for Windows; it needs a CPU that supports AVX2+FMA:
    https://bellard.org/nncp/
    https://bellard.org/nncp/nncp-2019-04-13-win64.zip
    Last edited by Mauro Vezzosi; 16th April 2019 at 12:08. Reason: Changed https://bellard.org/ to https://bellard.org/nncp/

  22. Thanks (2):

    Darek (16th April 2019),Shelwien (16th April 2019)

  23. #15
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    There's also a .lib file in https://bellard.org/nncp/nncp-2019-04-13.tar.gz
    But the save_words_debug(s, "/tmp/word1.txt", char_freq, buf_size); call is still there, so the preprocessor doesn't save its output, since /tmp doesn't exist on Windows.
    And the test script doesn't work properly either (the "all" mode doesn't preprocess and then compress the file).

    Test script:
    Code:
    set file=enwik8
    set ThN=4
    .\preprocess c _tmp_word.txt %file% _tmp_out.bin 16384 64
    set large=-n_layer 5 -hidden_size 352 -n_symb 16388 -full_connect 1 -lr 6e-3
    .\nncp -T %ThN% %large% c _tmp_out.bin _tmp_out-nncp.bin
    .\nncp -T %ThN% %large% d _tmp_out-nncp.bin _tmp_out-nncp.txt
    Got 16,984,458 with my script. Maybe he started from cmix -s output? Or does the result in the paper need -T 1?
    Well, at least the output isn't further compressible - I guess the entropy-coding issue only appears with the default options.
    Decompression verified.

    As to AVX512, there's the Intel SDE emulator: https://software.intel.com/system/fi...11-win.tar.bz2
    You can run
    Code:
    sde.exe -avx512 -- nncp.exe c input output
    Attached Files

  24. #16
    Member Mauro Vezzosi
    Join Date
    Sep 2015
    Location
    Italy
    Posts
    249
    Thanks
    108
    Thanked 146 Times in 107 Posts
    NNCP 2019/04/13, default options, without preprocessing.
    nncp c InputFile OutputFile
    cell=LSTM-C n_layer=4 hidden_size=352 batch_size=16 time_steps=20 n_symb=256 ln=1 fc=0 sgd_opt=adam lr=4.000e-003 beta1=0.000000 beta2=0.999900 eps=1.000e-005 n_params=5.28M n_params_nie=3.84M mem=79.1MB (Windows Task Manager reports 85900-88220 K)
    Code:
               Maximum Compression
       818.840 A10.jpg
     1.189.590 AcroRd32.exe
       465.370 english.dic
     3.671.152 FlashMX.pdf
       653.690 FP.LOG
     1.496.712 MSO97.DLL
       783.988 ohs.doc
       706.281 rafale.bmp
       547.304 vcfiu.hlp
       485.039 world95.txt
    10.817.966 Total
     
               ENWIK
            39 ENWIK-1 (size 0)
           139 ENWIK0 (*)
           139 ENWIK1 (*)
           236 ENWIK2
           829 ENWIK3
         4.795 ENWIK4
        34.264 ENWIK5
       260.738 ENWIK6
     2.231.375 ENWIK7
    (*) NNCP decompression halts for all file sizes between 1 and 15 bytes; only once did it print "Assertion failed!", "Program: ...\nncp.exe", "File: libnc.c, Line 4941", "Expression: c >= 0 && c < w->dims[1]".
    Last edited by Mauro Vezzosi; 20th April 2019 at 00:17. Reason: Added ENWIK

  25. #17
    Member Darek
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    962
    Thanks
    573
    Thanked 397 Times in 295 Posts
    Some tests of mine on my testset, compared to the latest lstm-compress v3 and CMo7 with the -c7 option.
    It looks like lstmc gives 0.7% better results than plain lstm.

    There is also a table with a speed comparison by number of threads used.
    It looks like 3, 4 and 5 threads have similar speed, with 4 threads winning very slightly.
    Attached screenshots: nncp_default.jpg, nncp_timings.jpg

  26. #18
    Member Darek
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    962
    Thanks
    573
    Thanked 397 Times in 295 Posts
    Based on tips from Fabrice Bellard, I've tested MaximumCompression on nncp with the option "-batch_size 8".

    The scores are better than with the default setting ("-batch_size 16") - this option gains 100 KB (about 1%). Here is a comparison with Mauro's default-option scores:
    Attached screenshot: nncp_max_comp_bs_8.jpg

  27. #19
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    1. Did he use cmix -s enwik8 as input for his preprocessor? His enwik8 result is better than what I've got from his test script.
    2. Does the number of threads affect the compression?

  28. #20
    Member Mauro Vezzosi
    Join Date
    Sep 2015
    Location
    Italy
    Posts
    249
    Thanks
    108
    Thanked 146 Times in 107 Posts
    > 1. Did he use cmix -s enwik8 as input for his preprocessor? His enwik8 result is better than what I've got from his test script.
    Looks like he used his own preprocessor.
    From readme.txt:
    4.2) Large models
    enwik8 preprocessing:
    ./preprocess c out.words enwik8 out.pre 16384 64
    enwik9 preprocessing:
    ./preprocess c out.words enwik9 out.pre 16384 512
    LSTM large model:
    ./nncp -n_layer 5 -hidden_size 352 -n_symb 16388 -full_connect 1 -lr 6e-3 c out.pre out.bin

    From nncp.pdf: see 5.2 Subword based preprocessor
    From nncp-2019-04-13.tar.gz: see preprocess.c
    Maybe the 2019/04/13 version slightly improves compression compared to what was reported the first time?

    > 2. Does the number of threads affect the compression?
    I did a quick test (quad core with 2-way hyperthreading) on ENWIK5:
    -T 1 bps=2.741 (34264 bytes), 2.23 kB/s
    -T 2 bps=2.741 (34264 bytes), 3.00 kB/s
    -T 4 bps=2.741 (34264 bytes), 3.58 kB/s
    -T 6 bps=2.741 (34264 bytes), 3.42 kB/s
    -T 8 bps=2.741 (34264 bytes), 3.34 kB/s
    Compressed files are identical for all -T.

    NNCP decompression halts for all file sizes between 1 and 15 bytes. I already wrote to Fabrice; he said he'll fix it.

  29. Thanks:

    Shelwien (21st April 2019)

  30. #21
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    The pdf says "For the small LSTM model, we reused the text preprocessor of CMIX".

    I used
    .\preprocess c _tmp_word.txt enwik8 out.bin 16384 64
    .\nncp -T 4 -n_layer 5 -hidden_size 352 -n_symb 16388 -full_connect 1 -lr 6e-3 c out.bin out-nncp.bin
    as specified in the readme file,
    and got 16,984,458 (decoding verified too).

    The .pdf says the result should be "LSTM (large) 16,924,569".

  31. #22
    Member Mauro Vezzosi
    Join Date
    Sep 2015
    Location
    Italy
    Posts
    249
    Thanks
    108
    Thanked 146 Times in 107 Posts
    > pdf says "For the small LSTM model, we reused the text preprocessor of CMIX".
    Yes, for LSTM small they reused cmix, and the PDF says the compressed size is 20,500,039.
    The compressed size 16,924,569 is for LSTM large (PDF table 3): PDF section "5.2 Subword based preprocessor" says "The larger models use a preprocessor where each symbol represents a sequence of bytes. ..." and that is not the text preprocessor of CMIX (as far as I understood).
    However, I don't know how to reach exactly 16,924,569; I didn't try it.

    Maybe -T doesn't hurt compression when the file size is small, but can it hurt on big files?

  32. #23
    Member Darek
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    962
    Thanks
    573
    Thanked 397 Times in 295 Posts
    I don't think the -T option hurts compression on big files if it doesn't on smaller ones - I've tested it on D.TGA, which is 721,185 bytes, and all scores were identical.
    In my opinion it could be down to build differences - similar to cmix, where almost every build gives slightly different compression for different files.

    To improve the compression ratio you could try "-batch_size" with lower values - 8, 6, 4 or even 2 - which give, for my whole testset, gains of 1.2%, 1.4%, 1.8% and about 2.2% (not yet finished) respectively. Of course with a corresponding decrease in speed. I don't know what the enwik8/9 improvement would be, but some of my files (U.DOC, Q.WK3) gained more than 10% over the default settings and also 4-10% over the LSTMC LARGE option.

  33. Thanks:

    Shelwien (21st April 2019)

  34. #24
    Member fab
    Join Date
    Apr 2019
    Location
    France
    Posts
    16
    Thanks
    0
    Thanked 24 Times in 14 Posts
    Sorry, I just realized I did not report the correct parameters for this particular result. The enwik9 result should be OK though. In order to get a matching result, you need to slightly increase the hidden size:

    nncp -n_layer 5 -hidden_size 384 -n_symb 16388 -full_connect 1 -lr 6e-3 c out.bin out-nncp.bin

    I confirm that the "-T" option does not change the compressor output.

    It is possible to get slightly better results with a larger model such as:

    nncp -n_layer 5 -hidden_size 512 -n_symb 16388 -full_connect 1 -lr 6e-3 c out.bin out-nncp.bin

    -> 16,791,077 bytes

  35. Thanks (3):

    byronknoll (21st April 2019),Hakan Abbas (28th April 2019),Shelwien (21st April 2019)

  36. #25
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    1. Btw, DRT - the origin of cmix -s - (or rather its dictionary) has a special property.
    Words in the dictionary are ordered in such a way that similar words (morphologically or semantically)
    also end up having the same prefix and/or suffix bits in their codes
    (for example, "quick" and "quickly" can be placed so that their codes share a leading byte).
    That is, for words that occur in the same context, it makes matches several bits longer.
    Does nncp preprocess have a similar effect?

    2. There are some scripts for enwik preprocessing: https://encode.su/threads/2590-Some-...ll=1#post59399
    They can remove some layers of markup in enwik - maybe that can help with dictionary generation?

  37. #26
    Member Darek
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    962
    Thanks
    573
    Thanked 397 Times in 295 Posts
    @fab - I've tried to use the preprocessor with the latest cmix (v17) dictionary, but it looks like something is going wrong...
    Am I doing something wrong? Can the cmix dictionary be used with the NNCP preprocessor?
    Attached screenshot: nncp_prep.jpg

  38. #27
    Member fab
    Join Date
    Apr 2019
    Location
    France
    Posts
    16
    Thanks
    0
    Thanked 24 Times in 14 Posts
    If you want to use the CMIX dictionary then you need to use the CMIX preprocessor. The NNCP preprocessor is different because it produces output with an alphabet of more than 256 symbols (the first number on the "preprocess" command line) and because it builds its own dictionary based on the file content (so your command overwrites english.dic instead of using it!).

    In my tests, the NNCP preprocessor gave significantly better results than the CMIX preprocessor with the LSTM or Transformer models. It comes from the fact that each word (or subword) is stored as a single symbol in the NNCP case. In the CMIX case, careful tuning of the word-list ordering and of the byte encoding is needed to achieve good performance. The downside of the NNCP preprocessor is that the compressor must accept large alphabets in order to use it efficiently (hence a byte-based compressor won't give good results).

  39. Thanks (2):

    byronknoll (21st April 2019),Darek (21st April 2019)

  40. #28
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    @Darek: Nope.
    The nncp preprocessor is used like this:
    preprocess c _tmp_word.txt enwik8 out.bin 16384 64
    where "out.bin" is the result of enwik8 preprocessing (2 bytes per symbol when the alphabet size is 16384),
    and the generated dictionary is written to _tmp_word.txt.
    16384 here is the target dictionary size... but the final dictionary size can somehow end up different;
    in particular, for enwik8 it ends up as 16387 (that's why -n_symb 16388 in the nncp options).

    As to the algorithm, I think it iteratively finds the string with the most occurrences and assigns it to a new symbol.
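    If that guess is right, the core loop would look roughly like this sketch. It's an assumption, not taken from preprocess.c; for brevity it merges the most frequent adjacent pair of symbols, BPE-style, rather than arbitrary strings, and caps the alphabet at 512 so a flat pair-count table suffices:
    Code:
    #include <string.h>

    enum { MAX_SYMB = 512 };
    static int counts[MAX_SYMB][MAX_SYMB];

    /* Greedy dictionary construction: repeatedly find the most frequent
       adjacent pair of symbols in s[0..n) and merge it into a fresh
       symbol until the alphabet is full or no pair repeats.
       Returns the new length of s; *n_symb is the final alphabet size. */
    int build_dict(int *s, int n, int *n_symb)
    {
        while (*n_symb < MAX_SYMB) {
            memset(counts, 0, sizeof counts);
            for (int i = 0; i + 1 < n; i++)
                counts[s[i]][s[i + 1]]++;
            int best = 1, ba = 0, bb = 0;
            for (int a = 0; a < *n_symb; a++)
                for (int b = 0; b < *n_symb; b++)
                    if (counts[a][b] > best) {
                        best = counts[a][b]; ba = a; bb = b;
                    }
            if (best < 2)
                break;                 /* no pair occurs twice */
            int j = 0;                 /* rewrite s, merging (ba,bb) */
            for (int i = 0; i < n; i++) {
                if (i + 1 < n && s[i] == ba && s[i + 1] == bb) {
                    s[j++] = *n_symb;  /* new symbol replaces the pair */
                    i++;
                } else {
                    s[j++] = s[i];
                }
            }
            n = j;
            (*n_symb)++;
        }
        return n;
    }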

    Most likely you can apply DRT / cmix -s first, then nncp preprocess.

  41. Thanks:

    Darek (21st April 2019)

  42. #29
    Member
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    962
    Thanks
    573
    Thanked 397 Times in 295 Posts
    Ok, I've typed the same command as you, and there is no preprocessing output file...
    I've tried it on enwik6 and enwik8, and I get the same message: "/tmp/word1.txt: No such file or directory".
    The "_tmp_word_6.txt" file is generated - properly, I think - but the preprocessed file is missing. I've also tried different names.

    Example command:

    D:\Core\0000WORK.111\!00001compression\TESTSCORES\NNCP>preprocess.exe c _tmp_word_6.txt Enw/enwik6 enwik6.nnc 16384 64
    input: 1000000 bytes
    after case/space preprocessing: 1107308 symbols
    822 461688 445950
    Number of words=822 Final length=461688
    /tmp/word1.txt: No such file or directory

  43. #30
    Administrator Shelwien
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,419
    Thanks
    222
    Thanked 1,049 Times in 563 Posts
    Yes, you have to use my patched version attached here: https://encode.su/threads/3094-NNCP-...ll=1#post59896

  44. Thanks:

    Darek (21st April 2019)
