> I'd not say it like that, but basically durilca is
> seg_file+disasm32+ppmonstr.
> So its not simplified or anything.
Well, there was a misunderstanding.
I was only talking about durilca's preprocessor part. I mean seg_file seems to be a simplified version of durilca's preprocessor. So, we are talking about exactly the same thing
> Well, I uploaded some older durilca version which is better for
> such tests.
> http://shelwien.googlepages.com/durilca2_002a.rar
> Its the last version which still had the hidden "-l" switch for
> segmentation results dumping. (Usage: durilca e -t1 -l ccc.tar)
I think Shkarin doesn't like you
Because most of his works become public through you. Thanks a lot anyway. You and Shkarin are good guys for the compression world. Thanks a lot for your work.
Let's see the results with durilca's preprocessor + my ugly ROLZ compressor
Code:
BIT 0.2 (x64 version)
- Calgary Corpus (16 segments)
851,943 bytes (61.216 seconds)
- Q3DM11.BSP (12 segments)
1,786,701 bytes (452.277 seconds)
- Valley.cmb (3 segments)
9,277,375 bytes (1408.269 seconds)
- Valley.cmb (Direct Processing)
9,272,426 bytes (I was really bored while compressing at 12 Kb/sec :D)
I've just noticed I didn't give you information about the test file sizes. Here are the tested file sizes:
Code:
Calgary Corpus: 3,152,896 bytes
q3dm11.bsp: 8,490,236 bytes
Valley.cmb: 19,776,230 bytes
Given these results, I believe generic segmentation doesn't help my compressor. It seems I must write several specific models for specific file types
I have some ideas for my new compression scheme. I'll make these compressors:
- ROLZ+CM: My current implementation is only a draft version of it; I'm polishing it.
It will be used for generic types of data.
- BWT+State Map+ARI: Matt's work seems great for text compression. Thanks again.
- S+P transform with VLI on a semi-programmable structure: I believe this will rock most of the benchmarks!
One simple codec for images and audio files - it even handles 15-bit image structures!
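For anyone unfamiliar with the ROLZ idea behind the first item: instead of coding absolute match distances like plain LZ, the encoder keeps a small table of recent positions per context, and a match is coded as a slot index into that table plus a length. Here's a minimal round-trip sketch of that scheme (not BIT's actual code; the 1-byte context, table depth, and minimum match length are illustrative choices):

```python
# Minimal ROLZ sketch: match offsets are slot indices into a small
# per-context table of recent positions, not absolute distances.
# Context = previous byte; MIN_MATCH and TABLE_DEPTH are arbitrary here.

MIN_MATCH = 3
TABLE_DEPTH = 16

def rolz_parse(data: bytes):
    """Tokenize data into ('lit', byte) and ('match', slot, length) tokens."""
    tables = [[] for _ in range(256)]   # recent positions per 1-byte context
    tokens = []
    if not data:
        return tokens
    tokens.append(('lit', data[0]))     # first byte has no context
    i = 1
    while i < len(data):
        ctx = data[i - 1]
        best_len, best_slot = 0, -1
        for slot, pos in enumerate(tables[ctx]):  # search recent positions
            l = 0
            while i + l < len(data) and data[pos + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_slot = l, slot
        tables[ctx].insert(0, i)        # most-recent-first update
        del tables[ctx][TABLE_DEPTH:]
        if best_len >= MIN_MATCH:
            tokens.append(('match', best_slot, best_len))
            i += best_len               # skipped positions aren't indexed
        else:
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

def rolz_decode(tokens):
    """Rebuild the data, mirroring the encoder's table updates exactly."""
    tables = [[] for _ in range(256)]
    out = bytearray()
    for j, t in enumerate(tokens):
        if j == 0:
            out.append(t[1])
            continue
        i = len(out)
        ctx = out[i - 1]
        if t[0] == 'match':
            pos = tables[ctx][t[1]]     # resolve slot before inserting i
            tables[ctx].insert(0, i)
            del tables[ctx][TABLE_DEPTH:]
            for k in range(t[2]):       # byte-by-byte copy handles overlap
                out.append(out[pos + k])
        else:
            tables[ctx].insert(0, i)
            del tables[ctx][TABLE_DEPTH:]
            out.append(t[1])
    return bytes(out)
```

In a real ROLZ+CM codec the token stream would then go to a context-mixing entropy coder; the small slot alphabet (here 0..15) is what makes the offsets cheap to model compared with raw LZ distances.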