So, the compression algorithm I've been working on doesn't require ANY repetition to be effective. Right now it crunches each 1,024-byte block down to a 20-byte representation, which is really squeezing everything out of it; I could raise the representation to 512 bytes and still get great performance. On a 1 GB "random data" file it compressed at 549 Mbps, finishing in 18.7 seconds with 98.24% compression, and the compressed file is now ~18.5 KB. The decompressor is already coded, but I'm still working out bugs: I lose a few bytes on decompression, so it's close, but not 100%.
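To show the kind of block-by-block round-trip check I mean, here's a minimal sketch. It's not my actual code: compress_block, decompress_block, and the file name are just stand-ins for the real encoder and decoder.

```python
import sys

BLOCK_SIZE = 1024  # size of each input block

def compress_block(block: bytes) -> bytes:
    """Stand-in for the real 1,024-byte -> 20-byte encoder."""
    raise NotImplementedError("plug in the actual encoder here")

def decompress_block(code: bytes) -> bytes:
    """Stand-in for the real 20-byte -> 1,024-byte decoder."""
    raise NotImplementedError("plug in the actual decoder here")

def roundtrip_check(path: str) -> None:
    """Run every block through compress+decompress and report the first mismatch."""
    with open(path, "rb") as f:
        data = f.read()
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        restored = decompress_block(compress_block(block))
        if restored != block:
            changed = sum(a != b for a, b in zip(block, restored))
            print(f"mismatch at offset {offset}: "
                  f"length {len(block)} -> {len(restored)}, "
                  f"{changed} differing bytes in the overlap")
            return
    print("round trip is byte-for-byte identical")

if __name__ == "__main__":
    roundtrip_check(sys.argv[1] if len(sys.argv) > 1 else "random_1gb.bin")
```

A check like this pins down whether the lost bytes come from one particular block or are spread across the file.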
I'll say in advance that I understand any doubt; it's healthy to be skeptical, especially since this would break some rules of compression.
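To spell out the rule I'm referring to, here's the usual counting (pigeonhole) argument run against my own block sizes; it's just arithmetic, not a claim about my implementation:

```python
# Back-of-the-envelope counting check: how many distinct 1,024-byte blocks
# exist versus how many distinct 20-byte representations can describe them.
inputs = 256 ** 1024   # 2^8192 possible 1,024-byte blocks
codes  = 256 ** 20     # 2^160 possible 20-byte representations
ratio  = inputs // codes
print(f"blocks that must share each 20-byte code on average: about 2^{ratio.bit_length() - 1}")
```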
All this to ask: are there any current compression algorithms that don't rely on repetition or patterns?