# Thread: Data Compression Diamond Algorithm

1. ## Data Compression Diamond Algorithm

Don't worry, I'm in the process of developing a new algorithm. About 40% of the theory is done, and the rest can be completed quickly. An MP4 file can be reduced in size by 40% to 95%. The advantage is that, similarly, you can compress the file again and again.

2. No.

3. ## Data Compression Diamond Algorithm

Data can be compressed again and again, depending on the data algorithm used. The Diamond Data Compression Algorithm can compress any data (compressed or not) by 40% to 99%.
Everyone is welcome to comment on this.

4. No.

5. ## Thanks:

snowcat (31st March 2020)

6. Do you mean by "no" that it can't be done?

7. Yes.

8. Of course it can, since the string TL can be found from TL: type T = type, L = length, N = TL. I think I can; the proof will follow in a few months.

9. Originally Posted by uhost
Of course it can, since the string TL can be found from TL: type T = type, L = length, N = TL. I think I can; the proof will follow in a few months.
You are not the first one to come here not understanding simple math and/or compression theory while thinking you have the holy grail.
But hey, let us see the wonder when you have a working compressor and decompressor.

10. ## Thanks:

schnaader (3rd March 2020)

11. Thanks, I appreciate your valuable comment. But just because everyone is not on the same path does not mean I can't or won't do it. I hope I can, though, by the end of this year. Whether a thing succeeds or not is a matter of intellectual thought.

And what I said in the Diamond Algorithm is actually the shape of the numbers when the string is close. The maximum length of a word is half the card.

12. Kolmogorov complexity is a proven fact about the compressibility of a piece of information. There is a minimum amount of information necessary in order to be able to restore the compressed information. Sometimes the lower bound of the complexity is a short formula or definition of the data. For example, the description "the first million digits of pi" is much shorter than the actual first million digits. In the case of truly random data, the Kolmogorov complexity is equal to the size of the random data.

Therefore your statement that every piece of information can be compressed to half its length, and that this can be repeated, must be false. It contradicts the known facts surrounding Kolmogorov complexity.
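
The counting (pigeonhole) argument behind this is easy to verify directly; a minimal sketch in Python:

```python
# There are 2^n bit strings of length n, but only
# 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 bit strings strictly shorter
# than n. So no lossless scheme can map every n-bit input to a
# shorter output and remain invertible: at least one input must
# not shrink.
n = 8
inputs = 2 ** n                          # 256 strings of length 8
shorter = sum(2 ** k for k in range(n))  # 1 + 2 + ... + 128 = 255
print(inputs, shorter)                   # 256 255: always one short
```

Repeating the "compression" only makes the deficit worse, which is why "compress again and again" cannot work on arbitrary data.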

13. The system I'm developing has completely eliminated decimals.
Each word has a hash number; it is repeated and a different position is taken.
This position is less than the string length; that is, the maximum length from the input is 10% to 95%. It can be compressed again and again.

I have tested it: 32 bits can be made into 24 bits. When the input bits increase, more compression is achieved: 64 bits change to 34 bits...

14. Originally Posted by uhost
it can be compressed again and again
only with recompression...

15. Originally Posted by uhost
The system I'm developing has completely eliminated decimals.
Each word has a hash number; it is repeated and a different position is taken.
This position is less than the string length; that is, the maximum length from the input is 10% to 95%. It can be compressed again and again.

I have tested it: 32 bits can be made into 24 bits. When the input bits increase, more compression is achieved: 64 bits change to 34 bits...
If you use a 64-bit or 32-bit hash function as the base of your compression algorithm... Oh boy... How do I explain this...

Let's take a simple example: the well-known CRC-32. It takes any N-byte input and hashes it to 32 bits. If you hash 64 bits to 32 bits using CRC, you go from 2^64 inputs to 2^32 outputs. After 'compression', for every 32-bit output there are 2^32 possible 64-bit inputs that lead to the same hash. If I read you correctly, you accounted for hash collisions, using a diamond-shaped collection of hash values that should ensure the correctness of reverting to 64 bits again. But for every N discarded bits you generate 2^N hash-revert options you don't account for. And that's if we are sure that CRC-32 is a 'perfect' hash function. CRC-32 is not perfect, and because of that the loss from N discarded bits is even a bit higher than 2^N.
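
That loss can be demonstrated at a small scale; a sketch using Python's `zlib.crc32`, truncated to 8 bits purely so the full input space is enumerable:

```python
import zlib
from collections import defaultdict

# Scaled-down version of the argument: hash every 16-bit input through
# an 8-bit hash (the low byte of CRC-32). 2^16 inputs fall into at most
# 2^8 buckets, so by the pigeonhole principle at least one hash value
# has 2^8 = 256 or more preimages. The discarded bits are simply gone;
# no scheme can tell those preimages apart from the hash alone.
buckets = defaultdict(list)
for n in range(2 ** 16):
    h = zlib.crc32(n.to_bytes(2, "big")) & 0xFF  # keep only 8 bits
    buckets[h].append(n)

print(len(buckets))                           # distinct hashes (<= 256)
print(max(len(v) for v in buckets.values()))  # most crowded bucket (>= 256)
```

The same arithmetic applies unchanged to 64-bit inputs and a 32-bit hash; only the exponents grow.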

16. ## Thanks:

uhost (4th March 2020)

17. :

18. Can you explain your algorithm? Is it like this:
A = b - c
Y = A / thebigest

19. Yes, sure, but not this time... I am building up my dream. I know a 1% failure makes it 99% fail, so I cannot reveal it until I have finished.

20. Originally Posted by uhost
I am building up my dream.
Dreaming is often high on entropy and very very lossy.
Good luck!

21. ## Thanks (2):

snowcat (31st March 2020),uhost (6th March 2020)

22. Thanks

23. Originally Posted by pacalovasjurijus
Can you explain your algorithm? Is it like this:
A = b - c
Y = A / thebigest
position   value   profit
========   ======  ======
1) [1]     100000  5 bit
2) [10]    100001  4 bit
3) [11]    100010  4 bit
4) [100]   100011  3 bit
5) [101]   100100  3 bit
6) [110]   100101  3 bit
7) [111]   100110  3 bit
8) [1000]  100111  2 bit
9) [1001]  101000  2 bit

This is an example, but not the same thing. I think this idea can compress data.

24. Originally Posted by uhost
position   value   profit
========   ======  ======
1) [1]     100000  5 bit
2) [10]    100001  4 bit
3) [11]    100010  4 bit
4) [100]   100011  3 bit
5) [101]   100100  3 bit
6) [110]   100101  3 bit
7) [111]   100110  3 bit
8) [1000]  100111  2 bit
9) [1001]  101000  2 bit
Won't work. Try to read and understand https://en.wikipedia.org/wiki/Prefix_code

Using prefix codes, a message can be transmitted as a sequence of concatenated code words, without any out-of-band markers or (alternatively) special markers between words to frame the words in the message. The recipient can decode the message unambiguously, by repeatedly finding and removing sequences that form valid code words. This is not generally possible with codes that lack the prefix property, for example {0, 1, 10, 11}: a receiver reading a "1" at the start of a code word would not know whether that was the complete code word "1", or merely the prefix of the code word "10" or "11"; so the string "10" could be interpreted either as a single codeword or as the concatenation of the words "1" then "0".
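
The ambiguity described in that passage can be shown in a few lines; a minimal sketch that counts the ways a bit string can be split into codewords:

```python
def parses(s, codewords):
    """Count the distinct ways s can be split into codewords."""
    if not s:
        return 1
    return sum(parses(s[len(w):], codewords)
               for w in codewords if s.startswith(w))

non_prefix = ["0", "1", "10", "11"]      # "1" is a prefix of "10" and "11"
prefix     = ["0", "10", "110", "111"]   # no codeword is a prefix of another

print(parses("10", non_prefix))  # 2 parses: "10" or "1" + "0" -> ambiguous
print(parses("10", prefix))      # 1 parse  -> uniquely decodable
```

A variable-length code table like the one above is only usable if a decoder can tell where each codeword ends, which is exactly what the prefix property guarantees.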

25. ## Thanks:

uhost (13th March 2020)

26. Yes, I know that, but I have overcome that problem.

I can completely retrieve the original binary code from the compressed code, but I still face a problem in the compression algorithm: some positions exceed the original code by up to 3 bits.

27. OK genius, where is the binary to test?

28. Originally Posted by uhost
Yes, I know that, but I have overcome that problem.

I can completely retrieve the original binary code from the compressed code, but I still face a problem in the compression algorithm: some positions exceed the original code by up to 3 bits.
So it doesn't work?

29. Originally Posted by uhost
Of course it can, since the string TL can be found from TL, type T = type, L = length N = TL I think I can , the proof after in a few months.
Is your algorithm somehow related to the golden ratio?
I have an algorithm that can theoretically compress any input stream, and this algorithm has some relation to the so-called god number (1.618...).
https://en.wikipedia.org/wiki/Golden_ratio

But this requires verification. So far, everything is only in the form of equations.
I derived these equations about five years ago and have tried to better understand their properties.

PS.
I write through an online translator, so my text may not look very correct.

30. Hi, I have followed a mixed mode [dictionary & math] with my own calculation. I have completed 50%, but there is still a problem I am trying to fix [some compressed data exceeds the original by 3 bits].

And I can find for every positive value an equivalent other positive value, like -2 and +2
[23 = 50 in 3-bit calculation] <= when I have finished [100%] my theorem, I will explain.

31. When I have completed the theory part, I will send a sample program for your advice and testing.

32. You are brilliant! So I tell you: Fibonacci numbers are related to my trick.

33. Originally Posted by uhost
You are brilliant! So I tell you: Fibonacci numbers are related to my trick.
The ratio F[n+1]/F[n] does tend to the golden ratio as n -> infinity (where F[n] are the Fibonacci numbers).
But my algorithm has some relationship with the golden ratio, not with the Fibonacci numbers.
I will also try to implement my algorithm.
So, a competition?

Where are you from? Besides the fact that Earth is the third planet from the sun? ))

PS: I write through an online translator, so my text may not look very correct.

34. Originally Posted by Romul
The ratio F[n+1]/F[n] does tend to the golden ratio as n -> infinity (where F[n] are the Fibonacci numbers).
But my algorithm has some relationship with the golden ratio, not with the Fibonacci numbers.
I will also try to implement my algorithm.
So, a competition?

Where are you from? Besides the fact that Earth is the third planet from the sun? ))

PS: I write through an online translator, so my text may not look very correct.
Is your algorithm implemented at a practical level? [in compression]
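
The one verifiable claim in this exchange, that the ratio of consecutive Fibonacci numbers converges to the golden ratio, is easy to check numerically:

```python
# Numeric check: F[n+1]/F[n] converges to the golden ratio.
phi = (1 + 5 ** 0.5) / 2   # 1.6180339887...

a, b = 1, 1                # F[1], F[2]
for _ in range(40):
    a, b = b, a + b        # advance one Fibonacci step

print(b / a)                     # ratio of consecutive Fibonacci numbers
print(abs(b / a - phi) < 1e-12)  # True: converged to double precision
```

Note that this convergence says nothing about compression: the golden ratio, like any other constant, cannot circumvent the counting argument given earlier in the thread.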
