Ok, here is a funny thing I have been thinking about (the "worth 1B$" is a joke guys!).
We know that it is not possible to compress 2 bits into 1 or zero (!) bits in a static memory system; it just does not make sense. But we can do it when transferring data over the network, with a lossless compression algorithm!
Here is the trick.
First you have to remember that any common network transfer over TCP/IP happens after the two systems have opened a connection, and ends with a connection closure. We will come back to this.
Now, if you want to send 2 bits to your receiver while putting just 1 bit (or zero bits) on the network, you can 'encode' the extra bit into another dimension of information, which is... time.
Let's assume that sender and receiver both know the algorithm. They can agree on the following, where t is a specific 'blank' of time, i.e. a deliberate lag in the connection. An example of t: if your usual latency to the receiver is 30 ms, you can set t to 60 ms.
Step 1: connection opens
Step 2: send the bit (according to the following example schema)
00 => 0 + t (1 bit is sent)
01 => t + 1 (1 bit is sent)
10 => 1 + t (1 bit is sent)
11 => t + t (0 bits are sent - t + 0 would also work here, in which case 1 bit is sent)
Step 3: connection closes.
By analyzing the time between events, the receiver can properly rebuild the 2 bits of information, while only 1 bit (or zero bits) ever showed up in the network statistics!
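Here is a rough sketch of the idea in Python. The actual sockets are left out, and the names and values (`T`, `THRESHOLD`, the encode/decode shape) are just my illustration of the schema above, not a real protocol:

```python
import time

# Assumed parameters -- invented for this sketch.
T = 0.060          # the agreed 'blank' of time (60 ms), ~2x the usual latency
THRESHOLD = 0.045  # a gap longer than this counts as one deliberate blank t

def encode(two_bits):
    """Return (delay_before_sending, bit_or_None, delay_before_closing)."""
    return {
        "00": (0.0, "0", T),    # 0 + t : send 0 right away, wait t, close
        "01": (T,   "1", 0.0),  # t + 1 : wait t, send 1, close
        "10": (0.0, "1", T),    # 1 + t : send 1 right away, wait t, close
        "11": (T,   None, T),   # t + t : send nothing, just two blanks
    }[two_bits]

def decode(bit, gap_before_bit, gap_before_close):
    """Rebuild the 2 bits from the single observed bit (or None) and the gaps."""
    if bit is None:
        return "11"
    if gap_before_bit > THRESHOLD:       # the blank came first
        return "01"
    return "00" if bit == "0" else "10"  # bit first, then the blank before close

# Local simulation: the 'network' is replaced by sleeps on one machine.
for word in ["00", "01", "10", "11"]:
    pre, bit, post = encode(word)
    t0 = time.monotonic()
    time.sleep(pre)
    t_bit = time.monotonic()             # moment the bit would hit the wire
    time.sleep(post)
    t_close = time.monotonic()           # moment the connection would close
    print(word, "->", decode(bit, t_bit - t0, t_close - t_bit))
```

On a real link you would of course have to pick THRESHOLD with enough margin over the natural jitter of the connection, otherwise the receiver mistakes ordinary lag for a deliberate blank.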
Well, it's not really compression: you consume 'something' (time) to save something else (data), so this does not break the pigeonhole principle. However, from a pure bandwidth perspective, it does save something.
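To put a number on that trade: the timing gap is really just another symbol, so the bit hasn't disappeared, it has moved into a side channel. A quick bit of arithmetic (the delay counts here are pure assumptions about what a receiver could reliably tell apart):

```python
import math

# With k distinguishable delay values per gap, the timing side channel
# carries log2(k) extra bits per gap -- assuming the receiver can always
# tell the k delays apart despite network jitter.
for k in (2, 4, 1024):
    print(f"{k} distinguishable delays -> {math.log2(k):.0f} extra bits per gap")
```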
Let's go a bit further.
We could imagine a system with hyper-precise clocks, for example quantum clocks able to produce billions of ticks per second or more. You could transfer large numbers by starting the receiving clock and counting the ticks that elapse before a very short impulse (sent by the emitter) arrives.
It's as if you had a computer counting through all the possible numbers, and the sender telling the receiving computer when to stop. The last number the receiving computer reached is the data that was meant to be transferred.
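A toy version of that tick-counting idea, again with made-up names and a 1 ms 'tick' standing in for the quantum clock (a real link would be limited by clock precision and jitter):

```python
import time

TICK = 0.001  # one 'tick' = 1 ms; a quantum clock would tick far faster

def send(value):
    """The sender stays silent for `value` ticks, then fires its impulse."""
    time.sleep(value * TICK)

def receive(started_at):
    """The receiver counts how many ticks passed before the impulse arrived."""
    elapsed = time.monotonic() - started_at
    return round(elapsed / TICK)

start = time.monotonic()   # both sides start their clocks together
send(42)                   # transmit the number 42 without sending 42 bits
print(receive(start))      # ~42, give or take the jitter of time.sleep
```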