Hi, 1st time poster here.
I am interested in whether noisy channels are ever a consideration in the design of compression algorithms, and if so, in what applications?
For example, this recent paper at ICML considers a machine learning based compressor for compressing data that will be stored in a noisy format, modeled as a binary symmetric channel (BSC).
Obviously a BSC is not that realistic, but they show that learning a single compressor that handles both the compression and the BSC can beat a standard pipeline of compressing first (e.g. with WebP) and then applying an error-correcting code like LDPC.
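For anyone unfamiliar with the channel model: a BSC just flips each stored bit independently with some probability p. Here's a minimal sketch (my own toy code, not from the paper; the `bsc` helper name is made up) of what that noise looks like:

```python
import numpy as np

def bsc(bits, p, rng=None):
    """Pass a bit array through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random(bits.shape) < p
    return np.bitwise_xor(bits, flips.astype(bits.dtype))

# toy example: store 1000 random bits with a 10% flip probability
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000, dtype=np.uint8)
noisy = bsc(data, p=0.1, rng=rng)
print("empirical flip rate:", np.mean(data != noisy))
```

The interesting part to me is that the compressed bitstream has to survive this kind of corruption, which is exactly where you'd normally reach for a separate error-correcting code.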
I doubt their method is that practical (neural nets have a high computational footprint), but I thought it was interesting nonetheless.