I have recently mentioned the possibility of fractal Haar wavelets in a different thread and have finally decided to implement it: https://github.com/JarekDuda/FractalWavelets
While the Fourier transform requires at least O(n*log(n)) time, the cost of this transform grows linearly, so we can cheaply use blocks as large as we want here.
Another advantage is that the boundaries between regions (which are fractals - "fraxels"?) are more complex, so blocking artifacts should be less visually disturbing.
Also, each block has 6 neighbors here, so maybe we could exploit correlations in a better way.
Here are examples of such blocking (values -> Haar coefficients, then trim the high frequencies by setting them to zero, then coefficients -> values; the rest of the picture remains unchanged):
Here "center" is the starting point, the region has 2^depth pixels, v[] is the chosen set of shifts, and "*coef" is a hash-like array of coefficients.
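To make the block structure concrete, here is a minimal sketch of how a fractal region can be generated from a center and a set of shifts v[]. The choice of base b = -1+i (the ordinary twindragon numeral system with digits {0,1}) is my assumption for illustration; the tame twindragon mentioned later uses a different complex base.

```python
def make_shifts(depth, base=complex(-1, 1)):
    """v[k] = base**k as a Gaussian integer (pixel shift).
    base = -1+i is an assumed example (twindragon numeral system)."""
    v, cur = [], complex(1, 0)
    for _ in range(depth):
        v.append(cur)
        cur *= base
    return v

def fractal_block(center, v):
    """All 2^depth pixels of the region: center plus every subset sum of
    the shifts; distinct because {0,1} digits in this base are a valid
    numeral system for Gaussian integers."""
    depth = len(v)
    pixels = []
    for mask in range(1 << depth):
        z = center
        for k in range(depth):
            if (mask >> k) & 1:
                z += v[k]
        pixels.append((int(z.real), int(z.imag)))
    return pixels
```

For example, `fractal_block(complex(0, 0), make_shifts(4))` yields 16 distinct pixel positions forming a small twindragon-shaped region.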
The zeroth coefficient is the sum of all values; the others are differences between the average values of the two parts of a block (in the hash-like tree) - each coefficient distributes the sum between the two sub-blocks.
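The coefficient tree above can be sketched in 1-D on a plain array of 2^depth values (in the fractal case the two halves of a block are its two sub-blocks); here each coefficient stores the difference of the two halves' sums, which is the difference of averages up to a scale factor:

```python
def haar_forward(values):
    """Returns (total_sum, coefs): coefs[0] is the difference between the
    sums of the two halves, followed recursively by the coefficients of
    each half. len(values) must be a power of two."""
    n = len(values)
    if n == 1:
        return values[0], []
    ls, lc = haar_forward(values[:n // 2])
    rs, rc = haar_forward(values[n // 2:])
    return ls + rs, [ls - rs] + lc + rc

def haar_inverse(total, coefs):
    """Each coefficient distributes the sum between the two sub-blocks."""
    if not coefs:
        return [total]
    d = coefs[0]
    half = (len(coefs) - 1) // 2
    left = haar_inverse((total + d) / 2, coefs[1:1 + half])
    right = haar_inverse((total - d) / 2, coefs[1 + half:])
    return left + right
```

Round-tripping `haar_forward` then `haar_inverse` recovers the original values exactly; trimming high frequencies would mean zeroing the coefficients deepest in the tree before inverting.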
Making this into a lossy image compressor would still take a lot of work:
- cover all positions (blocks can have various sizes down to 1 pixel; the dimensions of the picture can determine the block structure),
- choose a quantizer,
- maybe add some intra-prediction to exploit correlations,
- add an entropy coder.
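The quantizer step in the list above could be as simple as a uniform midtread quantizer on the coefficients (the step size and midtread choice are my assumptions, not something fixed by the transform):

```python
def quantize(coefs, step):
    """Uniform midtread quantizer: a larger step discards more precision,
    giving coarser reconstruction but cheaper entropy coding."""
    return [round(c / step) for c in coefs]

def dequantize(qcoefs, step):
    """Map quantization indices back to coefficient values."""
    return [q * step for q in qcoefs]
```

In practice one would likely use a larger step (or a deadzone) for the deeper, high-frequency coefficients than for the zeroth one.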
What do you think of such a transform for lossy image compression?
What other interesting sets of shifts (v[]) could we use?
Have you heard of similar approaches?
Update: I have just found a 15-year-old paper on this subject: http://www.fondationricard.com/docum...Pdelft99_6.pdf
The author was probably not aware of the tame twindragon case, which seems the most appropriate here (?)
Last edited by Jarek; 11th September 2014 at 20:47.
What do you think of such a transform for lossy image compression? What other interesting sets of shifts (v[]) could we use? Have you heard of similar approaches?
I think you should probably look into the research of Antonio Ortega. This looks like a special case of "wavelets on graphs". Here is the talk from ICIP 2013: http://sipi.usc.edu/~ortega/Papers/G...013-Ortega.pdf
Here is another reference you may find interesting:
Narang, S.K.; Yung-Hsuan Chao; Ortega, A., "Critically sampled graph-based wavelet transforms for image coding", Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2013 Asia-Pacific, 2013, pp. 1-4. DOI: 10.1109/APSIPA.2013.6694319
HTH, Thomas
Thanks, Thomas. I was working on the eigenspectrum of graphs for my last PhD from the perspective of MERW (maximal entropy random walk, simulator), where mainly the dominant eigenvector was interesting, as the other coordinates vanish exponentially (thermalization):
Indeed, while the Fourier transform can be seen as passing to the eigenspectrum of a graph that is a regular lattice, as in the sources you have provided, we can analogously define transforms for more general graphs - e.g. depending on the edges of the picture.
However, they still use standard box-based lattices.
Using instead shifts v[] from complex-base numeral systems like here (e.g. as additional edges), we could generalize their approach to nicer fractal-like blocks.
On the other hand, this graph-eigenspectrum approach could allow generalizing the presented Haar wavelets into some more sophisticated ones.
PS. We could also build a hybrid fravelet-Fourier system: e.g. divide the picture into such a regular lattice of fractal-like regions, then take e.g. an 8x8 block of such regions and perform e.g. a discrete cosine transform on their average values (zeroth coefficients). Thanks to this, boundary artifacts should be less visually disturbing.
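The second stage of that hybrid could be an ordinary DCT applied to the zeroth (average) coefficients of a row of regions; a minimal unnormalized DCT-II sketch (the length 8 and the lack of normalization are my assumptions for illustration):

```python
import math

def dct2(x):
    """Unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi*(n+0.5)*k/N).
    In the hybrid scheme, x would hold the average values (zeroth Haar
    coefficients) of e.g. 8 neighboring fractal regions."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
            for k in range(N)]
```

A flat row of averages then produces energy only in the DC term, so quantizing the remaining DCT coefficients coarsely would be cheap in smooth areas.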