I recently mentioned the possibility of fractal Haar wavelets in another thread and have finally decided to implement it: https://github.com/JarekDuda/FractalWavelets

While the Fourier transform requires at least O(n*log(n)) time, the cost of this one grows linearly: a block of n = 2^depth pixels is described by exactly n coefficients, each obtained with a single addition/subtraction at its tree node, so we can cheaply use blocks as large as we want.

Another advantage is that the boundaries between regions (which are fractals - "fraxels"?) are more complex, so blocking artifacts should be less visually disturbing.

Also, a block has 6 neighbors here, so maybe correlations between blocks could be exploited in a better way.

Here are examples of such blocking (values -> Haar coefficients, then trim the high frequencies by setting them to zero, then coefficients -> values; the rest of the picture remains unchanged):

The transform loop is a simple recurrence:

void find_coef(){                      // values -> coefficients
    int lt = fn_cf(center, 2, depth-2), rt = fn_cf(center + v[depth-1], 3, depth-2);
    *(coef+1) = rt - lt;  *coef = rt + lt;
}

int fn_cf(coord cn, int ps, int dp){   // recurrence: stores difference at coef[ps], returns sum
    int lt, rt;
    if(dp){ lt = fn_cf(cn, ps<<1, dp-1);  rt = fn_cf(cn + v[dp], (ps<<1)+1, dp-1); }
    else  { lt = getpixel(cn.x, cn.y);    rt = getpixel(cn.x + v[0].x, cn.y + v[0].y); }
    *(coef+ps) = rt - lt;  return rt + lt;
}

void find_val(){ fn_vl(*coef, 1, center, depth-1); }   // coefficients -> values

void fn_vl(int sum, int ps, coord cn, int dp){         // recurrence: splits sum into the two halves
    int dif = *(coef+ps), lt = (sum - dif)>>1, rt = (sum + dif)>>1;
    if(dp){ fn_vl(lt, ps<<1, cn, dp-1);  fn_vl(rt, (ps<<1)+1, cn + v[dp], dp-1); }
    else  { setpixel(cn.x, cn.y, lt);    setpixel(cn.x + v[0].x, cn.y + v[0].y, rt); }
}

Where "center" is the starting point, the region has 2^depth pixels, v[] is the chosen set of shifts, and "*coef" is a hash-like array of coefficients (node ps has children 2*ps and 2*ps+1).
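For intuition, here is a sketch (my own addition, not from the repository) of one natural choice of shifts: successive powers of the Gaussian integer 1+i, i.e. v[k+1] = (1+i)*v[k]. Subset sums of these 2D shifts give 2^depth distinct lattice points tracing out the plain twindragon; the tame twindragon would need different arithmetic.

```c
#include <assert.h>

typedef struct { int x, y; } coord;

/* Fill v[0..depth-1] with powers of b = 1+i:
   v[0] = 1, v[k+1] = (1+i)*v[k], i.e. (x,y) -> (x-y, x+y).
   Sums of subsets of {v[0],...,v[depth-1]} are 2^depth distinct
   pixels forming a twindragon-shaped block. */
static void twindragon_shifts(coord *v, int depth){
    int k;
    v[0].x = 1; v[0].y = 0;             /* b^0 = 1       */
    for(k = 1; k < depth; k++){
        v[k].x = v[k-1].x - v[k-1].y;   /* Re((1+i)*z)   */
        v[k].y = v[k-1].x + v[k-1].y;   /* Im((1+i)*z)   */
    }
}
```

The first few shifts are (1,0), (1,1), (0,2), (-2,2), (-4,0), ... - each level rotates by 45 degrees and scales by sqrt(2), which is what makes the block boundary fractal.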

The zeroth coefficient is the sum of all values; each of the others is the difference between the sums over the two halves of some sub-block (in the hash-like tree) - each coefficient distributes its sub-block's sum between the two halves.
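To make this concrete, here is a minimal self-contained 1D restatement of the recurrence (my own sketch, not the repository code): with shifts v[k] = 2^k it reduces to the standard unnormalized Haar transform on a line of 2^depth values, with the same hash-like coefficient indexing.

```c
#include <assert.h>

#define DEPTH 3
#define N (1 << DEPTH)       /* 2^depth values in the block   */

static int val[N];           /* the "pixels" of the 1D block  */
static int coef[N];          /* hash-like coefficient tree    */
static int v[DEPTH];         /* shifts: v[k] = 2^k in 1D      */

static void init_shifts(void){
    int k;
    for(k = 0; k < DEPTH; k++) v[k] = 1 << k;
}

/* values -> coefficients: stores difference at coef[ps], returns sum */
static int fn_cf(int cn, int ps, int dp){
    int lt, rt;
    if(dp){ lt = fn_cf(cn, ps<<1, dp-1);  rt = fn_cf(cn + v[dp], (ps<<1)+1, dp-1); }
    else  { lt = val[cn];                 rt = val[cn + v[0]]; }
    coef[ps] = rt - lt;
    return rt + lt;
}

static void find_coef(void){   /* needs DEPTH >= 2 */
    int lt = fn_cf(0, 2, DEPTH-2), rt = fn_cf(v[DEPTH-1], 3, DEPTH-2);
    coef[1] = rt - lt;  coef[0] = rt + lt;
}

/* coefficients -> values: splits sum into the two halves */
static void fn_vl(int sum, int ps, int cn, int dp){
    int dif = coef[ps], lt = (sum - dif) >> 1, rt = (sum + dif) >> 1;
    if(dp){ fn_vl(lt, ps<<1, cn, dp-1);  fn_vl(rt, (ps<<1)+1, cn + v[dp], dp-1); }
    else  { val[cn] = lt;                val[cn + v[0]] = rt; }
}

static void find_val(void){ fn_vl(coef[0], 1, 0, DEPTH-1); }
```

Running find_coef and then find_val reconstructs the input exactly, and coef[0] comes out as the total sum - the 2D fractal version only changes how the two halves of each sub-block are placed.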

Turning this into a lossy image compressor still requires a lot of work:

- cover all positions (blocks can have various sizes, down to 1 pixel; the dimensions of the picture can determine the block structure),

- choose a quantizer,

- maybe add some intra-prediction to use correlations between blocks,

- add some entropy coder.
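As a hedged illustration of the quantizer step, here is a minimal uniform (mid-tread) quantizer for the difference coefficients - the names and the rounding choice are my own, not from the repository; one would likely keep coef[0] (the block sum) at full precision or predict it separately.

```c
#include <assert.h>

/* Uniform quantizer with step q: round to the nearest multiple of q.
   Reconstruction error is at most q/2, and coarser q means fewer
   distinct symbols for the entropy coder. */
static int quantize(int c, int q){
    return (c >= 0 ? c + q/2 : c - q/2) / q;   /* round half away from zero */
}

static int dequantize(int qc, int q){
    return qc * q;
}
```

Zeroing the high-frequency coefficients, as in the blocking examples above, is just the limiting case of a very coarse step for the deepest tree levels.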

What do you think of such transform for lossy image compression?

What other interesting sets of shifts (v[]) could we use?

Have you heard of similar approaches?

Update: I have just found a 15-year-old paper on this subject: http://www.fondationricard.com/docum...Pdelft99_6.pdf

The author was probably not aware of the tame twindragon case, which seems the most appropriate here (?)