From the looks of it, that calculation is the squared error weighted by a fixed per-channel blend of the RGB components; since different colors contribute differently to perceived luminance, the weighting appears to account for that. However, those fixed weights are tuned for human vision on natural image sets. Computer-generated images can have quite different luminance correlations depending on how complex the shaders were on the system that rendered them. Compare a screenshot of Super Mario 64 with a photo of a real castle, for example.
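For reference, a per-pixel metric of that kind usually looks something like the sketch below. I'm guessing at Rec. 601 luma-style weights since I can't see your exact code; your constants may differ.
Code:
#include <stdint.h>

/* Sketch of a luminance-weighted squared error for one RGB pixel.
   The 0.299/0.587/0.114 weights are the Rec. 601 luma coefficients;
   the original code may use different constants. */
static double weightedSqErr(const uint8_t *o, const uint8_t *c)
{
    double dr = (double)o[0] - c[0];
    double dg = (double)o[1] - c[1];
    double db = (double)o[2] - c[2];
    return 0.299 * dr * dr + 0.587 * dg * dg + 0.114 * db * db;
}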
Since you want to know "exactly" how much one image differs from another, you can track the difference on a linear scale instead of the weighted metric you're currently using.
It's only a minor change:
Code:
#include <stdint.h>
#include <stdlib.h>

// Linear (absolute) error over all color channels, no per-channel bias.
// abs() is needed so positive and negative differences don't cancel out.
int getErr(const uint8_t *o, const uint8_t *c)
{
    int dr = o[0] - c[0];
    int dg = o[1] - c[1];
    int db = o[2] - c[2];
    return abs(dr) + abs(dg) + abs(db);
}

// Sum the per-pixel error over a 4x4 block of tightly packed RGB pixels.
int blockError(const uint8_t *original, const uint8_t *compressed)
{
    int error = 0;
    for (int i = 0; i < 16; ++i) // iterate all pixels in the 4x4 block
        error += getErr(original + i*3, compressed + i*3);
    return error;
}
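If you want a single number for a whole image rather than one 4x4 block, you can keep accumulating the same per-pixel error into a wider type. A rough sketch, assuming a tightly packed RGB buffer whose dimensions you pass in (the function name and parameters are placeholders):
Code:
// Sum the linear error over an entire tightly packed RGB image.
// width and height are the image dimensions in pixels.
long long imageError(const uint8_t *original, const uint8_t *compressed,
                     int width, int height)
{
    long long total = 0;
    for (int i = 0; i < width * height; ++i)
        total += getErr(original + i*3, compressed + i*3);
    return total;
}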
The lower the linear difference, the easier the block will be for an algorithm to compress, since it inherently has lower complexity. Hopefully this helps.