Cameras and displays in mobile phones, autonomous vehicles, PC monitors, and TVs continue to increase their native resolution to 4K by 2K and beyond, while their high-dynamic-range formats demand greater bit depth for the underlying color component signals. Consequently, uncompressed pixel amplitude processing becomes costly not only when transmitting over cable or wireless communication channels, but also across on-chip image processing pipelines that access external memory units. In 2016 we introduced a low-cost, real-time, visually lossless color image compression concept inspired by structure tensor analysis, which promises highly adaptive and robust compression performance across a substantial range of compression ratios (between 1x and 3x) without significantly compromising perceptual image quality. We also observed surprisingly strong perceptual color stability, despite processing each color component independently in RGB color space. To cover a wider range of compression ratios while preserving visually lossless image quality, we proposed a novel approach that converts image amplitudes into a pair of discrete structure and magnitude quantities on a pixel-by-pixel basis. Graceful degradation of image information is controlled by a single parameter that aims to define sparsity optimally as a function of image content. Furthermore, we applied error diffusion via a threshold matrix to optimally distribute the residual coding error. Strongly encouraged by these findings, we implemented a version that combines structurally similar elements across the RGB color components. As a result, we already achieve visually lossless compression at ratios above 4x on 8-bit gamma-precorrected color component signals, while analyzing only the 4 nearest neighbors of each pixel.
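The pixel-wise structure/magnitude split and the threshold-matrix error diffusion described above could be sketched roughly as follows. This is a minimal illustration only: the function names, the choice of a 4x4 Bayer matrix, and the exact definitions of "structure" (sign pattern of the 4 nearest-neighbor differences) and "magnitude" (mean absolute neighbor difference) are assumptions for the sketch, not the published algorithm.

```python
import numpy as np

# Hypothetical 4x4 ordered-dither threshold matrix, normalized to [0, 1);
# the actual threshold matrix of the algorithm is not specified here.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]], dtype=np.float64) / 16.0

def structure_magnitude(channel):
    """Split one color channel into per-pixel 'structure' (sign pattern
    of the differences to the 4 nearest neighbors) and 'magnitude'
    (mean absolute neighbor difference, a local-contrast measure)."""
    c = channel.astype(np.float64)
    p = np.pad(c, 1, mode="edge")            # replicate borders
    diffs = np.stack([p[:-2, 1:-1] - c,      # up
                      p[2:,  1:-1] - c,      # down
                      p[1:-1, :-2] - c,      # left
                      p[1:-1, 2:]  - c])     # right
    structure = (diffs > 0).astype(np.uint8) # one bit per neighbor
    magnitude = np.mean(np.abs(diffs), axis=0)
    return structure, magnitude

def quantize_with_dither(channel, step):
    """Coarsely quantize amplitudes; the residual quantization error is
    distributed spatially by comparing it against a tiled threshold
    matrix (ordered dithering), rather than by serial error feedback."""
    c = channel.astype(np.float64)
    h, w = c.shape
    t = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    base = np.floor(c / step)
    frac = c / step - base                   # residual in [0, 1)
    return ((base + (frac > t)) * step).clip(0, 255).astype(np.uint8)
```

Under this reading, the single sparsity parameter of the abstract would correspond to the quantization step (or an equivalent magnitude threshold), trading bit rate against visible degradation per image region.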
We believe we have identified a conceptual explanation for the algorithm's extraordinary perceptual color stability, which we present and discuss in detail. We also provide a detailed error distribution analysis across a variety of well-known full-reference metrics, which highlights the effectiveness of the new approach, identifies its current limitations with regard to high-quality color rendering, and illustrates algorithm-specific visual artifacts.