

It is impossible to recover the actual reflectance that induces a given colour response: many spectra (called metamers) will integrate to the same response values. For some applications it suffices to recover a good single metamer (satisfying a criterion such as being the smoothest amongst all metamers). However, when the same surface is viewed under different lights, generating different RGBs, the corresponding reflectances recovered by Smoothest Reflectance estimation (SR) are not all the same. Indeed, there can be a large spectral variation. Recent work has demonstrated that more stable, illuminant-insensitive metamers can be produced by Colour Corrected Smoothest Reflectance estimation (CCSR), where camera RGBs are colour corrected to a canonical reference light with respect to which metamers are recovered. In this paper, we examine the relationship between the spectral sensitivities of the camera and both SR and CCSR metamer recovery. Empirically, the variation in recovered metamers for the worst camera using SR is found to be 2.5 times larger than that for the best camera using CCSR. We argue that the stability of metamer recovery in general (for either SR or CCSR) is linked to the extent to which accurate colour correction is possible.
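The smoothest-metamer idea underlying SR can be posed as an equality-constrained quadratic programme: minimise a second-difference smoothness penalty over the spectrum subject to reproducing the camera response. The sketch below is a minimal illustrative implementation (not the authors' code), with synthetic sensitivities and spectra standing in for real camera data.

```python
import numpy as np

def smoothest_metamer(S, rgb):
    """Recover the smoothest reflectance reproducing a camera response.

    S   : (n, 3) effective sensitivities (sensor x illuminant), n wavelengths.
    rgb : (3,) target camera response.
    Solves  min ||D r||^2  s.t.  S.T @ r = rgb  via its KKT system.
    """
    n = S.shape[0]
    # Second-difference operator: penalises curvature of the spectrum.
    D = np.diff(np.eye(n), n=2, axis=0)                 # (n-2, n)
    # KKT system: stationarity (2 D'D r + S lam = 0) plus the constraint.
    A = np.block([[2 * D.T @ D, S], [S.T, np.zeros((3, 3))]])
    b = np.concatenate([np.zeros(n), rgb])
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    return sol[:n]                                      # recovered reflectance

# Toy check: the recovered metamer reproduces the target response exactly.
rng = np.random.default_rng(0)
S = np.abs(rng.normal(size=(31, 3)))                    # 31 wavelength samples
r_true = 0.5 + 0.3 * np.sin(np.linspace(0, 3, 31))
rgb = S.T @ r_true
r_hat = smoothest_metamer(S, rgb)
```

Colour correcting `rgb` to a canonical reference light before calling such a solver is, in essence, what distinguishes CCSR from SR.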

The measurement of the diffuse skin reflectance spectrum has important applications but requires accurate and fast acquisition. In this study, we propose several methods for reconstructing the diffuse skin reflectance spectrum using several existing datasets. These methods can reconstruct the spectrum of an area by capturing images under only a few LED illuminations, instead of using comprehensive measurement systems. A comprehensive system is being built to collect a ground-truth dataset, which will also be used to test the performance of the proposed methods.
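A common baseline for this kind of spectral reconstruction is a linear mapping from multi-LED camera responses back to spectra, fitted on a training set by least squares. The sketch below illustrates that generic approach on synthetic data; it is an assumption-laden stand-in, not the specific methods proposed in the study.

```python
import numpy as np

def fit_reconstruction_matrix(C_train, R_train):
    """Fit a linear responses-to-spectra mapping by least squares.

    C_train : (m, k) camera responses under k LED/channel combinations.
    R_train : (m, n) measured reflectance spectra at n wavelengths.
    Returns W with shape (k, n) such that C @ W approximates spectra.
    """
    W, *_ = np.linalg.lstsq(C_train, R_train, rcond=None)
    return W

def reconstruct(C, W):
    return C @ W

# Toy data: k = 9 responses (3 camera channels x 3 LEDs), n = 31 wavelengths.
rng = np.random.default_rng(1)
R_train = np.clip(rng.normal(0.5, 0.15, size=(200, 31)), 0, 1)
M = np.abs(rng.normal(size=(31, 9)))      # combined LED/sensor responsivity
C_train = R_train @ M                     # simulated camera responses
W = fit_reconstruction_matrix(C_train, R_train)
R_hat = reconstruct(C_train, W)
```

Real skin spectra are far smoother than this random toy data, which is precisely why a handful of LED channels can recover them well in practice.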

This study discusses how scenes and the models' appearance, gender, and makeup status influence the preferred facial skin tone of Chinese models for Chinese observers. This research also explores how the models' own preferences differ from those of stranger observers. The results show that makeup status can affect the scope of the preference area. The preference centers exhibit a trend of higher lightness and smaller hue angles than the original skin tones. In all scenes except indoor scenes, a higher CCT correlates with a greater inclination angle of the ellipsoid projection on the a*-b* plane, a narrower ellipsoid range, and a smaller hue angle of the ellipsoid center. Apart from the in-lab scenario, higher scene color temperatures are associated with a larger chroma increment and a smaller hue increment in the preference centers relative to the original image skin tones. Also, the chroma of the preference centers of indoor and in-lab scenes is lower than that of the original images, distinguishing them from other scenes. The models' own preferences are influenced by their actual skin color to a greater extent.

Skin tone reproduction has long been a challenge in image processing due to illumination by multiple sources in real-world conditions. This paper describes an algorithm to achieve preferred skin tone reproduction. The work comprises two pivotal components: 1) a CCT-SPQ/D optimization model, developed via controlled experiments to reveal the mapping relationships between correlated color temperature (CCT), skin preference quality (SPQ), and chromatic adaptation degree (D); and 2) a novel white balance correction algorithm for skin regions under mixed illumination, which integrates local processing and spatial filtering with color-temperature-adaptive enhancement via the aforementioned model. Finally, a preference assessment experiment was conducted to demonstrate the superiority of the proposed algorithm.

With the rapid advancement of mobile imaging and the increasing demand for perceptually accurate white balance (WB) algorithms, the need for a comprehensive dataset providing perceptual acceptability assessments across diverse illumination conditions has arisen. To address this gap, we constructed the Multiple Illumination Scenarios (MIS) dataset, which spans both pure colors and complex objects under single and multiple illuminant conditions. Observer-based acceptability ratings were collected and analyzed across 3,465 trials, revealing heightened sensitivity to chromatic deviations in regions of low lightness and chroma. Additionally, spatial and illuminance factors were found to modulate color acceptability judgments in multi-illuminant scenarios. Based on these findings, we proposed two new metrics to improve the performance of current color difference models: one weighted by color appearance attributes and another that incorporates spatial and illuminance factors. Evaluation results demonstrated that our proposed metrics showed improved correlation with perceptual judgments across all tested color difference models. By incorporating more realistic datasets and integrating alternative WB error evaluation metrics, we aim to advance research into the prediction of WB error acceptability under complex lighting environments.
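The first proposed metric, weighting a colour difference by colour appearance attributes, can be illustrated schematically: deviations at low lightness and low chroma are amplified, mirroring the heightened sensitivity reported above. The weighting function and constants below are hypothetical, and plain CIELAB ΔE*ab stands in for a full colour-difference model such as CIEDE2000.

```python
import numpy as np

def weighted_delta_e(lab_ref, lab_test, k_l=0.01, k_c=0.01):
    """Hypothetical appearance-weighted colour difference.

    lab_ref, lab_test : CIELAB triples (L*, a*, b*).
    k_l, k_c          : illustrative weights (not fitted values).
    """
    lab_ref = np.asarray(lab_ref, float)
    lab_test = np.asarray(lab_test, float)
    de = np.linalg.norm(lab_test - lab_ref)          # plain Delta E*ab
    L = lab_ref[0]
    C = np.hypot(lab_ref[1], lab_ref[2])             # chroma of the reference
    # Hypothetical weight: grows as lightness and chroma decrease.
    w = 1.0 + k_l * (100.0 - L) + k_c * max(0.0, 50.0 - C)
    return w * de

# The same raw Delta E is penalised more at a dark, neutral reference colour
# than at a light, chromatic one.
dark_neutral = weighted_delta_e([20, 0, 0], [20, 3, 0])
light_chromatic = weighted_delta_e([80, 40, 0], [80, 43, 0])
```

The paper's actual metrics are fitted to the observer data; this sketch only shows the structural idea of attribute-dependent weighting.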

This article presents an enhanced mathematical framework that builds on the existing XCR model to more fully account for the increase in perceived display brightness as saturation rises. By integrating modifications to CIECAM16, our toolkit allows intuitive graphical exploration of color appearance attributes across a display's full gamut.

Performing colour rendition in digital images for different atmosphere styles is becoming important for effectively communicating visual information. While different camera brands often possess their own unique feature styles, precisely reproducing and evaluating colour style effects across diverse camera systems remains a significant challenge, involving issues such as fast mapping and effective evaluation between the source and target styles. To address this issue, a colour style transfer workflow has been developed. To begin with, a multi-device image database was built, comprising 1550 sRGB images, each including two colour charts for calibration; the proposed transfer method, based on a 3D LUT, precisely transfers colour styles, achieving a remarkably low ΔE00 of 1.09 in colour chart tests. Subjective evaluation with 10 volunteers showed a perceptibly small visual difference, indicating that the workflow achieved satisfactory performance. To overcome the limitations of subjective testing, a Siamese-network-based EfficientNet Visual Difference Evaluation Model (EVDM) was introduced; utilizing a lightweight EfficientNet, it achieved Pearson correlation coefficients of 0.90 (training), 0.88 (validation), and 0.92 (overall), significantly outperforming sophisticated baseline methods based on CIEDE2000 (max 0.76). This demonstrates EVDM's superior fitting, generalization, and consistency with human perception.
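Applying a 3D LUT for style transfer conventionally means looking up each source RGB in a regular lattice of output colours and interpolating trilinearly between the eight surrounding grid points. The sketch below shows that generic application step only (the paper's LUT construction and calibration pipeline are not reproduced), verified with an identity LUT that leaves colours unchanged.

```python
import numpy as np

def apply_3dlut(img, lut):
    """Apply a 3D LUT with trilinear interpolation.

    img : (..., 3) floats in [0, 1]; lut : (N, N, N, 3) grid of output RGBs.
    """
    n = lut.shape[0] - 1
    idx = img * n
    i0 = np.clip(np.floor(idx).astype(int), 0, n - 1)   # lower grid corner
    f = idx - i0                                        # fractional position
    out = np.zeros_like(img)
    # Accumulate the 8 corner contributions of trilinear interpolation.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = (np.where(dr, f[..., 0], 1 - f[..., 0]) *
                     np.where(dg, f[..., 1], 1 - f[..., 1]) *
                     np.where(db, f[..., 2], 1 - f[..., 2]))
                out += w[..., None] * lut[i0[..., 0] + dr,
                                          i0[..., 1] + dg,
                                          i0[..., 2] + db]
    return out

# Sanity check: an identity LUT reproduces the input image exactly.
N = 17
g = np.linspace(0, 1, N)
identity_lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
img = np.random.default_rng(2).random((4, 4, 3))
out = apply_3dlut(img, identity_lut)
```

A style-transfer LUT would simply replace `identity_lut` with a lattice fitted between the calibrated source and target colour charts.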

The ubiquitous use of mobile devices has underscored the importance of evaluating display visual comfort to reduce eye strain and fatigue. This study developed and tested a visual comfort model for displays, taking into account key factors such as ambient illuminance, display luminance, and text-background luminance contrast. The VC_ALL model was built on three psychophysical experimental datasets: Neutral Colour Combination (NC), Coloured Combination (CC), and Neutral Colour Combination with Dim Ambient Light (NCD), which together involved 103 participants spanning various age groups. Two new experiments were conducted to verify the model's performance. One used the model to assess displays' visual comfort on an LCD and a QD_mini-LED display, comparing the results with those from various other testing methods. The other experiment verified the performance of VC_reverse by computing optimal text-background luminance combinations for any display under varying ambient light.

We present a novel anisotropic diffusion algorithm for noise reduction in Magnetic Resonance Imaging (MRI). The method integrates two key concepts: (1) diffusion is explicitly constrained to avoid increases in local image gradients, thereby preserving edges and fine structural details; and (2) a sequence of filters with exponentially increasing radii is applied, each maintaining a fixed number of non-zero coefficients. These filters allow the algorithm to evaluate whether pixels distant from the target location can contribute to smoothing without degrading local gradients. As a result, the method aims to strike a balance between preserving local details and averaging global similarities. In contrast to traditional denoising techniques based on local filtering or total variation minimization, the proposed algorithm enables controlled non-local diffusion and naturally extends to three-dimensional voxel arrays, making it well-suited for volumetric MRI data. The framework also permits the integration of additional geometric constraints, such as curvature, further enhancing its ability to preserve anatomical structures and surfaces. The effectiveness of the proposed method is demonstrated on real MRI data from a macaque monkey. The experimental results indicate PSNR values comparable to those of our previous approach, while providing substantially better suppression of low-frequency noise, an absence of visible artifacts, and faithful preservation of critical image features.
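For reference, the classical anisotropic diffusion scheme of Perona and Malik, which the approach above extends, can be sketched in a few lines: an edge-stopping conductance suppresses diffusion across large gradients while smoothing flat regions. This baseline is purely local and does not enforce the gradient constraint or the non-local, growing-radius filters of the proposed method.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, step=0.2):
    """Classical Perona-Malik diffusion (baseline sketch, 2D, periodic edges).

    kappa : gradient scale of the edge-stopping function.
    step  : explicit time step (must stay <= 0.25 for stability).
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four cardinal neighbours.
        dN = np.roll(u, -1, 0) - u
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        # Conductance: near 1 in flat areas, near 0 across strong edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Toy demonstration: denoise a noisy step edge; the edge should survive.
rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                  # vertical step edge
noisy = clean + 0.05 * rng.normal(size=clean.shape)
den = perona_malik(noisy)
```

The proposed method replaces this purely local update with filters of exponentially increasing radii, accepting a contribution only when it does not increase the local gradient.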