This paper is the continuation of previous work that aimed to develop a color rendering model based on the ICtCp color space for evaluating SDR- and HDR-encoded content. However, the model was previously tested only on an SDR image dataset. The focus of this paper is to analyze a new HDR dataset of laboratory scene images using our model together with additional color rendering visualization tools. The new HDR dataset, captured with different devices and formats in controlled laboratory setups, allows HDR performance to be estimated across several key aspects, including color accuracy, contrast, and displayed brightness level, under a variety of lighting scenarios. The study provides valuable insights into the color reproduction capabilities of modern imaging devices, highlighting the advantages of HDR imaging over SDR and the impact of different HDR formats on visual quality.
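For concreteness, the sketch below shows the standard ITU-R BT.2100 PQ path from linear BT.2020 RGB to ICtCp, the space on which the evaluation model is built. The matrices and PQ constants are those published in BT.2100; the function names (`pq_encode`, `rgb2020_to_ictcp`) are illustrative and do not reproduce the paper's actual pipeline.

```python
import numpy as np

# BT.2100 matrices: linear BT.2020 RGB -> LMS, and PQ-encoded L'M'S' -> ICtCp.
RGB_TO_LMS = np.array([[1688, 2146,  262],
                       [ 683, 2951,  462],
                       [  99,  309, 3688]]) / 4096.0
LMS_TO_ICTCP = np.array([[ 2048,   2048,     0],
                         [ 6610, -13613,  7003],
                         [17933, -17390,  -543]]) / 4096.0

# PQ (SMPTE ST 2084) curve constants, as specified in BT.2100.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(y):
    """Inverse PQ EOTF: absolute luminance in 0..10000 cd/m^2 -> PQ signal."""
    yn = np.clip(y / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * yn) / (1 + C3 * yn)) ** M2

def rgb2020_to_ictcp(rgb):
    """rgb: (..., 3) linear BT.2020 values in cd/m^2 -> (..., 3) ICtCp."""
    lms = rgb @ RGB_TO_LMS.T        # cone-like responses
    lms_pq = pq_encode(lms)         # perceptual quantization of each channel
    return lms_pq @ LMS_TO_ICTCP.T  # intensity (I) plus two chroma axes (Ct, Cp)
```

Because the PQ curve covers the full 0 to 10000 cd/m^2 range, the same conversion applies unchanged to SDR and HDR content, which is what makes ICtCp convenient for comparing the two.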
3D-LUTs are widely used in cinematography to map one gamut into another or to impart different moods to the images via artistic color transformations. Most of the time, these transformations are computed offline and their sparse representations stored as 3D-LUTs in digital cameras or on-set devices. In this way, the director and the on-set crew can preview the final result of the color processing while shooting. Unfortunately, these kinds of devices have strong hardware constraints, so the 3D-LUTs must be as small as possible while still generating artefact-free images. While for SDR viewing devices this condition is guaranteed by a 33×33×33 lattice, the new HDR and WCG displays would require much larger, infeasible 3D-LUTs to generate acceptable images. In this work, the uniform-lattice constraint of the 3D-LUT is removed, so the positions of the vertices can be optimized to minimize the color error introduced by the sparse representation. The proposed approach has proven very effective in reducing the color error for a given 3D-LUT size, or in reducing the size for a given error.
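To illustrate what relaxing the uniform-lattice constraint entails at the interpolation stage, the sketch below performs trilinear interpolation through a 3D-LUT whose per-axis node positions are arbitrary sorted breakpoints rather than an implicit uniform grid. This is a minimal sketch of a standard technique under a rectilinear simplification (each axis keeps its own 1-D breakpoints), not the paper's optimization procedure or its fully free vertex placement; the function name `apply_3dlut` is hypothetical.

```python
import numpy as np

def apply_3dlut(rgb, nodes_r, nodes_g, nodes_b, lut):
    """rgb: (..., 3) input colors; nodes_*: sorted 1-D breakpoints per axis;
    lut: (Nr, Ng, Nb, 3) output colors stored at the lattice vertices."""
    axes = (nodes_r, nodes_g, nodes_b)
    idx, frac = [], []
    # Locate the enclosing cell on each axis and the fractional position in it.
    for a, nodes in enumerate(axes):
        i = np.clip(np.searchsorted(nodes, rgb[..., a]) - 1, 0, len(nodes) - 2)
        t = (rgb[..., a] - nodes[i]) / (nodes[i + 1] - nodes[i])
        idx.append(i)
        frac.append(np.clip(t, 0.0, 1.0))
    ir, ig, ib = idx
    tr, tg, tb = frac
    # Accumulate the eight vertex contributions with standard trilinear weights.
    out = np.zeros(rgb.shape[:-1] + (3,))
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((tr if dr else 1 - tr)
                     * (tg if dg else 1 - tg)
                     * (tb if db else 1 - tb))
                out += w[..., None] * lut[ir + dr, ig + dg, ib + db]
    return out
```

With `np.linspace(0, 1, 33)` as the breakpoints on each axis this reduces to the conventional uniform 33×33×33 case; the gain described above comes from placing the vertices where the target transformation varies most, so the same error budget is met with far fewer nodes.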