This article addresses the question of how to objectively judge the general stylistic color rendering choices made by imaging devices capable of recording HDR formats. The goal of our work is to build a framework for analyzing color rendering behavior in targeted regions of any scene, supporting both HDR and SDR content. To this end, we discuss the modeling of camera behavior and visualization methods based on the ICtCp/ITP color spaces, together with examples of lab and real scenes showcasing common issues and ambiguities in HDR rendering.
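As a minimal sketch, assuming linear BT.2020 RGB input normalized so that 1.0 corresponds to 10000 cd/m², the BT.2100 ICtCp conversion and its ITP variant (in which the Ct axis is halved) can be computed as follows; the function names are illustrative and not taken from the article.

```python
import numpy as np

# BT.2100 constants for the PQ (SMPTE ST 2084) inverse EOTF.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

# BT.2100 matrices: linear BT.2020 RGB -> LMS, and PQ-encoded L'M'S' -> ICtCp.
RGB_TO_LMS = np.array([[1688, 2146,  262],
                       [ 683, 2951,  462],
                       [  99,  309, 3688]]) / 4096

LMS_TO_ICTCP = np.array([[ 2048,   2048,     0],
                         [ 6610, -13613,  7003],
                         [17933, -17390,  -543]]) / 4096

def pq_oetf(y):
    """PQ non-linearity; y is absolute luminance normalized to 10000 cd/m^2."""
    yp = np.power(np.clip(y, 0.0, 1.0), M1)
    return np.power((C1 + C2 * yp) / (1.0 + C3 * yp), M2)

def rgb2020_to_ictcp(rgb_linear):
    """Linear BT.2020 RGB (1.0 = 10000 cd/m^2) -> ICtCp per BT.2100 (PQ)."""
    lms = rgb_linear @ RGB_TO_LMS.T
    return pq_oetf(lms) @ LMS_TO_ICTCP.T

def ictcp_to_itp(ictcp):
    """ITP halves the Ct axis, so Euclidean distances relate to dE_ITP (BT.2124)."""
    itp = np.array(ictcp, dtype=float, copy=True)
    itp[..., 1] *= 0.5
    return itp
```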
3D-LUTs are widely used in cinematography to map one gamut into another or to impart different moods to the images via artistic color transformations. Most of the time, these transformations are computed off-line and their sparse representations are stored as 3D-LUTs in digital cameras or on-set devices. In this way, the director and the on-set crew can preview the final result of the color processing while shooting. Unfortunately, these kinds of devices have strong hardware constraints, so the 3D-LUTs must be as small as possible while still generating artefact-free images. While for SDR viewing devices this condition is guaranteed by a 33×33×33 lattice, the new HDR and WCG displays require much larger, infeasible 3D-LUTs to generate acceptable images. In this work, the uniform-lattice constraint of the 3D-LUT has been removed, so the positions of the vertices can be optimized by minimizing the color error introduced by the sparse representation. The proposed approach has been shown to be very effective in reducing the color error for a given 3D-LUT size, or the size for a given error.
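To illustrate the underlying mechanism, the sketch below applies a 3D-LUT by trilinear interpolation on a rectilinear lattice whose per-axis node positions need not be uniform; it is a simplified stand-in for the optimized-lattice idea described above, not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def apply_3dlut_rectilinear(rgb, nodes_r, nodes_g, nodes_b, lut):
    """Trilinear interpolation of a 3D-LUT on a (possibly non-uniform) rectilinear lattice.

    rgb     : (..., 3) input colors
    nodes_* : 1-D sorted arrays of node positions per axis
    lut     : (len(nodes_r), len(nodes_g), len(nodes_b), 3) output colors at the vertices
    """
    axes = (nodes_r, nodes_g, nodes_b)
    idx, frac = [], []
    for k, nodes in enumerate(axes):
        # Locate the lower cell index and the fractional position along this axis.
        x = np.clip(rgb[..., k], nodes[0], nodes[-1])
        i = np.clip(np.searchsorted(nodes, x, side="right") - 1, 0, len(nodes) - 2)
        idx.append(i)
        frac.append((x - nodes[i]) / (nodes[i + 1] - nodes[i]))
    ir, ig, ib = idx
    fr, fg, fb = (f[..., None] for f in frac)
    # Blend the eight lattice vertices surrounding each input color.
    out = np.zeros(rgb.shape[:-1] + (3,))
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((fr if dr else 1 - fr) *
                     (fg if dg else 1 - fg) *
                     (fb if db else 1 - fb))
                out += w * lut[ir + dr, ig + dg, ib + db]
    return out
```

Optimizing the node positions (and the vertex colors) to minimize an interpolation error such as a mean or maximum color difference over a training set is then what allows a small lattice to match a much denser uniform 3D-LUT.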