Material appearance is traditionally represented by the Bidirectional Reflectance Distribution Function (BRDF), which quantifies how incident light is scattered from a surface over the hemisphere. To speed up the BRDF measurement process for a given material, which can require millions of measurement directions, image-based setups are often used for their ability to parallelize the acquisition: each pixel of the camera provides one unique measurement configuration. With highly specular materials, High Dynamic Range (HDR) imaging techniques are used to capture the whole BRDF dynamic range, which can span more than 10 orders of magnitude. Unfortunately, HDR can introduce star-burst patterns around highlights, arising from diffraction by the camera aperture. Therefore, while tracking uncertainties throughout the measurement process, one has to be careful to include this underlying diffraction convolution kernel. A purpose-built algorithm removes most of the pixels polluted by diffraction, which increases the measurement quality for specular materials at the cost of discarding a substantial number of BRDF configurations (up to 90% for specular materials). Finally, our setup reaches a 1.5° median accuracy (over all possible geometrical configurations), with a repeatability ranging from 1.6% for the most diffuse materials to 5.5% for the most specular ones. Our new database, with its quantified uncertainties, will be helpful for comparing the quality and accuracy of different experimental setups and for designing new image-based BRDF measurement devices.
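The per-pixel HDR merging step underlying such an acquisition can be sketched as follows: each exposure gives a radiance estimate (pixel value divided by exposure time), and well-exposed samples are averaged while saturated or underexposed ones are rejected. This is a minimal illustration, not the authors' pipeline; the function name, thresholds, and weighting are assumptions.

```python
def hdr_radiance(pixels, exposure_times, saturation=0.98, floor=0.02):
    """Merge multi-exposure pixel values (normalized to [0, 1]) into a
    single relative radiance estimate, discarding saturated or
    underexposed samples.  Thresholds are illustrative choices."""
    num, den = 0.0, 0.0
    for p, t in zip(pixels, exposure_times):
        if floor < p < saturation:       # keep well-exposed samples only
            num += p / t                 # radiance estimate from this exposure
            den += 1.0
    if den == 0.0:
        raise ValueError("no usable exposure for this pixel")
    return num / den
```

A near-saturated sample (e.g. the star-burst core of a highlight) is simply excluded from the average rather than biasing the radiance estimate.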
Hyperspectral imaging is an emerging non-invasive method for the optical characterization of human skin, allowing detailed surface measurement over a large area. By providing the spectral reflectance in each pixel, it enables not only color simulation under various lighting conditions, but also the estimation of skin structure and composition. These parameters, which can be correlated with a person's health, are deduced from the spectral reflectance in each pixel through the inversion of optical models. Such techniques are already available for 2D images of flat skin areas, but extending them to 3D is crucial to address large-scale, complex shapes such as the human face. The requirements for accurate acquisition are a short acquisition time for in vivo applications and uniform lighting conditions to avoid shadowing. The proposed method combines wide-field hyperspectral imaging and 3D surface acquisition, using a single camera, with an acquisition time of less than 5 seconds. Complete color consistency can be achieved by computationally correcting irradiance non-uniformities using the 3D shape information.
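The irradiance correction from 3D shape mentioned above can be sketched with the standard cosine/inverse-square irradiance model: once the surface normal and distance to the source are known from the 3D reconstruction, the measured signal is rescaled to a reference irradiance. A simplified illustration under a point-source assumption; the function and parameter names are not from the paper.

```python
def corrected_signal(signal, normal, light_dir, distance, e_ref=1.0):
    """Correct a measured spectral signal for non-uniform irradiance
    using recovered 3D shape: local irradiance ~ cos(theta) / d^2.
    `normal` and `light_dir` are unit 3-vectors; names are illustrative."""
    cos_t = sum(n * l for n, l in zip(normal, light_dir))
    if cos_t <= 0.0:
        raise ValueError("surface point is not lit")
    e_local = cos_t / distance ** 2      # relative local irradiance
    return signal * e_ref / e_local      # rescale to the reference irradiance
```

A tilted or more distant patch receives less light, so its signal is scaled up accordingly, yielding the color consistency noted in the abstract.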
The rendering of the same printed image can change drastically given the large number of different print supports (paper, metallic panel, textile, plastic, etc.) and different types of ink (dye-based, pigment-based, etc.). Predicting the visual rendering of inks printed on any support by characterizing separately the spectral properties of the inks and those of the print support has long been an objective of the printing community. In this paper, we propose a multiscale solution to this issue that combines optical models and measurements. On the one hand, we predict the reflectance and transmittance of the ink layers alone (without support) using a radiative transfer four-flux model based on the microscopic characteristics of the inks. On the other hand, the reflectance and transmittance of the print support are obtained directly through macroscopic measurements. Finally, through the four-flux matrix model, we compute the joint reflectance and transmittance of the stack of inks superposed on the support. Initial results show that the proposed approach is suitable for predicting image rendering on different combinations of ink and print support.
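The superposition step can be illustrated with its two-flux analogue: stacking two layers combines their reflectances and transmittances through the geometric series of inter-layer reflections. This is a simplified sketch assuming symmetric layers, not the four-flux matrix model of the paper.

```python
def stack(layer1, layer2):
    """Compose (reflectance, transmittance) of two symmetric layers,
    summing the infinite series of inter-layer bounces
    (two-flux analogue of the four-flux matrix composition)."""
    r1, t1 = layer1
    r2, t2 = layer2
    denom = 1.0 - r1 * r2                # geometric series of bounces
    r = r1 + t1 * t1 * r2 / denom        # light returned after crossing layer 1
    t = t1 * t2 / denom                  # light transmitted through both layers
    return r, t
```

Composing an ink layer with a measured support reflectance then amounts to one such composition, which is the multiscale split the abstract describes.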
The different areas of a concave object illuminate each other through a multiple light reflection process, called interreflections, which depends on the geometry of the object and the lighting. To accurately predict the radiance perceived from each point of the object by an observer or a camera, an interreflection model is necessary, taking into account the optical properties and shape of the object, the orientation(s) of the incident light, which can produce shadows, and the infinite number of light bounces between the different points of the object. The present paper focuses on the irradiance of two adjacent planar panels (a V-cavity) illuminated by collimated light from any direction of the hemisphere, or by diffuse light. Depending on the reflectance of the material and the angle of the cavity, the loss of irradiance near the fold due to the shadowing effect is partly compensated by the gain in radiance due to the interreflections.
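The infinite bounce series can be illustrated with a minimal two-patch radiosity iteration: each Lambertian panel receives its direct irradiance plus a fraction of the other panel's radiosity, and iterating converges to the infinite-bounce solution. A toy model with a single mutual form factor per panel, not the paper's angular-resolved model; all names are illustrative.

```python
def cavity_radiosity(e1, e2, rho, f, n_bounces=200):
    """Radiosities of two facing Lambertian panels of a V-cavity.
    e1, e2: direct irradiances; rho: reflectance; f: mutual form factor.
    Each pass adds one more interreflection bounce (Jacobi iteration)."""
    b1 = rho * e1                        # single-bounce start
    b2 = rho * e2
    for _ in range(n_bounces):
        b1, b2 = rho * (e1 + f * b2), rho * (e2 + f * b1)
    return b1, b2
```

With equal direct irradiance, reflectance 0.5 and form factor 0.5, each panel's radiosity rises from 0.5 to 2/3: the interreflection gain the abstract refers to.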
Interactive RGB Transparency is an open-source tool dedicated to the visualization of transparency effects in digital color images. The transparency effect can be rendered as a binary color mixing between foreground and background layers, which can be uniformly colored layers or downloaded images, while the transparency rate can be modified interactively from perfectly transparent to perfectly opaque. The tool also allows the inverse operation, removing transparency effects from an image. While most common software renders transparency effects by additive color mixing (or occasionally subtractive color mixing), Interactive RGB Transparency proposes three different approaches: i) defining new color mixing laws varying continuously from additive to subtractive color mixing, ii) defining color mixing as a generalized average of the colors of the layers, iii) considering a translucent scattering layer whose thickness can be modified.
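A mixing law varying continuously between additive and subtractive behavior can be sketched with a weighted power mean per channel: the exponent p = 1 gives the usual additive blend, while the limit p → 0 gives a weighted geometric mean, which behaves like subtractive mixing. An illustrative law under these assumptions, not the tool's exact formulas.

```python
def mix(a, b, t, p=1.0):
    """Blend normalized channel values a, b in [0, 1] with transparency
    rate t, via a power mean interpolating between subtractive-like
    (p -> 0, geometric mean) and additive (p = 1) mixing."""
    if abs(p) < 1e-9:                    # limit case: weighted geometric mean
        return (a ** (1.0 - t)) * (b ** t)
    return ((1.0 - t) * a ** p + t * b ** p) ** (1.0 / p)
```

Applying `mix` per RGB channel with a fixed t while sliding p reproduces the continuous transition between the two classical mixing behaviors described in approach i).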
The present paper proposes a generalized method to estimate the bispectral Donaldson matrices of fluorescent objects. We suppose that the matte surface of a fluorescent object is illuminated by each of several light sources with different spectral power distributions, and is observed by a spectral imaging system in the visible wavelength range. The Donaldson matrix is decomposed into three spectral functions: reflection, fluorescent excitation, and fluorescent emission. We segment the visible range into two sub-ranges containing (1) only reflection without luminescence and (2) both reflection and fluorescent emission. An iterative algorithm is presented to effectively estimate the three spectral functions by residual minimization. The wavelength range of fluorescent emission is also estimated. The proposed method is reliable in the sense that the estimates are determined to minimize the average residual error with respect to the observations. The feasibility of the method is shown in experiments using two fluorescent samples and four illuminants.
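The forward model behind such an estimation can be sketched as follows: the Donaldson matrix is assembled from the three spectral functions as a diagonal reflection term plus the outer product of emission and excitation restricted to longer emission wavelengths (Stokes shift), and the observation under an illuminant is the matrix-vector product. A sketch of the model structure, not the authors' iterative estimator; sampling and names are illustrative.

```python
def donaldson_matrix(reflectance, excitation, emission):
    """Assemble D[j][i] (row j: emission wavelength, column i: excitation
    wavelength) from the three spectral functions of the decomposition:
    diagonal reflection plus emission (x) excitation for j > i only."""
    n = len(reflectance)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = reflectance[i]                 # pure reflection term
        for j in range(i + 1, n):                # Stokes shift: emit at longer wavelengths
            d[j][i] = emission[j] * excitation[i]
    return d

def observed_spectrum(d, illuminant):
    """Radiance factor under an illuminant E: c = D @ E."""
    return [sum(d[j][i] * illuminant[i] for i in range(len(illuminant)))
            for j in range(len(d))]
```

Fitting the three functions then amounts to minimizing the residual between `observed_spectrum` and the camera data over all illuminants, as the abstract states.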
In this work we present a psychometric, visual-search-based study analyzing the perceptual appearance uniformity of 3D printed materials. A 3D printer's quality, precision, and capacity to produce smooth surfaces directly affect the perceived uniformity of its outputs. This work represents the first steps towards building a perceptual model of uniformity for 3D printing. Such a model will greatly assist in advancing the quality of 3D printers, especially as they become capable of creating complex, spatially-varying appearances. We demonstrate the effectiveness of applying visual search to appearance perception problems by analyzing 288 appearance variations formed from the combination of 18 printed surfaces, 8 virtual transformations of those surfaces, and 2 illumination conditions. The virtual transformations allowed us to explore the impact of bumpiness, glossiness, and spatially-varying color on perceived uniformity. Several of these dimensions were found to have significant effects. Additionally, the measured psychophysical data is a valuable contribution to the general study of the perception of spatially-varying appearances.
Potential methods for predicting the visual impression of the opacity of translucent white ink were evaluated. An experiment was conducted to collect visual data for white ink coatings on three different substrates: white paper, clear film, and kraft paper. The contrast ratio metric used for paint and varnish coatings was tested, together with other proposed methods, as a correlate of the visual data. The contrast ratio, based on the difference in reflectance between ink printed on the substrate and ink printed over a black ground, performed most consistently across the three substrates, while other correlates, such as relative lightness or colour difference, performed better for ink on certain substrates. The results also indicate that a method that does not use a black ground under the coating may provide better results when the substrate differs significantly in appearance from the ink.
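The contrast ratio metric, in its standard coatings form, is the ratio of the film's luminous reflectance over a black ground to its reflectance over a white (or substrate) ground, approaching 1 for a fully opaque coating. A minimal sketch of that standard definition; the study's exact variant compares ink on the substrate with ink over a black ground.

```python
def contrast_ratio(y_over_black, y_over_white):
    """Opacity correlate for coatings: luminous reflectance of the film
    over a black ground divided by its reflectance over a white (or
    substrate) ground.  Returns 1.0 for a fully hiding coating."""
    if y_over_white <= 0.0:
        raise ValueError("reflectance over the light ground must be positive")
    return y_over_black / y_over_white
```

A translucent white ink might give, e.g., `contrast_ratio(0.4, 0.8)`, i.e. an opacity correlate of 0.5, whereas identical readings over both grounds indicate full hiding.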