Appearance is a complex psychovisual phenomenon impacted by various objective and subjective factors that are not yet fully understood. In this work we use real objects and unconstrained conditions to study appearance perception in human subjects, allowing free interaction between objects and observers. Human observers were asked to describe resin objects from an artwork collection and to complete two visual tasks: appearance-based clustering and appearance-based ordering. The process was filmed, with the consent of the observers, for subsequent analysis. While the clustering task helps us identify the attributes people use to assess appearance similarity and difference, the ordering task is used to identify potential cues for an appearance ordering system. Finally, we generate research hypotheses about how people perceive appearance and outline future studies to validate them. Preliminary observations revealed interesting cross-individual consistency in appearance assessment, while an observer's personal background may account for deviations from the general appearance assessment trends. On the other hand, no appearance ordering system stood out from the rest, which might be explained by the sparse sampling of our dataset.
Translucency is a visual property attributed to objects that light may cross without transmitting a clear image of the scene behind them. In the absence of a more precise definition, this perceptual attribute is often considered an intermediate between transparency, the property of objects that light may cross while transmitting a clear image of the scene behind, and opacity, the property of blocking the transmission of light and therefore completely masking the scene behind. While it is rather clear that translucency is closely related to light scattering, it is difficult to classify translucent appearance on a single scale, due to the different types of scattering that can occur, as well as the role of the absorbance and thickness of the material. Through synthetic images rendered by optical models, we show that surface scattering and volume (or subsurface) scattering, possibly mixed with selective absorption, produce different types of translucency effects and different intermediates between transparency and opacity. We thus propose to represent translucency along three axes related to these three optical phenomena: surface scattering, volume scattering, and absorption.
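The three-axis representation proposed above could be encoded, for instance, as a small data structure. This is only an illustrative sketch: the normalisation of each axis to [0, 1] and the example coordinates are assumptions, not values from the paper.

```python
from dataclasses import dataclass

# Illustrative translucency descriptor with one axis per optical
# phenomenon discussed above. The [0, 1] normalisation is an assumption.

@dataclass(frozen=True)
class TranslucencyPoint:
    surface_scattering: float  # 0 = smooth interface, 1 = strongly diffusing surface
    volume_scattering: float   # 0 = clear medium, 1 = strongly scattering bulk
    absorption: float          # 0 = non-absorbing, 1 = fully absorbing

    def __post_init__(self):
        for v in (self.surface_scattering, self.volume_scattering, self.absorption):
            if not 0.0 <= v <= 1.0:
                raise ValueError("axes are normalised to [0, 1] in this sketch")

# Limiting cases of the classical transparency-opacity scale:
transparent = TranslucencyPoint(0.0, 0.0, 0.0)
opaque = TranslucencyPoint(0.0, 0.0, 1.0)   # opacity reached via absorption
frosted = TranslucencyPoint(0.8, 0.1, 0.0)  # surface-scattering translucency
```

Distinct points such as `opaque` and `frosted` illustrate why a single transparency-opacity scale cannot separate these materials, while three axes can.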
In this paper, we construct a model for cross-modal perception of glossiness by investigating the interaction between sound and graphics. First, we conduct evaluation experiments on cross-modal glossiness perception using sound and graphics stimuli. The experiments use three types of stimuli: visual (22 stimuli), auditory (15 stimuli), and audiovisual (330 stimuli), presented in three sections: a visual experiment, an audiovisual experiment, and an auditory experiment. Glossiness is evaluated with the magnitude estimation method. Second, we analyze the influence of sound on glossiness perception from the experimental results. The results suggest that cross-modal perception of glossiness can be represented as a combination of visual-only and auditory-only perception. Then, based on the results, we construct a model as a linear sum of computer graphics and sound parameters. Finally, we confirm the feasibility of the cross-modal glossiness perception model through a validation experiment.
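The combination of visual-only and auditory-only perception described above can be sketched as a weighted linear sum. The weight values and example scores below are illustrative placeholders, not the fitted parameters of the paper's model.

```python
# Hypothetical sketch of a cross-modal glossiness model: predicted
# audiovisual glossiness as a linear sum of visual-only and
# auditory-only ratings. Weights are illustrative assumptions.

def crossmodal_gloss(visual_score, audio_score,
                     w_visual=0.7, w_audio=0.3, bias=0.0):
    """Predicted audiovisual glossiness as a weighted linear sum."""
    return w_visual * visual_score + w_audio * audio_score + bias

# Example: a glossy-looking stimulus paired with a dull-sounding one.
predicted = crossmodal_gloss(8.0, 3.0)
```

In practice the weights would be fitted by regression against the audiovisual ratings collected in the experiments.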
The sparkle impression is an important factor of appearance quality. The impression is generated by reflection from a material surface that contains metallic or pearl pigments. Although several methods of evaluating the impression have been proposed, the correlation between their results and subjective evaluation is insufficient because the impression depends on the observation distance. The present study developed a method of evaluating the sparkle impression that takes the observation distance into account. To this end, a subjective evaluation experiment was performed for different observation distances, and a measurement system comprising a spectral camera and lighting device was constructed. An evaluation model was proposed on the basis of the spatial frequency characteristics of the recorded image and human visual characteristics. The contribution ratio between the subjective evaluation scores and the model's evaluation values was high.
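One plausible way to combine spatial-frequency characteristics with human visual characteristics, as described above, is to weight the image spectrum by a contrast sensitivity function (CSF) whose frequency axis depends on viewing distance. This sketch is not the paper's model: the Mannos-Sakrison-style CSF, all constants, and the distance mapping are assumptions.

```python
import numpy as np

def csf(f_cpd):
    """Approximate achromatic CSF (Mannos-Sakrison form), f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def sparkle_score(image, pixels_per_mm, distance_mm):
    """Energy of the CSF-weighted spectrum of a grayscale image, excluding DC.

    The observation distance shifts image frequencies along the CSF:
    the same pattern viewed from farther away lands at higher
    cycles/degree, where sensitivity is lower.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))  # cycles/pixel
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    f_pix = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    # cycles/pixel -> cycles/mm -> cycles/degree at the given distance
    f_mm = f_pix * pixels_per_mm
    mm_per_degree = 2.0 * distance_mm * np.tan(np.deg2rad(0.5))
    f_cpd = f_mm * mm_per_degree
    weighted = spec * csf(f_cpd)
    weighted[h // 2, w // 2] = 0.0  # drop the DC term (mean luminance)
    return weighted.sum() / (h * w)
```

A uniform patch scores near zero, while a patch with high-frequency glints scores higher, and the same glints score differently at different distances.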
The accurate measurement of reflectance and transmittance properties of materials is essential in the printing and display industries in order to ensure precise color reproduction. In comparison with reflectance measurement, where the impact of different geometries (0°/45°, d/8°) has been thoroughly investigated, there are few published articles related to transmittance measurement. In this work, we explore different measurement geometries for total transmittance, and show that the transmittance measurements are highly affected by the geometry used, since certain geometries can introduce a measurement bias. We present a flexible custom setup that can simulate these geometries, which we evaluate both qualitatively and quantitatively over a set of samples with varied optical properties. We also compare our measurements against those of widely used commercial solutions, and show that significant differences exist over our test set. However, when the bias is correctly compensated, very low differences are observed. These findings therefore stress the importance of including the measurement geometry when reporting total transmittance.
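The bias compensation mentioned above can be illustrated as a simple linear correction fitted on reference samples measured with both the custom setup and a reference geometry. The synthetic gain/offset bias and sample values below are assumptions of this sketch, not measured data.

```python
import numpy as np

# Illustrative sketch: estimate a gain and offset induced by one
# measurement geometry from shared reference samples, then correct a
# new measurement. The bias values are assumed for the example.

rng = np.random.default_rng(2)
reference = rng.uniform(0.1, 0.9, size=8)   # "true" total transmittance
gain, offset = 0.92, 0.03                    # assumed geometry-induced bias
ours = gain * reference + offset             # what our geometry reports

# Fit the linear bias on the shared reference set ...
fit_gain, fit_offset = np.polyfit(ours, reference, deg=1)

# ... and compensate a new measurement taken with the same geometry.
new_measurement = gain * 0.5 + offset
compensated = fit_gain * new_measurement + fit_offset
```

With the bias modelled this way, the compensated value recovers the reference-geometry reading, mirroring the observation that differences become very low once the bias is compensated.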
With the advent of more sophisticated color and surface treatments in 3D printing, a more robust system for previewing these features is required. This work reviews current systems and proposes a framework for integrating a more accurate preview into 3D modelling systems.
3D printing is increasingly used for manufacturing final parts. The look and feel of final parts can be important, especially if they are intended for visible or wearable applications. We find that after parts are printed and cleaned, their aesthetic qualities can benefit from a variety of finishing processes. In this paper, we describe our post-processing work, largely for parts printed using HP’s Multi-Jet Fusion (MJF), and with an emphasis on techniques that scale sufficiently to be useful in manufacturing workflows. We detail our efforts using vibratory tumblers for smoothing, and coatings and dyes for visual appearance, as well as how we have used some of these techniques and methods for specific wearables.
Material appearance design usually requires an unintuitive selection of parameters for an analytical BRDF (bidirectional reflectance distribution function) formula, or time-consuming acquisition of a BRDF table from a physical material sample. We propose a material design system that takes visual input in the form of images of an object with known geometry and lighting, and produces a plausible BRDF table within a class of BRDFs. We use the principal components of a large dataset of BRDFs to reconstruct a full BRDF table from a sparse input of BRDF values at pairs of incoming and outgoing light directions. To obtain visual user input, we allow the user to provide their own object and then generate guide images with selected lighting directions. Once the user shades the guide images, we construct the reflectance table and allow the user to iteratively refine the material appearance. We present preliminary results for image-based design, and discuss the open issues remaining to make the approach practical for mainstream use.
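The sparse-to-full reconstruction step described above can be sketched as a linear least-squares fit of principal-component coefficients to the sparsely observed BRDF values. The toy dimensions and synthetic data are assumptions; a real system would use a basis learned from a measured BRDF database.

```python
import numpy as np

# Sketch: reconstruct a full BRDF table from sparse samples using the
# principal components of a BRDF dataset. Sizes and data are synthetic.

rng = np.random.default_rng(0)
n_dirs, n_components = 200, 5          # full table size, PCA basis size
mean = rng.random(n_dirs)              # stand-in for the dataset mean BRDF
basis = np.linalg.qr(rng.standard_normal((n_dirs, n_components)))[0]  # orthonormal PCs

true_coeffs = rng.standard_normal(n_components)
full_brdf = mean + basis @ true_coeffs  # ground-truth "material" in the PCA span

# Sparse observation: BRDF values at a few (incoming, outgoing) pairs,
# as a user might provide by shading guide images.
observed = rng.choice(n_dirs, size=20, replace=False)
coeffs, *_ = np.linalg.lstsq(basis[observed],
                             full_brdf[observed] - mean[observed],
                             rcond=None)
reconstructed = mean + basis @ coeffs
```

Because the target lies in the span of the basis and the observations outnumber the components, the least-squares fit recovers the full table; real user shading is noisy, so the PCA prior then acts as regularisation toward plausible materials.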
Establishing an accurate hair diagnosis at the roots is a significant challenge with strong impact on hair coloration, beauty personalization, and clinical evaluation. The roots of the hair, viz. the first centimeter away from the scalp, represent clean hair fibers that have not been subjected to color change due to hair dyeing or environmental conditions. They are therefore a measure of a person's baseline hair characteristics, including natural hair tone. A device that acquires high-resolution macro images of hair roots under a well-defined illumination geometry has been designed in order to assess natural hair tones. Image analysis in this scenario is not a trivial task, since the acquired images present an overlap of scalp and hair, with other possible artifacts due to dandruff and hair transparency. In this paper, we propose to train a Convolutional Neural Network (CNN) on a dataset composed of images from subjects who had their hair tone evaluated by trained color experts. Our method is compared with other popular CNNs as well as conventional color image processing approaches developed for this task. We found that the proposed model not only offers higher precision but also provides faster computation times, due to its lighter architecture compared with popular CNNs. We thus achieve sufficiently accurate results in real time on the low-end chip embedded in our device.
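A toy forward pass can illustrate the kind of lightweight CNN classifier described above: a single convolution, ReLU, global average pooling, and a linear layer over hair-tone classes. The layer shapes, random weights, and a ten-level tone scale are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Toy lightweight CNN forward pass for hair-tone classification.
# All sizes and weights are illustrative assumptions.

def conv2d_valid(img, kernels):
    """'Valid' 2-D correlation of an (H, W) image with (K, 3, 3) kernels."""
    h, w = img.shape
    k = kernels.shape[0]
    out = np.empty((k, h - 2, w - 2))
    for c in range(k):
        for i in range(h - 2):
            for j in range(w - 2):
                out[c, i, j] = np.sum(img[i:i + 3, j:j + 3] * kernels[c])
    return out

def predict_tone(img, kernels, fc_weights):
    feat = np.maximum(conv2d_valid(img, kernels), 0.0)  # ReLU
    pooled = feat.mean(axis=(1, 2))                     # global average pooling
    logits = fc_weights @ pooled                        # (n_classes, K) x (K,)
    return int(np.argmax(logits))

rng = np.random.default_rng(1)
img = rng.random((16, 16))                  # stand-in grayscale root image
kernels = rng.standard_normal((4, 3, 3))    # 4 small filters
fc = rng.standard_normal((10, 4))           # e.g. 10 hair-tone levels
tone = predict_tone(img, kernels, fc)
```

Keeping the filter count and fully connected layer this small is what makes such a network cheap enough for real-time inference on a low-end embedded chip; a trained model would of course learn the weights from the expert-labeled dataset.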