In this paper, we propose a method to estimate the ink layer layout used as input to a 3D printer. The method makes it possible to reproduce a 3D-printed patch with a desired translucency, represented in this study as a Line Spread Function (LSF). A deep neural network with an encoder-decoder architecture is used for the estimation. Previous research has reported that machine learning is effective at formulating the complex relationship between an optical property such as the LSF and the ink layer layout in a 3D printer. However, it can be difficult to collect enough data to train a neural network sufficiently: although 3D printers are becoming more and more widespread, the printing process is still time-consuming. Therefore, in this research, we prepare the training data, i.e., the correspondence between the LSF and the ink layer layout, by simulating it on a computer. MCML, a method for simulating subsurface scattering of light in multi-layered media, was used to perform the simulation. The deep neural network was trained with the simulated data and evaluated using a CG skin object. The results show that the proposed method can estimate an appropriate ink layer layout that reproduces an appearance close to the target color and translucency.
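As context for how such training data can be generated, the core of an MCML-style simulation is a Monte Carlo random walk of photon packets through a scattering medium. The single-layer Python sketch below illustrates the classic hop/drop/spin loop with a Henyey-Greenstein phase function; the real MCML handles multiple layers, refractive-index mismatches, and radial binning, and all parameter values here are illustrative rather than taken from the paper.

```python
import math
import random

def simulate_layer(mu_a, mu_s, g, thickness, n_photons=2000, rng=None):
    """Monte Carlo photon random walk in one homogeneous slab, in the
    spirit of MCML's hop/drop/spin loop (single-layer sketch only).
    mu_a, mu_s: absorption/scattering coefficients; g: anisotropy.
    Returns (reflected, absorbed, transmitted) weight fractions."""
    rng = rng or random.Random(0)
    mu_t = mu_a + mu_s                      # total interaction coefficient
    reflected = absorbed = transmitted = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0            # depth, z-direction cosine, weight
        while w > 1e-4:
            # hop: free path sampled from an exponential distribution
            s = -math.log(rng.random()) / mu_t
            z += uz * s
            if z <= 0.0:
                reflected += w
                break
            if z >= thickness:
                transmitted += w
                break
            # drop: deposit the absorbed fraction of the packet weight
            absorbed += w * mu_a / mu_t
            w *= mu_s / mu_t
            # spin: new scattering angle via Henyey-Greenstein sampling
            if g == 0:
                cos_t = 2 * rng.random() - 1
            else:
                tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
                cos_t = (1 + g * g - tmp * tmp) / (2 * g)
            sin_t = math.sqrt(max(0.0, 1 - cos_t * cos_t))
            # project the new direction onto the z axis (random azimuth)
            uz = (uz * cos_t
                  + math.sqrt(max(0.0, 1 - uz * uz)) * sin_t
                  * math.cos(2 * math.pi * rng.random()))
    n = float(n_photons)
    return reflected / n, absorbed / n, transmitted / n
```

Running many such simulations over different layer stacks yields (layout, optical response) pairs of the kind used here as training data, far faster than physically printing each patch.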
The still imaging portion of FADGI [1] continues to be a living document that has evolved from the theoretical digital imaging principles of a decade ago into adaptations for the realities of day-to-day cultural heritage workflows. While the initial document was somewhat disjointed, the 2016 version is a major improvement and has proven very useful in gauging digital imaging goodness. [2] With coaching, encouragement, and focused attention to detail, many users, even the unschooled, have achieved 3-star compliance, sometimes with high-speed sheet-fed document scanners; 4-star levels are not far behind. This is a testimony to the improved digital image literacy for the cultural heritage sector that the authors articulated at the beginning of the last decade. This objective, science-based literacy has certainly evolved and continues to do so. It is fair to say that no other imaging sector has objective imaging guidelines as comprehensive as FADGI's, especially in the context of high-volume imaging workflows. While initial efforts focused on single-instance device benchmarking, future work will concentrate on performance consistency over the long term; image digitization for cultural heritage will take on a decidedly industrial tone. With practice, we continue to learn and refine the practical application of the FADGI guidelines in the preservation of meaningful information. Like rocks in a farm field, new issues and errors with current practices surface every year that were previously hidden from view. Some are incidental; others need short-term resolution. The goal of this paper is to highlight these issues and to propose easier, less costly, and less frustrating ways to improve imaging goodness through the FADGI guidelines.
Today, most advanced mobile phone cameras integrate multi-image technologies such as high dynamic range (HDR) imaging. The objective of HDR imaging is to overcome some of the limitations imposed by sensor physics, which constrain the performance of the small camera sensors used in mobile phones compared to the larger sensors used in digital single-lens reflex (DSLR) cameras. In this context, it becomes increasingly important to establish new image quality measurement protocols and test scenes that can differentiate the image quality performance of these devices. In this work, we describe image quality measurements for HDR scenes covering local contrast preservation, texture preservation, color consistency, and noise stability. By monitoring these four attributes in both the bright and dark parts of the image, over different dynamic ranges, we benchmarked four leading smartphone cameras using different technologies and contrasted the results with subjective evaluations.
Due to fast-evolving technologies and the increasing importance of social media, the camera is one of the most important components of today's mobile phones, and smartphones are taking over a large share of the compact camera market. A simple reason for this may be revealed by the famous quote: "The best camera is the one that's with you." But with the vast choice of devices and the great promises of manufacturers, there is a demand to characterize image quality and performance in very simple terms in order to provide information that helps in choosing the best-suited device. Existing evaluation systems are either not entirely objective or are still under development and have not yet reached a useful level. Therefore, the industry has come together and created a new objective quality evaluation system named Valued Camera eXperience (VCX). It is designed to reflect the user experience regarding the image quality and the performance of a camera in a mobile device. Members of the initiative so far are: Apple, Huawei, Image Engineering, LG, Mediatec, Nomicam, Oppo, TCL, Vivo, and Vodafone.
This paper presents a new metric for evaluating the perceptual smoothness of color transformations. The metric estimates three-dimensional smoothness to cover the full gamut of the transform, and it detects artifacts such as jumps in gradients introduced by the transformation itself. Three works from the state of the art have been compared to evaluate their pros and cons. Based on these previous proposals, a new metric has been developed and tested in several applications. The metric is based on a perceptual distance, CIEDE2000. It depends on the number of ramps and the number of colors per ramp, but these two parameters can be reduced to a single one called granularity. The proposed metric has been applied to the AdobeRGB and sRGB color spaces, with and without the addition of artificial artifacts, and tested for a large variety of granularity values. Several basic statistics have been considered, and the root mean square appears to be a good candidate for representing global smoothness. The metric demonstrated robustness for evaluating the global smoothness of a transform and also for detecting small jumps.
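To make the ramp-based idea concrete, the following Python sketch shows one plausible shape for such a metric: sample each ramp, compute successive color differences, score a ramp by how unevenly those differences are distributed, and aggregate ramps with a root mean square. For brevity it substitutes the Euclidean CIE76 delta E*ab for CIEDE2000, and the function names and scoring details are ours, not the paper's.

```python
import math

def delta_e76(lab1, lab2):
    """Euclidean distance in CIELAB (CIE76), a short stand-in for CIEDE2000."""
    return math.dist(lab1, lab2)

def ramp_smoothness(ramp):
    """Non-smoothness of one sampled ramp of Lab colors: RMS deviation of
    successive color differences from their mean (0 = perfectly even ramp,
    larger = a jump or kink somewhere along the gradient)."""
    deltas = [delta_e76(a, b) for a, b in zip(ramp, ramp[1:])]
    mean = sum(deltas) / len(deltas)
    return math.sqrt(sum((d - mean) ** 2 for d in deltas) / len(deltas))

def global_smoothness(ramps):
    """Aggregate many ramps through the gamut with a root mean square,
    the statistic the paper suggests for the global smoothness score."""
    scores = [ramp_smoothness(r) for r in ramps]
    return math.sqrt(sum(s * s for s in scores) / len(scores))
```

A perfectly even ramp scores 0, while a hidden jump inflates the score; the number of ramps and samples per ramp correspond to the granularity parameter discussed above.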
Right now there are at least three publicly known ranking systems for cell phone cameras (CPIQ [IEEE P1858, in preparation], DxOmark, VCX) that try to tell us which camera phone provides the best image quality. Now that IEEE is about to publish the P1858 standard with, at present, only six image quality parameters, the question arises: how many parameters are needed to characterize the camera in a current cell phone, and how important is each factor for the perceived quality? To test the importance of each factor, the IEEE cellphone image quality group (CPIQ) has created psychophysical studies for all six image quality factors described in the first version of IEEE P1858. In this way, a connection can be made between the physical measurement of an image quality aspect and the perceived quality.
Visual attention refers to the cognitive mechanism that allows us to select and process only the relevant information arriving at our eyes; eye movements therefore depend significantly on visual attention. Saliency models, which try to simulate visual gaze and consequently visual attention, have been continuously developed over recent years. Color information has been shown to play an important role in visual attention, and it is used in saliency computations. However, psychophysical evidence explaining the relationship between color and saliency is lacking. We will present the results of an experiment aimed at studying and quantifying the saliency of colors of different hues and lightness specified in CIELab coordinates. In the experiment, 12 observers were asked to report the number of color patches presented at random locations on a masking gray background. Eye movements were recorded using an SMI remote eye tracking system and used to validate the reported data. In the presentation, we will compare the reported data with the visual gaze data for different colors and discuss the implications for our understanding of color saliency and color processing.
The performance of the color prediction methods CAT02, KSM2, Waypoint (Wpt), Best Linear, MMV (metamer mismatch volume) center, and the relit color signal are compared in terms of how well they explain the asymmetric color matching results of Logvinenko & Tokunaga [1]. In their experiment, given a Munsell paper under a test illuminant, 4 observers were asked to determine (3 repeats) which of 22 other Munsell papers made the least-dissimilar match under a match illuminant. Given this data, we address the following four questions. Question 1: Are observers choosing the original Munsell paper under the match illuminant? If they are, then the average (12 matches) color signal (cone response triple or XYZ) made under a given illuminant condition should correspond to the Munsell paper's color signal under the match illuminant. Computation shows that in 274 of the 400 cases, the relit color signal is close to the mean color signal of the matches. Question 2: How do the algorithms' predictions compare to the average observer prediction of the actual color signal of the relit paper? The Wilcoxon signed-rank test shows that KSM2, Wpt, and Best Linear perform equally, that all three slightly outperform the observer average, and that the observer average, in turn, significantly outperforms CAT02 and MMV center. Question 3: Which method most closely predicts the observer average? We found that the color signal of the relit reflectance is a better predictor of the average observer than Best Linear, which in turn is marginally better than Wpt and KSM2, both of which outperform CAT02 and MMV center. Question 4: Do the observers agree with one another? A leave-one-observer-out comparison shows that individual observers predict the average matches of the remaining observers somewhat better than the relit color signal, which in turn slightly outperforms Best Linear, Wpt, and KSM2, all of which significantly outperform CAT02 and MMV center.
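For readers unfamiliar with the statistic behind these comparisons, a Wilcoxon signed-rank test compares paired samples (here, the per-case prediction errors of two methods) without assuming normality. A minimal self-contained Python version using the normal approximation is sketched below; the actual analysis may use exact tables or different zero/tie handling, so treat this as illustrative only.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test, normal approximation.
    Zero differences are dropped; tied |differences| get average ranks.
    Returns (W_plus, two-sided p-value)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    # W+ = sum of ranks belonging to positive differences
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4               # mean of W+ under the null hypothesis
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

A small p-value indicates that one method's errors are systematically larger than the other's, which is the sense in which, e.g., KSM2 is said to "significantly outperform" CAT02 above.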