In this paper, we propose a method to estimate the ink layer layout used as input for a 3D printer. The method makes it possible to reproduce a 3D-printed patch with a desired translucency, which is represented in this study by a Line Spread Function (LSF). A deep neural network with an encoder-decoder architecture is used for the estimation. Previous research has reported that machine learning is effective for modeling the complex relationship between an optical property such as the LSF and the ink layer layout of a 3D printer. However, it can be difficult to collect enough data to train a neural network sufficiently; in particular, although 3D printers are becoming more and more widespread, the printing process is still time-consuming. Therefore, in this research, we prepare the training data, i.e., the correspondence between the LSF and the ink layer layout of the 3D printer, by simulating it on a computer. MCML, a method for simulating the subsurface scattering of light in multi-layered media, was used to perform the simulation. The deep neural network was trained on the simulated data and evaluated using a CG skin object. The results show that the proposed method can estimate an ink layer layout that reproduces an appearance close to the target color and translucency.
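The kind of Monte Carlo light-transport simulation that MCML performs can be illustrated with a heavily simplified sketch. The function below is not the authors' simulation code; it is a minimal, hypothetical example that traces photon packets through a single semi-infinite scattering medium and tallies the diffuse reflectance, omitting MCML's multi-layer handling and Fresnel boundary reflection. The function name and all parameters are illustrative assumptions.

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, g, n_photons=20000, seed=0):
    """Fraction of launched photon weight that escapes back through the top
    surface of a semi-infinite homogeneous medium (total diffuse reflectance).
    mu_a, mu_s: absorption / scattering coefficients; g: HG anisotropy.
    Simplifications vs. full MCML: one layer only, no refractive-index
    mismatch (no Fresnel reflection at the boundary)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    escaped = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0      # depth, z-direction cosine, weight
        while True:
            # free path length sampled from the Beer-Lambert distribution
            z += uz * (-math.log(rng.random() + 1e-12) / mu_t)
            if z <= 0.0:              # crossed the top surface: escaped
                escaped += w
                break
            w *= albedo               # deposit absorbed fraction of weight
            if w < 1e-4:              # Russian roulette termination
                if rng.random() > 0.1:
                    break
                w /= 0.1
            # sample the scattering angle from Henyey-Greenstein
            if g == 0.0:
                cos_t = 2.0 * rng.random() - 1.0
            else:
                tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
                cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
            sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            phi = 2.0 * math.pi * rng.random()
            # azimuthal symmetry: only the z-cosine matters for reflectance
            if abs(uz) > 0.99999:
                uz = cos_t if uz > 0.0 else -cos_t
            else:
                uz = -sin_t * math.cos(phi) * math.sqrt(1.0 - uz * uz) \
                     + uz * cos_t
    return escaped / n_photons
```

Repeating such a simulation over many candidate ink layer layouts is one way the LSF-to-layout training pairs described above could be generated without printing.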
This paper presents a method to estimate the appearance difference between two 3D objects, such as 3D prints, using an RGB camera in a controlled lighting environment. The method consists of three parts. First, image color differences are calculated after geometry alignment under different light sources. Second, the glossiness of the objects is estimated with a movable light source. Finally, psychophysical data are used to determine the parameters for estimating the appearance differences of 3D prints.
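The first step, an image color difference after geometry alignment, can be sketched as a mean CIE76 ΔE*ab over pixel pairs of two already-aligned images. This is a generic illustration, not the paper's actual metric; the function names are assumptions, and the paper may well use a different color-difference formula.

```python
import math

def srgb_to_lab(rgb):
    """Convert one sRGB pixel (components in [0, 1]) to CIELAB (D65 white)."""
    def inv_gamma(c):  # sRGB electro-optical transfer function
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (inv_gamma(c) for c in rgb)
    # linear RGB -> CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def mean_delta_e(img_a, img_b):
    """Mean CIE76 color difference between two geometry-aligned images,
    given as equal-length lists of (r, g, b) tuples in [0, 1]."""
    total = 0.0
    for pa, pb in zip(img_a, img_b):
        total += math.dist(srgb_to_lab(pa), srgb_to_lab(pb))
    return total / len(img_a)
```

Averaging this quantity over captures under several light sources, as the abstract describes, would yield one scalar per illumination condition.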
The Digital Humanities Lab (DHLab) is a technology-oriented research group within the Faculty of Humanities of the University of Basel. The research profile of the DHLab integrates computer science, digital imaging, computational photography, the accessibility of digital objects, and solutions for digital preservation, with the aim of supporting scholars in the emerging field of digital humanities research. The project Digital Materiality, a collaboration with the Seminar of Art History, examines how new imaging and visualization methods can be applied to describe the reflection of light on the surfaces of artworks. Of main interest are mosaics and early prints; both types of artworks have a strong interaction with light that cannot be captured adequately by standard photographic approaches, since conventional photographic processes cannot capture the dynamic component of the light-surface interaction that is specific to this kind of art. The technical part of the project focuses on improving the methodology generally described as Reflectance Transformation Imaging (RTI). RTI is an approach based on a mathematical model that is fitted to image data derived from photographs. By modifying the mathematical model used to represent the light-surface interaction, the digital reproduction can be improved so that the relevant attributes of artworks are carried into the digital domain in a way that fulfills the requirements and needs of humanities researchers for a digital representation. The improvements incorporate more sophisticated, yet still robust and simple, reflection models for the realistic visualization of localized diffuse and specular surfaces within the same digital reproduction. For humanities research, the most important aspect is compatibility with web-based Virtual Research Environments (VREs). The presented approach can be integrated in such environments because the client-side visualization is implemented in WebGL.
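The per-pixel model fitting that underlies RTI can be illustrated with the classic six-term Polynomial Texture Map (PTM), the simplest member of the RTI family; the abstract's improved reflection models would replace this basis, but the fitting pipeline is the same. This sketch is an assumption for illustration, not the project's implementation.

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit the 6-term PTM model per pixel by linear least squares.
    light_dirs: (N, 2) projected light directions (lu, lv), one per photo;
    intensities: (N, P) observed intensities for P pixels in each photo.
    Returns a (6, P) coefficient array: one biquadratic per pixel."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    basis = np.stack([lu * lu, lv * lv, lu * lv, lu, lv,
                      np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted model for a new, interactive light direction."""
    basis = np.array([lu * lu, lv * lv, lu * lv, lu, lv, 1.0])
    return basis @ coeffs
```

A WebGL client such as the one described would evaluate the fitted per-pixel model in a fragment shader, with the light direction driven by the viewer's mouse.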
For flexibility, performance, and data permanence, the server side will be an enhanced image-server layer following the image-based environment of the International Image Interoperability Framework (IIIF).