In this paper, we propose a method to estimate the ink layer layout used as input to a 3D printer. The method makes it possible to fabricate a 3D-printed patch with a desired translucency, which is represented in this study by its line spread function (LSF). A deep neural network with an encoder-decoder architecture is used for the estimation. Previous research has reported that machine learning is effective for modeling the complex relationship between an optical property such as the LSF and the ink layer layout of a 3D printer. However, it can be difficult to collect enough data to train a neural network sufficiently; in particular, although 3D printers are becoming increasingly widespread, the printing process remains time-consuming. Therefore, in this research, we generate the training data, i.e., correspondences between LSFs and ink layer layouts, by simulation on a computer. The simulation was performed with MCML, a Monte Carlo method for simulating subsurface scattering of light in multi-layered media. The deep neural network was trained on the simulated data and evaluated using a CG skin object. The results show that the proposed method can estimate an appropriate ink layer layout that reproduces an appearance close to the target color and translucency.
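To illustrate the kind of simulation MCML performs, the following is a greatly simplified, single-slab sketch of Monte Carlo photon transport: photons take exponentially distributed steps, deposit part of their weight as absorption, and re-scatter. All parameter names and values are illustrative; real MCML handles multiple layers, Henyey-Greenstein anisotropic scattering, and refractive-index mismatches at boundaries.

```python
import math
import random

def simulate_photons(mu_a=0.1, mu_s=10.0, thickness=1.0, n_photons=5000, seed=0):
    """Toy Monte Carlo photon transport in a single slab (simplified MCML-style).

    Photons enter at z=0 heading in +z; each step length is drawn from an
    exponential distribution with attenuation mu_t = mu_a + mu_s, a fraction
    mu_a/mu_t of the weight is absorbed per interaction, and the direction
    is re-drawn isotropically (MCML itself uses Henyey-Greenstein phase
    functions and multi-layer boundary handling).
    Returns (reflected, transmitted, absorbed) weight fractions.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    refl = trans = absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0
        while w > 1e-4:
            step = -math.log(rng.random()) / mu_t   # free path length
            z += uz * step
            if z < 0.0:                 # escaped back through the top surface
                refl += w
                break
            if z > thickness:           # transmitted through the slab
                trans += w
                break
            absorbed += w * (mu_a / mu_t)   # partial absorption at this event
            w *= mu_s / mu_t
            uz = 2.0 * rng.random() - 1.0   # isotropic re-scatter (simplified)
        else:
            absorbed += w               # terminate low-weight photon
    n = float(n_photons)
    return refl / n, trans / n, absorbed / n

r, t, a = simulate_photons()
print(f"R={r:.3f}  T={t:.3f}  A={a:.3f}")   # fractions sum to ~1
```

Running such a simulator over many candidate layer layouts yields the (LSF, layout) pairs used as training data, sidestepping the slow physical printing process.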
Convolutional neural network (CNN) research tends to improve performance by adding more input data, modifying existing data, or redesigning the network to better suit the task. The goal of this work is to supplement the small number of existing methods that use none of these techniques. This research aims to show that, with a standard CNN, classification accuracy can be improved without changes to the data or major network design modifications, such as adding convolution or pooling layers. A new layer is proposed that is inserted where the non-linearity functions appear in standard CNNs. This new layer gives each perceptron a localized connection to a polynomial of degree N, and it can be applied in both the convolutional and the fully connected portions of the network. The polynomial layer is added with the idea that the higher dimensionality enables a better description of the input space, which leads to a higher classification rate. Two datasets, MNIST and CIFAR-10, are used for classification; each contains 10 distinct classes and has a similar training set size, 60,000 and 50,000 images, respectively. The datasets differ in that the images are 28 × 28 grayscale and 32 × 32 RGB, respectively. It is shown that the added polynomial layers enable the chosen CNN design to achieve higher accuracy on the MNIST dataset; on CIFAR-10, a similar effect was observed only at a lower learning rate.
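One plausible realization of such a layer, sketched below under the assumption that each unit learns its own polynomial coefficients, replaces a fixed activation f(x) with f_i(x) = Σ_k c[i,k]·x^k. The class name, degree, and identity initialization are illustrative choices, not details from the paper; only the forward pass is shown.

```python
import numpy as np

class PolynomialLayer:
    """Per-unit polynomial activation of degree N (illustrative sketch).

    Each unit i maps its input x_i to sum_k c[i, k] * x_i**k, where the
    coefficients c would be learnable parameters. Here they are initialized
    so the layer starts as the identity function.
    """
    def __init__(self, n_units, degree=3):
        self.coef = np.zeros((n_units, degree + 1))
        self.coef[:, 1] = 1.0          # identity at initialization: f(x) = x

    def forward(self, x):
        """x: array of shape (batch, n_units); returns the same shape."""
        # powers[b, i, k] = x[b, i] ** k
        powers = np.stack([x ** k for k in range(self.coef.shape[1])], axis=-1)
        # contract over the polynomial degree axis k, per unit i
        return np.einsum("bik,ik->bi", powers, self.coef)

layer = PolynomialLayer(n_units=4, degree=3)
x = np.array([[1.0, -2.0, 0.5, 3.0]])
print(layer.forward(x))    # equals x while the coefficients are at identity
```

During training, gradients would flow into `coef` just as they do into convolution weights, letting each unit's non-linearity adapt to the data.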
Traditional quality estimators evaluate an image's resemblance to a reference image. However, quality estimators are not well suited to the similar but distinct task of utility estimation, in which an image is judged by how useful it is for extracting information about image content. While full-reference image utility metrics have been developed that outperform quality estimators on the utility-prediction task, assuming the existence of a high-quality reference image is not always realistic. The Oxford Visual Geometry Group's (VGG) deep convolutional neural network (CNN) [1], designed for object recognition, is modified and adapted to the task of utility estimation. This network achieves no-reference utility estimation performance near the full-reference state of the art, with a Pearson correlation of 0.946 with the subjective utility scores of the CU-Nantes database and a root mean square error of 12.3. Other no-reference techniques adapted from the quality domain yield inferior performance. The CNN also generalizes better to distortion types outside of the training set, and it is easily updated to include new types of distortion. Early stages of the network apply transformations similar to those of previously developed full-reference utility estimation algorithms.
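The two figures of merit quoted above, Pearson linear correlation and RMSE between predicted and subjective utility scores, can be computed as sketched below. The function name and the score values are illustrative, not data from the CU-Nantes experiments.

```python
import numpy as np

def pearson_and_rmse(predicted, subjective):
    """Evaluate predicted utility scores against subjective scores
    using Pearson linear correlation and root mean square error."""
    p = np.asarray(predicted, dtype=float)
    s = np.asarray(subjective, dtype=float)
    r = np.corrcoef(p, s)[0, 1]               # Pearson linear correlation
    rmse = np.sqrt(np.mean((p - s) ** 2))     # root mean square error
    return r, rmse

# Hypothetical predicted vs. subjective utility scores for four images
r, rmse = pearson_and_rmse([10, 35, 60, 90], [12, 30, 65, 88])
print(f"PLCC={r:.3f}  RMSE={rmse:.2f}")
```

A Pearson correlation near 1 indicates that the estimator preserves the ordering and linear trend of subjective scores, while RMSE is reported in the units of the subjective utility scale itself.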