We discuss the problem of computationally generating images resembling those of lost cultural patrimony, specifically two-dimensional artworks such as paintings and drawings. We view the problem as one of computing an estimate of the image in the lost work that best conforms to surviving information in a variety of forms: works by the source artist, including preparatory works such as cartoons for the target work; copies of the target by other artists; other works by these artists that reveal aspects of their style; historical knowledge of art methods and materials; stylistic conventions of the relevant era; and textual descriptions of the lost work as well as, more generally, images associated with the stories given by the target's title. Some of the general information linking images and text can be learned from large corpora of natural photographs and accompanying text scraped from the web. We present preliminary, proof-of-concept simulations for recovering lost artworks, with a special focus on textual information about target artworks. We outline future directions, such as methods for assessing the contributions of different forms of information to the overall task of recovering lost artworks.
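One concrete way to exploit learned image–text associations is to score candidate reconstructions against a textual description of the lost work in a joint embedding space. The following is a minimal sketch, assuming the open-source OpenAI `clip` package; the model name, description string, and candidate file names are illustrative, not part of the original work.

```python
# Sketch: rank candidate reconstructions of a lost artwork by similarity
# to a textual description, using a CLIP-style joint text-image embedding.
# Assumes `pip install git+https://github.com/openai/CLIP`.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Illustrative description; in practice drawn from historical records or the title.
description = "Judith with the head of Holofernes, Baroque oil painting"
text = clip.tokenize([description]).to(device)

# Hypothetical candidates, e.g. samples from a generative model.
candidates = ["candidate_0.png", "candidate_1.png", "candidate_2.png"]
images = torch.cat(
    [preprocess(Image.open(p)).unsqueeze(0) for p in candidates]
).to(device)

with torch.no_grad():
    image_feats = model.encode_image(images)
    text_feats = model.encode_text(text)
    # Cosine similarity between the description and each candidate.
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    scores = (image_feats @ text_feats.T).squeeze(1)

best = scores.argmax().item()
print(f"best match: {candidates[best]} (score {scores[best]:.3f})")
```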
The paper considers the problem of automatic analysis and noise suppression in dental X-ray images, e.g., images acquired by a dental Morita system. Such images contain spatially correlated noise with an unknown spectrum and a standard deviation that varies across image regions. In the paper, we propose two deep convolutional neural networks. The first network estimates the spectrum and level of noise for each pixel of a noisy image, predicting maps of the noise standard deviation at three image scales. The second network uses these maps as inputs to suppress noise in the image. It is shown, on both simulated and real-life images, that the proposed networks provide a PSNR for dental X-ray images 2.7 dB higher than that of other modern denoising methods.
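The two-stage structure can be sketched as follows. This is a minimal PyTorch sketch of the pipeline described above, with stand-in layer sizes and module names of my own choosing, not the authors' exact architecture: one network emits sigma maps at three scales, and a second network consumes the noisy image plus the (upsampled) maps.

```python
# Sketch (PyTorch) of the two-network denoising pipeline; layer sizes
# and names are illustrative stand-ins, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseMapNet(nn.Module):
    """Predicts per-pixel noise standard deviation at three image scales."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # One sigma-map head per scale (full, 1/2, and 1/4 resolution).
        self.heads = nn.ModuleList([nn.Conv2d(32, 1, 3, padding=1) for _ in range(3)])

    def forward(self, x):
        maps = []
        f = self.features(x)
        for head in self.heads:
            maps.append(F.softplus(head(f)))   # sigma must be non-negative
            f = F.avg_pool2d(f, 2)             # move to the next, coarser scale
        return maps

class DenoiseNet(nn.Module):
    """Suppresses noise given the noisy image and the three sigma maps."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x, sigma_maps):
        h, w = x.shape[-2:]
        # Upsample the coarser maps to full resolution; stack as channels.
        maps = [F.interpolate(m, size=(h, w), mode="bilinear",
                              align_corners=False) for m in sigma_maps]
        inp = torch.cat([x] + maps, dim=1)
        return x - self.body(inp)              # residual denoising

noisy = torch.randn(1, 1, 128, 128)            # toy single-channel X-ray
sigma_maps = NoiseMapNet()(noisy)
clean = DenoiseNet()(noisy, sigma_maps)
```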
The automatic analysis of fine art paintings presents a number of novel technical challenges to artificial intelligence, computer vision, machine learning, and knowledge representation quite distinct from those arising in the analysis of traditional photographs. The most important difference is that many realist paintings depict stories or episodes in order to convey a lesson, moral, or meaning. One early step in the automatic interpretation and extraction of meaning in artworks is the identification of figures ("actors"). In Christian art, specifically, one must identify the actors in order to identify the Biblical episode or story depicted, an important step in "understanding" the artwork. We designed an automatic system based on deep convolutional neural networks and a simple knowledge database to identify saints throughout six centuries of Christian art, based in large part upon saints' symbols or attributes. Our work represents initial steps in the broad task of automatic semantic interpretation of messages and meaning in fine art.
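The attribute-based identification idea can be made concrete in a few lines: a CNN scores visual attributes in a painting, and a small knowledge table maps detected attributes to saints. The sketch below is illustrative; the backbone, thresholds, and attribute list are my own stand-ins, though the attribute-to-saint pairings shown are conventional iconography.

```python
# Sketch: multi-label attribute classifier plus a toy knowledge database
# mapping attributes to saints. The network is a stand-in backbone; the
# attribute-saint pairings follow standard Christian iconography.
import torch
import torch.nn as nn

ATTRIBUTES = ["keys", "lion", "arrows", "wheel", "eagle"]

# Toy knowledge database: saint -> attributes conventionally depicted.
SAINTS = {
    "Saint Peter": {"keys"},
    "Saint Jerome": {"lion"},
    "Saint Sebastian": {"arrows"},
    "Saint Catherine": {"wheel"},
    "Saint John the Evangelist": {"eagle"},
}

class AttributeNet(nn.Module):
    """Multi-label classifier over saints' attributes (stand-in backbone)."""
    def __init__(self, n_attrs=len(ATTRIBUTES)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, n_attrs)

    def forward(self, x):
        return torch.sigmoid(self.head(self.backbone(x).flatten(1)))

def identify(image, net, threshold=0.5):
    detected = {a for a, p in zip(ATTRIBUTES, net(image)[0].tolist())
                if p > threshold}
    # Rank saints by how many of their attributes were detected.
    ranked = sorted(SAINTS, key=lambda s: len(SAINTS[s] & detected), reverse=True)
    return detected, (ranked[0] if detected else None)

net = AttributeNet()
detected, saint = identify(torch.rand(1, 3, 224, 224), net)
print(detected, saint)
```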
This study proposes a robust 3D depth-map generation algorithm that uses a single image. Unlike previous related works, which estimate a global depth map using deep neural networks, this study uses the global and local features of an image together to reflect local changes in the depth map, rather than relying on global features alone. A coarse-scale network is designed to predict the global, coarse depth-map structure from a global view of the scene, and a finer-scale random forest (RF) refines the depth map based on a combination of the original image and the coarse depth map. In the first step, we use a partial structure of the multi-scale deep network (MSDN) to predict the depth of the scene at a global level. In the second step, we propose a local patch-based deep RF to estimate local depth and to smooth the noise of the local depth map by combining it with the MSDN global-coarse network. The proposed algorithm was successfully applied to various single images and yielded more accurate depth-map estimation than other existing methods.
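The coarse-network-plus-RF refinement can be sketched as follows, assuming a small CNN as a stand-in for the MSDN global-coarse branch and scikit-learn's RandomForestRegressor for the patch-based refinement; patch size, features, and the toy data are all illustrative.

```python
# Sketch of the two-stage idea: a coarse network predicts a global depth
# map, and a random forest refines it from local image patches combined
# with the coarse depth. Stand-in models and toy data, not the paper's
# exact MSDN or deep-RF configuration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

class CoarseDepthNet(nn.Module):
    """Stand-in for the global-coarse branch of MSDN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def patches(arr, k=5):
    """Flattened k x k patch around every pixel (reflect-padded)."""
    pad = k // 2
    a = np.pad(arr, pad, mode="reflect")
    slices = [a[i:i + arr.shape[0], j:j + arr.shape[1]]
              for i in range(k) for j in range(k)]
    return np.stack(slices, axis=-1).reshape(-1, k * k)

# Toy image and ground-truth depth, just to exercise the pipeline.
img = torch.rand(1, 3, 64, 64)
gt = np.random.rand(64, 64)

coarse = CoarseDepthNet()(img).detach().numpy()[0, 0]
gray = img.mean(dim=1).numpy()[0]

# Per-pixel features: local patch of the image plus patch of coarse depth.
X = np.hstack([patches(gray), patches(coarse)])
rf = RandomForestRegressor(n_estimators=20, max_depth=8, n_jobs=-1)
rf.fit(X, gt.ravel())                      # in practice: train on many images

refined = rf.predict(X).reshape(64, 64)    # locally refined depth map
```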
This study focuses on real-time pedestrian detection using thermal images taken at night, because many pedestrian–vehicle crashes occur between late night and early dawn. However, the difference in thermal energy between a pedestrian and the road varies with the season. We therefore propose adaptive Boolean-map-based saliency (ABMS) to boost pedestrians against the background according to the particular season. For pedestrian recognition, we use the convolutional neural network-based detection algorithm You Only Look Once (YOLO), which differs from conventional classifier-based methods. Unlike the original version, we combine YOLO with a saliency feature map constructed using ABMS as a hardwired kernel, based on the prior knowledge that a pedestrian has higher saliency than the background. The proposed algorithm was successfully applied to a thermal image dataset captured from moving vehicles, and its performance was shown to be better than that of other related state-of-the-art methods.
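The "saliency as a hardwired kernel" idea amounts to computing a Boolean-map saliency map and stacking it as an extra input channel for the detector. Below is a simplified sketch of Boolean-map saliency (threshold the image at several levels, keep regions not touching the border as "surrounded", and average); the threshold step and the channel-stacking are my simplifications, not the exact ABMS recipe or YOLO integration.

```python
# Sketch: simplified Boolean-map saliency on a thermal frame, stacked as
# an extra input channel for a detector. The threshold schedule and
# border test are simplifications of BMS, not the paper's exact ABMS.
import numpy as np
from scipy.ndimage import label

def boolean_map_saliency(img, step=16):
    """Mean attention over Boolean maps; surrounded regions are salient."""
    img = ((img - img.min()) / (np.ptp(img) + 1e-8) * 255).astype(np.uint8)
    attention = np.zeros(img.shape, dtype=np.float32)
    for t in range(step, 256, step):
        bmap = img > t                      # Boolean map at threshold t
        for region in (bmap, ~bmap):        # map and its complement
            labels, _ = label(region)
            # Connected components touching the border are not surrounded.
            border = np.unique(np.concatenate([
                labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
            surrounded = ~np.isin(labels, border) & (labels > 0)
            attention += surrounded
    peak = attention.max()
    return attention / peak if peak > 0 else attention

thermal = np.random.rand(128, 160).astype(np.float32)   # stand-in frame
saliency = boolean_map_saliency(thermal)

# Stack saliency as a second channel before the detector; a real YOLO
# would need its first conv layer widened to accept the extra channel.
detector_input = np.stack([thermal, saliency], axis=0)  # shape (2, H, W)
```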