This conference on computer image analysis in the study of art presents leading research in the application of image analysis, computer vision, and pattern recognition to problems of interest to art historians, curators and conservators. A number of recent questions and controversies have highlighted the value of rigorous image analysis in the study of art, particularly painting. Consider these examples: fractal image analysis for the authentication of drip paintings possibly by Jackson Pollock; sophisticated perspective, shading and form analysis to address claims that early Renaissance masters such as Jan van Eyck or Baroque masters such as Georges de la Tour traced optically projected images; automatic multi-scale analysis of brushstrokes for the attribution of portraits within a painting by Perugino; and multi-spectral, x-ray and infra-red scanning and image analysis of the Mona Lisa to reveal the painting techniques of Leonardo. The value of image analysis to these and other questions strongly suggests that current and future computer methods will play an ever larger role in the scholarship of the visual arts.
Our central goal was to create automatic methods for semantic segmentation of human figures in images of fine art paintings. This is a difficult problem because the visual properties and statistics of artwork differ markedly from those of the natural photographs widely used in research on automatic segmentation. We used a deep neural network to transfer artistic style from paintings across several centuries to modern natural photographs in order to create a large data set of surrogate art images. We then used this data set to train a separate deep network for semantic segmentation of genuine art images. Such data augmentation led to a marked improvement in the segmentation of challenging genuine artworks, demonstrated both qualitatively and quantitatively. Our technique for creating surrogate artworks should find wide use in many tasks in the growing field of computational analysis of fine art.
We present a novel bi-modal system based on deep networks to address the problem of learning associations and simple meanings of objects depicted in "authored" images, such as fine art paintings and drawings. Our overall system processes both the images and associated texts in order to learn associations between images of individual objects, their identities and the abstract meanings they signify. Unlike past deep nets that describe depicted objects and infer predicates, our system identifies meaning-bearing objects ("signifiers") and their associations ("signifieds") as well as basic overall meanings for target artworks. Our system had a precision of 48% and a recall of 78%, with an F1 metric of 0.6, on a curated set of Dutch vanitas paintings, a genre celebrated for its concentration on conveying meanings of great import at the time of execution. We developed and tested our system on fine art paintings, but our general methods can be applied to other authored images.
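As a quick sanity check on the reported numbers, the F1 metric is the harmonic mean of precision and recall; a minimal computation using the values stated in the abstract:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the vanitas-painting evaluation
p, r = 0.48, 0.78
print(round(f1_score(p, r), 2))  # → 0.59, i.e. the reported 0.6 to one decimal
```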
Deep neural networks for semantic segmentation have recently outperformed other methods for natural images, partly due to the abundance of training data for this case. However, applying these networks to pictures from a different domain often leads to a significant drop in accuracy. Fine art paintings, particularly highly stylized works such as those of Cubism or Expressionism, are challenging due to large deviations in the shape and texture of certain objects when compared to natural images. In this paper, we demonstrate that style transfer can be used as a form of data augmentation during the training of CNN-based semantic segmentation models to improve their accuracy on art pieces by a specific artist. For this, we pick a selection of paintings in a specific style by the painters Egon Schiele, Vincent Van Gogh, Pablo Picasso and Willem de Kooning, create a stylized training dataset by transferring artist-specific style to natural photographs, and show that training the same segmentation network on these surrogate artworks improves its accuracy on fine art paintings. We also release a dataset with pixel-level annotations of 60 fine art paintings to the public for the evaluation of our method.
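The augmentation strategy shared by the two abstracts above rests on one observation: style transfer changes a photograph's appearance but not the locations of its objects, so each annotated photo yields one surrogate training pair per artist style while its segmentation mask is reused unchanged. A minimal sketch of that pipeline shape, in which `stylize` is a deliberately trivial stand-in (a per-pixel recoloring, not a real neural style-transfer model such as the networks the papers use):

```python
import random

def stylize(photo, style_seed):
    # Stand-in for a neural style-transfer model: deterministically
    # perturbs pixel values per "style". A real pipeline would run the
    # photo through a trained style-transfer network here.
    rng = random.Random(style_seed)
    shift = rng.randint(1, 50)
    return [[(px + shift) % 256 for px in row] for row in photo]

def augment(photos_with_masks, style_seeds):
    """Each (photo, mask) pair yields one surrogate artwork per style.
    The mask is style-invariant, so it is copied as-is."""
    surrogate = []
    for photo, mask in photos_with_masks:
        for seed in style_seeds:
            surrogate.append((stylize(photo, seed), mask))
    return surrogate

photo = [[10, 20], [30, 40]]   # toy 2x2 grayscale "photograph"
mask = [[1, 0], [0, 1]]        # its pixel-level annotation
data = augment([(photo, mask)], style_seeds=[0, 1, 2])
print(len(data))  # 3 surrogate training pairs, all sharing the same mask
```

The resulting surrogate pairs would then be fed to an ordinary supervised segmentation trainer in place of, or alongside, the natural-photo pairs.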
We present the results of our image analysis of portrait art from the Roman Empire's Julio-Claudian dynastic period. Our novel approach involves processing pictures of ancient statues, cameos, altar friezes, bas-reliefs, frescoes, and coins using modern mobile apps, such as Reface and FaceApp, to improve identification of the historical subjects depicted. In particular, we have discovered that the Reface app has a limited but useful capability to restore the approximate appearance of statues' damaged noses. We confirm many traditional identifications, propose a few corrections of identifications for items held in museums and private collections around the world, and discuss the advantages and limitations of our approach. For example, Reface may make aquiline noses appear wider or shorter than they should be; this deficiency can be partially corrected if multiple views are available. We demonstrate that our approach can be extended to analyze portraiture from other cultures and historical periods. The article is intended for a broad readership interested in how modern AI-based mobile-imaging solutions merge with the humanities to help improve our understanding of modern civilization's ancient past and increase appreciation of our diverse cultural heritage.