Image style transfer, which remaps the content of a given image according to a style image, is a current research focus in artificial intelligence and computer vision. The proliferation of image datasets and the development of various deep learning models have led to numerous models and algorithms for image style transfer. Despite the notable successes of deep learning-based style transfer in many areas, it faces significant challenges, notably high computational cost and limited generalization capability. In this paper, we present a simple yet effective method to address these challenges. The essence of our approach is the integration of wavelet transforms into the whitening and coloring processes within an image reconstruction network (WTN). The WTN directly aligns the feature covariance of the content image with that of the style image. We demonstrate the effectiveness of our algorithm on example images, generating high-quality stylized results, and compare it with several recent methods.
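To make the covariance-alignment step concrete, the following is a minimal NumPy sketch of a generic whitening-coloring transform, not the paper's WTN; the function name, the (C, H*W) feature layout, and the eigenvalue floor `eps` are our own assumptions for illustration.

```python
import numpy as np

def wct(content_feat, style_feat, eps=1e-5):
    """Illustrative whitening-coloring transform (generic sketch, not the WTN):
    align the covariance of content features with that of style features.
    Inputs are (C, H*W) arrays of flattened feature maps."""
    # Center the content features and whiten them so their covariance
    # becomes (approximately) the identity matrix.
    c_mean = content_feat.mean(axis=1, keepdims=True)
    c = content_feat - c_mean
    c_cov = c @ c.T / (c.shape[1] - 1)
    cw, cv = np.linalg.eigh(c_cov)                        # eigendecomposition
    c_white = cv @ np.diag((cw + eps) ** -0.5) @ cv.T @ c

    # Center the style features, color the whitened content features with the
    # style covariance, then restore the style mean.
    s_mean = style_feat.mean(axis=1, keepdims=True)
    s = style_feat - s_mean
    s_cov = s @ s.T / (s.shape[1] - 1)
    sw, sv = np.linalg.eigh(s_cov)
    colored = sv @ np.diag((sw + eps) ** 0.5) @ sv.T @ c_white
    return colored + s_mean
```

Whitening removes the second-order statistics of the content features; coloring re-imposes those of the style features, which is what directly aligning the feature covariance of the content image with that of the style image amounts to.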
We applied computational style transfer, specifically coloration and brush-stroke style, to achromatic images of a ghost painting beneath Vincent van Gogh's Still life with meadow flowers and roses. Our method extends our previous work by using representative artworks by the ghost painting's author to train a Generative Adversarial Network (GAN) that integrates styles learned from stylistically distinct groups of works. An effective amalgam of these learned styles is then transferred to the target achromatic work.
We apply generative adversarial convolutional neural networks to the problem of style transfer to underdrawings and ghost-images in x-rays of fine art paintings, with a special focus on enhancing their spatial resolution. We build upon a neural architecture developed for the related problem of synthesizing high-resolution photo-realistic images from semantic label maps. Our neural architecture achieves high resolution through a hierarchy of generator and discriminator sub-networks operating across a range of spatial resolutions. This coarse-to-fine generator architecture can increase the effective resolution by a factor of eight in each spatial direction, or an overall increase in the number of pixels by a factor of 64. We also show that even just a few examples of human-generated image segmentations can greatly improve the generated images, both qualitatively and quantitatively. We demonstrate our method on works such as Leonardo's Madonna of the carnation and the underdrawing in his Virgin of the rocks, which pose several special problems in style transfer, including the paucity of representative works from which to learn and transfer style information.
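As a rough schematic of such a coarse-to-fine generator hierarchy (a minimal PyTorch sketch under our own assumptions, not the authors' architecture: the three-stage layout, channel counts, and residual refinement are illustrative, and the multi-scale discriminators are omitted), each enhancement stage doubles the working resolution, so three stages yield the stated eight-fold increase per spatial direction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineStage(nn.Module):
    """Illustrative enhancement stage (assumed design): upsample the coarser
    result 2x and refine it together with the label map at this resolution."""
    def __init__(self, label_ch=3, feat_ch=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(label_ch + 3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 3, 3, padding=1),
        )

    def forward(self, coarse_img, labels):
        up = F.interpolate(coarse_img, scale_factor=2, mode='bilinear',
                           align_corners=False)
        return up + self.refine(torch.cat([labels, up], dim=1))  # residual refinement

class CoarseToFineGenerator(nn.Module):
    """Three 2x stages on top of a coarse generator -> 8x per spatial direction."""
    def __init__(self, label_ch=3, n_stages=3):
        super().__init__()
        self.coarse = nn.Sequential(nn.Conv2d(label_ch, 3, 3, padding=1), nn.Tanh())
        self.stages = nn.ModuleList([RefineStage(label_ch) for _ in range(n_stages)])

    def forward(self, label_pyramid):
        # label_pyramid[0] is the coarsest label map; each next level is 2x larger.
        img = self.coarse(label_pyramid[0])
        for stage, labels in zip(self.stages, label_pyramid[1:]):
            img = stage(img, labels)
        return img

# Example: coarsest label maps at 128x128 give a 1024x1024 output,
# i.e. 8x per direction and 64x as many pixels.
G = CoarseToFineGenerator(label_ch=3, n_stages=3)
pyramid = [torch.randn(1, 3, 128 * 2**k, 128 * 2**k) for k in range(4)]
out = G(pyramid)   # shape (1, 3, 1024, 1024)
```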
We describe the application of convolutional neural network style transfer to the problem of improved visualization of underdrawings and ghost-paintings in fine art oil paintings. Such underdrawings and hidden paintings are typically revealed by x-ray or infrared techniques, which yield images that are grayscale and thus devoid of color and full style information. Past methods for inferring color in underdrawings have been based on physical x-ray fluorescence spectral imaging of pigments in ghost-paintings and are thus expensive, time consuming, and require equipment not available in most conservation studios. Our algorithmic methods do not need such expensive physical imaging devices. Our proof-of-concept system, applied to works by Pablo Picasso and Leonardo, reveals colors and designs that respect the natural segmentation in the ghost-painting. We believe the computed images provide insight into the artist and associated oeuvre not available by other means. Our results strongly suggest that future applications based on larger corpora of paintings for training will display color schemes and designs that even more closely resemble works of the artist. For these reasons, refinements to our methods should find wide use in art conservation, connoisseurship, and art analysis.
Digital watermarking technologies are based on the idea of embedding a data-carrying signal in a semi-covert manner in a given host image. Here we describe a new approach in which we render the signal itself as an explicit artistic pattern, thereby hiding the signal in plain sight. This pattern may be used as is, or as a texture layer in another image for various applications. There is an immense variety of signal-carrying patterns, and we present several examples. We also present some results on the detection robustness of these patterns.
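As a toy illustration of rendering a data-carrying signal as an explicit visible pattern (a NumPy sketch under our own assumptions, not the authors' scheme), each payload bit below selects the stripe orientation of one block, so the bits remain readable from the blocks' dominant orientations while the whole array can serve as a texture layer.

```python
import numpy as np

def bits_to_pattern(bits, block=32, freq=4):
    """Illustrative sketch (assumed encoding, not the paper's method): render a
    bit string as a visible stripe texture, one block per bit, where the bit
    chooses horizontal vs. vertical stripe orientation."""
    n = int(np.ceil(np.sqrt(len(bits))))          # blocks per side
    y, x = np.mgrid[0:block, 0:block]
    horiz = 0.5 + 0.5 * np.sin(2 * np.pi * freq * y / block)   # horizontal stripes
    vert  = 0.5 + 0.5 * np.sin(2 * np.pi * freq * x / block)   # vertical stripes
    canvas = np.zeros((n * block, n * block))
    for i, b in enumerate(bits):
        r, c = divmod(i, n)
        canvas[r*block:(r+1)*block, c*block:(c+1)*block] = vert if b else horiz
    return canvas   # values in [0, 1]; usable directly or as a texture layer

# Example: encode one byte as a 3x3-block pattern (the unused block stays blank).
pattern = bits_to_pattern([1, 0, 1, 1, 0, 0, 1, 0])
```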