We discuss the problem of computationally generating images resembling those of lost cultural patrimony, specifically two-dimensional artworks such as paintings and drawings. We view the problem as one of computing an estimate of the image in the lost work that best conforms to surviving information in a variety of forms: works by the source artist, including preparatory works such as cartoons for the target work; copies of the target by other artists; other works by these artists that reveal aspects of their style; historical knowledge of art methods and materials; stylistic conventions of the relevant era; textual descriptions of the lost work and, more generally, images associated with the stories given by the target's title. Some of the general information linking images and text can be learned from large corpora of natural photographs and accompanying text scraped from the web. We present preliminary, proof-of-concept simulations for recovering lost artworks, with a special focus on textual information about target artworks. We outline future directions, such as methods for assessing the contributions of different forms of information to the overall task of recovering lost artworks.
Jesper Eriksson, George H. Cann, Anthony Bourached, David G. Stork, "Recovering lost artworks by deep neural networks: Motivations, methodology, and proof-of-concept simulations," in Electronic Imaging, 2023, pp. 210-1 - 210-7, https://doi.org/10.2352/EI.2023.35.13.CVAA-210
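
As one possible instantiation of the idea that image-text links learned from web-scale corpora can be used to judge how well a candidate reconstruction conforms to a textual record of a lost work, the sketch below scores candidate images against a description with a pretrained CLIP model. This is a minimal illustration, not the authors' pipeline; the model name, file paths, and the example description are hypothetical placeholders.

```python
# Minimal sketch: rank candidate reconstructions by how well they match a
# textual description of a lost work, using CLIP image-text embeddings.
# Paths and the description are hypothetical; this is not the paper's method.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

description = "a seated woman reading a letter by a window"  # hypothetical textual record
candidate_paths = ["candidate_a.png", "candidate_b.png"]      # hypothetical generated estimates
candidates = [Image.open(p).convert("RGB") for p in candidate_paths]

inputs = processor(text=[description], images=candidates,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
    # Normalize embeddings and take cosine similarity of each image to the text.
    img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(-1)

best = int(scores.argmax())
print(f"Best-conforming candidate: {candidate_paths[best]} (score {scores[best]:.3f})")
```

In a fuller system, such a text-conformity score would be only one term among several, combined with evidence from preparatory works, copies, and period-specific stylistic constraints as described in the abstract.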