Regular Article
Volume: 7 | Article ID: 000501
Pictorial Research: Objective versus Subjective Annotations on Avercamp and Rembrandt
DOI: 10.2352/J.Percept.Imaging.2024.7.000501 | Published Online: June 2024
Abstract
Pictorial research can rely on computational or human annotations. Computational annotations offer scalability, facilitating so-called distant-viewing studies. Human annotations, on the other hand, provide insights into individual differences and judgments of a subjective nature. In this study, we demonstrate the difference between objective and subjective human annotations in two pictorial research studies: one focusing on Avercamp’s perspective choices and the other on Rembrandt’s compositional choices. In the first experiment, we investigated perspective handling by the Dutch painter Hendrick Avercamp. Using visual annotations of human figures and horizons, we could reconstruct the virtual viewpoint from which Avercamp depicted his landscapes. The results revealed an interesting trend: with increasing age, Avercamp lowered his viewpoint. In the second experiment, we studied the compositional choice that Rembrandt van Rijn made in Syndics of the Drapers’ Guild. From imaging studies it is known that Rembrandt doubted where to place the servant, and we let 100 annotators make the same choice. The subjective data were in line with the evidence from the imaging studies. Aside from having their own merit, the two experiments demonstrate two distinct ways of performing pictorial research: one that concerns the picture alone (objective) and one that concerns the relation between the picture and the viewer (subjective).

  Cite this article 

M. W. A. Wijntjes, M. J. P. van Zuijlen, "Pictorial Research: Objective versus Subjective Annotations on Avercamp and Rembrandt," in Journal of Perceptual Imaging, 2024, pp. 1–6, https://doi.org/10.2352/J.Percept.Imaging.2024.7.000501

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2024
  Article timeline
  • Received: July 2023
  • Accepted: April 2024
  • Published: June 2024

1. Introduction
A variety of scientific disciplines are involved in pictorial research, from perception psychology and computer science to (digital) art history. Recently, the art historical study of pictures has shifted from close viewing, the study of individual pictures, towards distant viewing [1, 2], in which large bodies of pictures are analyzed using computational methods. This computational approach affords cultural analytics [26]: quantifying visual trends and structures in large corpora of images. The data used for these analyses are computational annotations [2], which can range from image statistics [16] and head posture [9, 43] to many other levels of visual representation.
There is thus a paradigm shift from the human eye studying a single work to a computational eye studying (large) collections of works. This shift takes place along two dimensions: from humans to computers, and from close viewing to distant viewing. Although the convenience of computational scalability likely inspired distant viewing, the two dimensions are independent: it is possible to perform close viewing with computers [20] or distant viewing with humans [42]. However, one more dimension is needed to draw a more complete picture of possible paradigms for pictorial research: the difference between subjective and objective annotations, as illustrated in Figure 1. While computational annotations are mostly deterministic and display noise that is attributable only to their performance accuracy, variation in human annotations can have various origins: aside from (in)accuracy, it can also reflect individual differences.
To explore this distinction between objective and subjective annotations, we will first review existing work in these two fields and then present two case studies that exemplify an objective and a subjective annotation experiment.
1.1 Objective Annotations
The focus of an objective annotation study lies primarily in the picture. It uses the human as an (intelligent) measurement device, also known as human computation [14]. Many visual annotation studies come from the field of computer vision and aim at creating ground truth data for machine learning. A well-known (by now historical) example is LabelMe [36], which aimed at object labeling and polygonal segmentation (where the object is manually outlined). A crowdsourcing platform that has often been used is Amazon Mechanical Turk (AMT). Segmentation and labeling are now default options on AMT, but at the time, the LabelMe platform facilitated many segmentation and annotation studies. Besides plain segmentation, a considerable number of “richly annotated databases” have been created. These studies often involve the labeling or manipulation not only of 3D data, such as position [11], the 3D meaning of sketch lines [17], and 3D surface attitude [8], but also of material reflectance [3] and shadows [24].
Computer vision and graphics are not the only fields making use of objective annotations. Museums annotate artifacts with information about artworks, such as artist name, year, medium, and provenance: the metadata. This labeling was traditionally performed by art historians, but over the past two decades various digital tools have been developed [19, 37]. These tools can be used not only by professionals but also by volunteers through crowdsourcing, as with the Art UK painting tagger [13]. Furthermore, playful interactions have been designed to motivate volunteers to participate in enriching metadata [45]. Although many projects aim at non-spatial tagging, for instance IconClass labels [4], there is also a need for spatial annotations (e.g. bounding boxes) [38]. Collecting bounding box data on a large scale can lead to interesting insights in digital art history, such as the spatial distribution of material depictions [42]. Just as computer vision started with simple bounding boxes and evolved towards more complex annotations such as those in [14], the same trend has started to emerge in art history. For example, in [46], the authors added annotations of people, shadows, and highlights, which allowed for the analysis of stylistic trends such as perspective viewpoint and light direction.
Although there are obvious differences in rationale between computer vision/graphics and digital art history annotations, they share their focus on the picture. Insofar as the annotator plays a role, it concerns accuracy [25, 40] or expert versus crowd comparisons [31]. In the next section, we review crowdsourced perception research, where the picture has a totally different role: that of a stimulus eliciting certain perceptions and appreciations.
1.2 Subjective Annotations
Collecting data through online crowdsourcing experiments has become standard research practice in the behavioral sciences. There are various possible motivations for choosing online over lab experiments, such as speed and efficiency, but also more specific reasons such as access to certain subject pools [6, 32]. Performance generally does not seem to be degraded [5], although participants sometimes lack attention [15]. Most paradigms from experimental psychology give similar results when conducted online as compared to in the lab [10, 18].
Without reviewing the whole perception literature, we will compare some annotation studies to perception research. Categorizations (labeling) in objective annotation research [36] often serve as “ground truth” and need to be unambiguous. In perception studies, object categorization may depend on various independent variables such as the background [12], and research is particularly interested in ambiguity [30]. The computer vision research on 3D inference from images reviewed above [8, 11, 17] seems to ignore abundant evidence that such annotations are perceptually ambiguous [22, 23, 39]: the 3D inference depends on how the picture (or its physical presentation) is viewed [21, 47, 48]. Furthermore, the material probe used by [3], which let observers manipulate gloss parameters, is well known in the perception literature [33]. Many objective annotation studies seem to assume perfect perceptual constancy (i.e., perceptual invariance with respect to extrinsic changes such as light, shape, or viewpoint). However, the aforementioned gloss perception can hardly be considered perceptually constant [7, 27, 29, 44].
One could argue that all perception experiments are subjective annotations of the stimuli used. Yet the goal of most perception research is to generalize across stimuli, which renders the annotation of any individual stimulus meaningless. Still, some pictures have meaning beyond the perception experiment because they are also studied by other scientific disciplines such as media studies or art history. For example, Yarbus [50] measured the eye movements elicited by “Unexpected Visitors” by Ilya Repin. Yarbus’s goal was to measure the effect of different instructions (e.g., estimate how wealthy the family is, identify the activity, describe the clothes), but at the same time the eye movements tell us something about how that specific painting by Ilya Repin is viewed. This can be seen as a subjective annotation. To move the discussion of objective and subjective annotations forward, we now present two experiments and resume the discussion afterwards.
2. Experiment 1: Objective Annotation
One of the many elements of artistic style is perspective, i.e. how pictorial space is constructed. Radically different projection systems can be found across art history [49]. More subtle patterns have also been found, such as changes in viewpoint elevation [46]. In this experiment, we used a similar paradigm, tracing the horizon and human figures based on the principle of the horizon ratio [35]. We chose to analyze the paintings of Hendrick Avercamp, a Dutch painter from the early 17th century. He is particularly famous for his winter landscapes, often depicting people ice-skating, which happen to be particularly suited for perspective reconstruction because ice forms a flat, horizontal ground plane.
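For readers unfamiliar with the horizon ratio, the geometry behind it can be summarized as follows (a standard derivation; the notation is ours, introduced purely for illustration):

```latex
% Horizon-ratio geometry (illustrative notation, not from the cited sources).
% For a depicted figure i standing on the ground plane, let
%   s_i = image height of the figure (feet to head),
%   d_i = vertical image distance from the feet to the horizon.
% Under linear perspective, the horizon crosses every figure at the painter's
% (virtual) eye height H, so with human height h the ratio is the same for all
% figures, independent of their distance:
\[
  \frac{d_i}{s_i} = \frac{H}{h}
  \quad\Longrightarrow\quad
  d_i = \beta\, s_i, \qquad \beta = \frac{H}{h},
\]
% i.e., a zero-intercept regression of d on s estimates the viewpoint
% elevation in units of human lengths.
```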
2.1 Procedure
We used the platform Amazon Mechanical Turk to recruit and reimburse participants. We primarily used p5.js to program the visual interaction. The name p5 originates from “Proce55ing,” an alternative spelling of “Processing,” a widely used programming language [34] “in the context of visual arts” (https://processing.org). p5.js is a JavaScript library that shares much of the functionality of Processing [28]. Both Processing and p5.js aim to make code accessible to a wide audience and are thus relatively easy for beginners to use. A program created by a user is called a sketch, emphasizing the iterative design process with immediate visual feedback. We found that it works well in many different kinds of annotation experiments.
Participants were instructed to first annotate where they saw the horizon by adjusting a red horizontal line. Then they had to draw lines between the feet and the head for about 10 to 15 people in the scene. Participants were encouraged to choose people who were at various distances, including those far away (and tiny on the screen).
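To give an impression of what such an interface involves, the sketch below is a minimal p5.js reconstruction (not our exact experimental code; the image file name is a placeholder): dragging near the red line adjusts the horizon, and two clicks elsewhere draw a feet-to-head segment.

```javascript
// Minimal p5.js sketch of the annotation interface
// (a simplified reconstruction, not the exact experimental code).
let painting;           // the stimulus image
let horizonY = 200;     // current vertical position of the horizon line
let draggingHorizon = false;
let pendingFoot = null; // first click of a feet-to-head segment
let segments = [];      // completed {foot, head} annotations

function preload() {
  // Hypothetical file name; any painting reproduction works.
  painting = loadImage('avercamp.jpg');
}

function setup() {
  createCanvas(800, 600);
}

function draw() {
  image(painting, 0, 0, width, height);
  // Red horizon line, adjustable by dragging.
  stroke(255, 0, 0);
  strokeWeight(2);
  line(0, horizonY, width, horizonY);
  // Completed feet-to-head segments, plus a rubber-band preview.
  stroke(0, 0, 255);
  for (const s of segments) line(s.foot.x, s.foot.y, s.head.x, s.head.y);
  if (pendingFoot) line(pendingFoot.x, pendingFoot.y, mouseX, mouseY);
}

function mousePressed() {
  if (abs(mouseY - horizonY) < 10) {
    draggingHorizon = true;                  // grab the horizon line
  } else if (!pendingFoot) {
    pendingFoot = { x: mouseX, y: mouseY };  // mark the feet
  } else {
    segments.push({ foot: pendingFoot, head: { x: mouseX, y: mouseY } });
    pendingFoot = null;                      // segment completed at the head
  }
}

function mouseDragged() {
  if (draggingHorizon) horizonY = mouseY;
}

function mouseReleased() {
  draggingHorizon = false;
}
```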
2.2 Participants
The experiment was split into three blocks, for each of which 10 participants were recruited.
2.3 Data Analysis
A linear regression of the vertical distance between the feet and the horizon as a function of depicted human size was performed. The slope of the model has a direct meaning: it is the height of the perspective viewpoint in terms of human lengths. For example, if the slope is 2, the viewpoint of the painter was (virtually) 2 human lengths (approximately 3.5 m) high. We fitted the model not only with the offset fixed at zero (used for the actual elevation data) but also with the offset as a free parameter, and found little difference (blue and orange lines in Figure 2).
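Because the offset is fixed at zero, the fitted slope has a simple closed form, Σsᵢdᵢ / Σsᵢ²; a minimal JavaScript sketch of the computation (variable names are illustrative, not from our analysis code):

```javascript
// Zero-intercept least squares: the viewpoint elevation (in human lengths)
// is the slope of feet-to-horizon distance d against depicted figure size s.
// A minimal sketch; names are illustrative.
function viewpointElevation(sizes, distances) {
  let sxy = 0, sxx = 0;
  for (let i = 0; i < sizes.length; i++) {
    sxy += sizes[i] * distances[i];
    sxx += sizes[i] * sizes[i];
  }
  return sxy / sxx; // slope = H/h; e.g. 2 means roughly 3.5 m eye height
}

// Example: figures lying twice as far below the horizon as they are tall
// imply a viewpoint two human lengths high.
console.log(viewpointElevation([50, 80, 120], [100, 160, 240])); // 2
```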
Figure 1. Various types of pictorial research. 1. Distant viewing with computers deals with analyzing connections within collections of images based on computational image analysis. 2. Close viewing with computational help is related to the study of a single picture, for example, an optical reconstruction. 3. Using objective human annotations for distant viewing analysis is related to the experiment on Avercamp in the current study. 4. Close viewing using objective human annotations could for example also be related to drawing perspective lines. 5. Subjective annotations for distant viewing could for example be beauty or preference ratings. 6. Subjective annotations for close viewing are exemplified by the study on Rembrandt in the current paper.
2.4 Pictures
We first downloaded all Avercamp paintings we could find from WikiArt. Then we selected 33 that showed ice (ensuring a flat horizontal ground) and multiple people. We used the WikiArt metadata for production years.
2.5 Results and Discussion
In Figures 2 and 3, representative examples are displayed on the left and the overall perspective elevation over time is shown on the right. Before discussing these results, it should be reported that the horizon estimates fluctuated considerably and the task seemed to cause some confusion. Apparently, the concept of a horizon is very clear to the authors, but not to the average AMT worker, or at least not as it was explained in the experiment. For further analysis, we chose one representative participant, whose horizon data were used for all further computations. Note that we did use the complete set of human figure annotations from all participants; only the horizon line annotation was taken from a single observer.
Figure 2. (a) Three examples of data. The (reduced-contrast) paintings are overlaid with raw annotation data; different colors denote different annotators. The slope in the scatterplots indicates the elevation of the viewpoint. (b) Viewpoint elevations plotted over time, with each painting shown beneath its production year. A negative trend is clearly visible.
Figure 3. On the left, the original painting is shown: Syndics of the Drapers’ Guild (1662) by Rembrandt. On the right, the experimental interface is shown. As can be seen, the servant is cut out of the original and can be freely positioned anywhere on the canvas. Painting reproduced from The Rijksmuseum, The Netherlands, under a CC 0.0 license.
The negative slope (a decrease of about 0.1 human length per year) was highly significant (t(32) = −4.84, p < 0.0001). The coefficient of determination was moderate (R² = 0.43), indicating that although the trend itself is significant, it explains only 43% of the variance in perspective attitude found across the 33 pictures.
The results imply that Avercamp lowered his viewpoint as he became older. The effect is rather strong, and we had not anticipated it. Interestingly, the same trend was found for Canaletto [46], an Italian painter who was active a century after Avercamp. As these two painters seem unrelated, this raises the question of whether there is a general trend among artists relating perspective elevation to age.
3. Experiment 2: Subjective Annotation
In the second experiment, we aimed to answer a question of artistic composition. Syndics of the Drapers’ Guild by Rembrandt is an intriguing painting: the unconventional viewpoint, the feeling of interrupting a meeting that has just started. Moreover, x-ray studies have shown [41] that Rembrandt had doubts about where to position the servant, the person in the middle behind the others. According to [41], Rembrandt initially planned to position this person at the far right of the painting. Informal conversations with art historians led us to an interesting question: Where would a naive viewer position the servant, and how does this compare to Rembrandt’s choice and rejection?
3.1 Participants
A total of 100 participants were recruited on AMT.
3.2 Procedure
Participants were instructed to position the servant at the location that they believed resulted in the best composition. The experiment is shown in Fig. 3. The initial position of the servant depended on where the user’s mouse was at that moment, and was thus not actively randomized.
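As an illustration, this interaction takes only a few lines of p5.js (a simplified reconstruction with placeholder file names, not our exact code): the cutout follows the mouse until the participant clicks to commit a position.

```javascript
// Minimal p5.js sketch of the placement task (simplified reconstruction;
// file names are placeholders). "background_" avoids shadowing p5's
// built-in background() function.
let background_, servant;   // edited scene and cut-out servant
let placed = null;          // committed position, if any

function preload() {
  background_ = loadImage('syndics_without_servant.jpg');
  servant = loadImage('servant_cutout.png');
}

function setup() {
  createCanvas(900, 500);
}

function draw() {
  image(background_, 0, 0, width, height);
  // Until a click, the cutout simply tracks the mouse, so its initial
  // position depends on where the cursor happens to be.
  const pos = placed ?? { x: mouseX, y: mouseY };
  imageMode(CENTER);
  image(servant, pos.x, pos.y);
  imageMode(CORNER);
}

function mousePressed() {
  placed = { x: mouseX, y: mouseY };  // commit the composition choice
}
```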
3.3 Picture
On the left side of Fig. 3, the original (unedited) image of Syndics of the Drapers’ Guild is shown. We used photo-editing software to cut out the foreground scene and the servant. The place of the servant was filled in with image elements from the remaining scene so as not to give away Rembrandt’s choice.
3.4 Results and Discussion
The compositional preferences of the 100 participants are visualized in Figure 4. It can readily be seen that four horizontal locations dominate the data. One of them is similar to the actual painting, and the far-right alternative is similar to Rembrandt’s underdrawing, i.e. the initial “sketch” [41]. Perhaps the other two locations were also considered by Rembrandt.
Figure 4. Visualization of the 100 responses where participants placed the servant. There are four dominant horizontal positions of more or less equal probability, except for the one on the left.
While the preferred horizontal positions seem to be discrete, the vertical preference is more continuous. The range of positions seems to be limited by what is anatomically and ergonomically plausible: somewhere between a sitting and standing position. It is interesting to consider what effect these horizontal and vertical variations have on the perception of the narrative.
4. General Discussion
We presented two visual experiments that demonstrate the difference between objective and subjective annotations in the context of close and distant viewing. Both studies make use of spatial annotations, and both concern artistic choices of composition. Yet the perspective study relies on objective annotations while the composition study relies on subjectivity. What makes the two results categorically different, and how do they relate to existing pictorial research?
Individual differences are the core difference in data interpretation between objective and subjective annotations. In both experiments, the data contain substantial variance: in Fig. 2, the linear regression plots show a certain noise level, and in Fig. 4, we see continuous distributions along the vertical direction and discrete, multi-modal distributions along the horizontal direction. What do these variances mean? In both cases, part of the noise emerges from human inaccuracy. This inherent inaccuracy of human judgment can be interesting in a purely behavioral context but counts as irrelevant measurement noise in our context. However, this inherent noise does not seem to be responsible for all of the variance. In the first study, part of the variance can be attributed to the pictorial content. In Fig. 2(a), the residuals of the linear regressions denote how homogeneously the figure sizes diminish towards the horizon. Deviations from the regression line are due either to natural variation in human sizes or to inaccurate perspective handling. Although we cannot dissociate between these two, we should realize that both explanations concern the depiction and not the viewer. In Fig. 2(b), the residuals refer to differences in viewpoint. We found that although the downward trend over time was significant, 57% of the variance remained unexplained by this trend. Again, this can have various causes, but they all concern the circumstances under which the picture was produced, e.g. artistic or practical choices for a different viewpoint. The variance in the data does not concern the observer.
Experiment 2, on the other hand, concerns the relation between the picture and the viewer. The variation in preferences shows clear clusters that arise from multiple individual judgments. The annotations are characterized by the individual differences between observers. They reveal the different artistic choices that could have been made. Unlike Avercamp’s viewpoint elevation, Syndics of the Drapers’ Guild affords immediate visualization of a compositional choice; manually varying the perspective elevation would be rather complex. The possibilities for these subjective annotations thus depend on the type of annotation and the pictures.
A painting (or a collection of paintings) can be visually analyzed by humans and by computers. Studying a single work can be called close viewing, while comparison between works is called distant viewing [2]. While many digital art historians use computer annotations, our study was based on human annotations. The advantage of computer annotations is scalability. The advantage of humans is that they can be instructed relatively easily and that they display individual differences, which are not to be found in computational approaches. The two experiments reported here span only a small part of what is possible when we combine perception experiments with the study of pictures. Nearly every existing visual paradigm can be translated towards a depiction-centered instead of a vision-centered research question. This seems an exciting new research area, and a timely one: online experiments are mainstream, and online art collections grow every year. Furthermore, we are convinced that meaningful pictorial research cannot do without human annotations, however advanced computer annotations may become.
The two experiments we conducted investigate pictorial characteristics of paintings that are often cataloged in databases. The (computational) analysis of image corpora can make use of both image data and metadata. The latter are created through annotations by art professionals and mostly concern objective, non-visual information such as the maker, medium, and provenance. Yet many museum collections have been using their own archival methods, making comparison across collections challenging. This will change in the near future as initiatives like the International Image Interoperability Framework [38] gain traction. These new methods allow for a wider range of metadata, such as user-generated annotations. From this perspective, any perception experiment related to pictorial research can be added to these metadata. Being aware of other perception studies or annotations conducted on an image or image collection could result in unprecedentedly linked pictorial research.
Acknowledgment
This work is part of the research program Visual Communication of Material Properties with project number 276-54-001, which is financed by the Dutch Research Council (NWO). Furthermore, the authors would like to thank Prof. Dr. Joris Dik, who introduced them to many intriguing art historical questions of which the Drapers’ Guild is merely one.
References
1. Arnold, T., Tilton, L., "Distant viewing: analyzing large visual corpora," Digit. Scholarship Humanit. 34, i3–i16 (2019). doi:10.1093/llc/fqz013
2. Arnold, T., Tilton, L., Distant Viewing: Computational Exploration of Digital Images (MIT Press, Cambridge, MA, 2023).
3. Bell, S., Upchurch, P., Snavely, N., Bala, K., "OpenSurfaces: A richly annotated catalog of surface appearance," ACM Trans. Graph. 32, 1–17 (2013). doi:10.1145/2461912.2462002
4. Brandhorst, H., "Aby Warburg's wildest dreams come true?" Vis. Res. 29, 72–88 (2013).
5. Buhrmester, M., Kwang, T., Gosling, S. D., "Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data?" Perspect. Psychol. Sci. 6, 3–5 (2011). doi:10.1177/1745691610393980
6. Buhrmester, M., Kwang, T., Gosling, S. D., "Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality data?" Perspect. Psychol. Sci. 6, 3–5 (2011). doi:10.1177/1745691610393980
7. Chadwick, A. C., Kentridge, R., "The perception of gloss: A review," Vis. Res. 109, 221–235 (2015). doi:10.1016/j.visres.2014.10.026
8. Chen, W., Qian, S., Fan, D., Kojima, N., Hamilton, M., Deng, J., "Oasis: A large-scale dataset for single image 3D in the wild," Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR) (IEEE, Piscataway, NJ, 2020).
9. Chou, J.-P., Stork, D. G., "Computational tracking of head pose through 500 years of fine-art portraiture," Electron. Imaging 35, 1–13 (2023). doi:10.2352/EI.2023.35.13.CVAA-211
10. Crump, M. J., McDonnell, J. V., Gureckis, T. M., "Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research," PLoS ONE 8 (2013). doi:10.1371/journal.pone.0057410
11. Dai, A., Chang, A. X., Savva, M., Halber, M., Funkhouser, T., Nießner, M., "ScanNet: Richly-annotated 3D reconstructions of indoor scenes," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2017), pp. 5828–5839.
12. Davenport, J. L., Potter, M. C., "Scene consistency in object and background perception," Psychol. Sci. 15, 559–564 (2004). doi:10.1111/j.0956-7976.2004.00719.x
13. Eccles, K., Greg, A., "Your paintings tagger: Crowdsourcing descriptive metadata for a national virtual collection," Crowdsourcing our Cultural Heritage (Routledge, 2016), pp. 185–208.
14. Gingold, Y., Shamir, A., Cohen-Or, D., "Micro perceptual human computation for visual tasks," ACM Trans. Graph. 31, 1–12 (2012). doi:10.1145/2231816.2231817
15. Goodman, J. K., Cryder, C. E., Cheema, A., "Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples," J. Behav. Decis. Mak. 26, 213–224 (2013). doi:10.1002/bdm.1753
16. Graham, D. J., Redies, C., "Statistical regularities in art: Relations with visual coding and perception," Vis. Res. 50, 1503–1509 (2010). doi:10.1016/j.visres.2010.05.002
17. Gryaditskaya, Y., Sypesteyn, M., Hoftijzer, J. W., Pont, S., Durand, F., Bousseau, A., "OpenSketch: A richly-annotated dataset of product design sketches," ACM Trans. Graph. 38, 232 (2019). doi:10.1145/3355089.3356533
18. Haghiri, S., Rubisch, P., Geirhos, R., Wichmann, F., von Luxburg, U., "Comparison-based framework for psychophysics: Lab versus crowdsourcing," Preprint, arXiv:1905.07234 (2019).
19. Hollink, L., Schreiber, A. T., Wielemaker, J., Wielinga, B. J., "Semantic annotation of image collections," Knowledge Capture 2003 – Proc. Knowledge Markup and Semantic Annotation Workshop (2003), Vol. 2, pp. 41–48.
20. Johnson, M. K., Stork, D. G., Biswas, S., Furuichi, Y., "Inferring illumination direction estimated from disparate sources in paintings: an investigation into Jan Vermeer's Girl with a Pearl Earring," Proc. SPIE 6810, 164–175 (2008).
21. Koenderink, J., Wijntjes, M., van Doorn, A., "Zograscopic viewing," i-Perception 4, 192–206 (2013). doi:10.1068/i0585
22. Koenderink, J. J., van Doorn, A. J., Kappers, A. M. L., "Surface perception in pictures," Perception & Psychophysics 52, 487–496 (1992). doi:10.3758/BF03206710
23. Koenderink, J. J., van Doorn, A. J., Kappers, A. M. L., Todd, J. T., "Ambiguity and the 'mental eye' in pictorial relief," Perception 30, 431–448 (2001). doi:10.1068/p3030
24. Kovacs, B., Bell, S., Snavely, N., Bala, K., "Shading annotations in the wild," Computer Vision and Pattern Recognition (CVPR) (IEEE, Piscataway, NJ, 2017).
25. Liu, S., Chen, C., Lu, Y., Ouyang, F., Wang, B., "An interactive method to improve crowdsourced annotations," IEEE Trans. Vis. Comput. Graphics 25, 235–245 (2018). doi:10.1109/TVCG.2018.2864843
26. Manovich, L., Cultural Analytics (MIT Press, Cambridge, MA, 2020).
27. Marlow, P. J., Kim, J., Anderson, B. L., "The perception and misperception of specular surface reflectance," Curr. Biol. 22, 1909–1913 (2012). doi:10.1016/j.cub.2012.08.009
28. McCarthy, L., Reas, C., Fry, B., Getting Started with p5.js: Making Interactive Graphics in JavaScript and Processing (Maker Media, Inc., San Francisco, 2015).
29. Motoyoshi, I., Matoba, H., "Variability in constancy of the perceived surface reflectance across different illumination statistics," Vis. Res. 53, 30–39 (2012). doi:10.1016/j.visres.2011.11.010
30. Muth, C., Hesslinger, V. M., Carbon, C.-C., "The appeal of challenge in the perception of art: How ambiguity, solvability of ambiguity, and the opportunity for insight affect appreciation," Psychol. Aesthetics, Creat., Arts 9, 206 (2015). doi:10.1037/a0038814
31. Oosterman, J., Bozzon, A., Houben, G. J., Nottamkandath, A., Dijkshoorn, C., Aroyo, L., Leyssen, M. H., Traub, M. C., "Crowd versus experts: nichesourcing for knowledge intensive tasks in cultural heritage," Proc. 23rd Int'l. Conf. World Wide Web (2014), pp. 567–568.
32. Paolacci, G., Chandler, J., Ipeirotis, P. G., "Running experiments on Amazon Mechanical Turk," Judgment Decis. Mak. 5, 411–419 (2010). doi:10.1017/S1930297500002205
33. Pellacini, F., Ferwerda, J. A., Greenberg, D. P., "Toward a psychophysically-based light reflection model for image synthesis," Proc. 27th Annual Conf. on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley, 2000), pp. 55–64. doi:10.1145/344779.344812
34. Reas, C., Fry, B., Processing: A Programming Handbook for Visual Designers and Artists (The MIT Press, Cambridge, MA, 2007).
35. Rogers, S., "The horizon-ratio relation as information for relative size in pictures," Percept. & Psychophys. 58, 142–152 (1996). doi:10.3758/BF03205483
36. Russell, B. C., Torralba, A., Murphy, K. P., Freeman, W. T., "LabelMe: a database and web-based tool for image annotation," Int. J. Comput. Vis. 77, 157–173 (2008). doi:10.1007/s11263-007-0090-8
37. Schreiber, G., Amin, A., Aroyo, L., van Assem, M., de Boer, V., Hardman, L., Hildebrand, M., Omelayenko, B., van Osenbruggen, J., Tordai, A., Wielemaker, J., Wielinga, B., "Semantic annotation and search of cultural-heritage collections: The MultimediaN E-Culture demonstrator," J. Web Semant. 6, 243–249 (2008). doi:10.1016/j.websem.2008.08.001
38. Snydman, S., Sanderson, R., Cramer, T., "The International Image Interoperability Framework (IIIF): A community & technology approach for web-based images," Proc. IS&T Archiving 2015 (IS&T, Springfield, 2015), pp. 16–21.
39. Todd, J. T., Oomes, A. H., Koenderink, J. J., Kappers, A. M., "On the affine structure of perceptual space," Psychol. Sci. 12, 191–196 (2001). doi:10.1111/1467-9280.00335
40. Upchurch, P., Sedra, D., Mullen, A., Hirsh, H., Bala, K., "Interactive consensus agreement games for labeling images," Proc. AAAI Conf. on Human Computation and Crowdsourcing (2016), Vol. 4, pp. 239–248. doi:10.1609/hcomp.v4i1.13293
41. Van Schendel, A., "De schimmen van de Staalmeesters: Een röntgenologisch onderzoek" [The shades of the Staalmeesters: An X-ray examination], Oud Holland 71, 1–23 (1956). doi:10.1163/187501756X00019
42. van Zuijlen, M. J., Lin, H., Bala, K., Pont, S. C., Wijntjes, M. W., "Materials in paintings (MIP): An interdisciplinary dataset for perception, art history, and computer vision," PLoS ONE 16, e0255109 (2021). doi:10.1371/journal.pone.0255109
43. van Zuijlen, M. J., Pont, S. C., Wijntjes, M. W., "Conventions and temporal differences in painted faces: A study of posture and color distribution," Electron. Imaging 2020, 267-1 (2020). doi:10.2352/ISSN.2470-1173.2020.11.HVEI-267
44. Wendt, G., Faul, F., Ekroll, V., Mausfeld, R., "Disparity, motion, and color information improve gloss constancy performance," J. Vis. 10, 7 (2010). doi:10.1167/10.9.7
45. Wieser, C., Bry, F., Bérard, A., Lagrange, R., "ARTigo: building an artwork search engine with games and higher-order latent semantic analysis," Proc. AAAI Conf. on Human Computation and Crowdsourcing (2013), Vol. 1, pp. 15–20. doi:10.1609/hcomp.v1i1.13060
46. Wijntjes, M. W. A., "Shadows, highlights and faces: the contribution of a 'human in the loop' to digital art history," Art & Perception 9, 66–89 (2021). doi:10.1163/22134913-bja10022
47. Wijntjes, M. W. A., "A new view through Alberti's window," J. Exp. Psychol.: Hum. Perception Perform. 40, 488 (2014). doi:10.1037/a0035396
48. Wijntjes, M. W. A., Pont, S. C., "Pointing in pictorial space: Quantifying the perceived relative depth structure in mono and stereo images of natural scenes," ACM Trans. Appl. Percept. 7, 1–8 (2010). doi:10.1145/1823738.1823742
49. Willats, J., Art and Representation: New Principles in the Analysis of Pictures (Princeton University Press, Princeton, 1997).
50. Yarbus, A. L., Eye Movements and Vision (Springer, 1967/2013).