Regular Articles
Volume: 4 | Article ID: jpi0139
Artistic Style Meets Artificial Intelligence
DOI: 10.2352/J.Percept.Imaging.2021.4.2.020501  |  Published Online: March 2021
Abstract
Recent developments in neural network image processing motivate the question of how these technologies might better serve visual artists. Research goals to date have largely either focused on pastiche interpretations of what is framed as artistic “style” or sought to divulge heretofore unimaginable dimensions of algorithmic “latent space,” but have failed to address the process an artist might actually pursue when engaged in the reflective act of developing an image from imagination and lived experience. The tools, in other words, are constituted in research demonstrations rather than as tools of creative expression. In this article, the authors explore the phenomenology of the creative environment afforded by artificially intelligent image transformation and generation, drawing on autoethnographic reviews of the authors’ individual approaches to artificial intelligence (AI) art. They offer a post-phenomenology of “neural media” such that visual artists may begin to work with AI technologies in ways that support naturalistic processes of thinking about and interacting with computationally mediated interactive creation.

  Cite this article 

Suk Kyoung Choi, Steve DiPaola, Hannu Töyrylä, "Artistic Style Meets Artificial Intelligence," in Journal of Perceptual Imaging, 2021, pp. 020501-1 - 020501-14, https://doi.org/10.2352/J.Percept.Imaging.2021.4.2.020501

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2021
  Article timeline 
  • received July 2020
  • accepted June 2021
  • published March 2021

1.
Introduction
This article proposes that the poetics of computational “artificial intelligence” (AI) in art media offers a qualitative window on how creative praxis is transformed by computational mediation. We present a phenomenology of an emerging creative ecology we call “neural media.” Our motivations here are not primarily concerned with particular technical implementations; we rather wish to take a perspectival turn from the technical to the aesthetic.
1.1
What is “Neural Media?”
The technologies employed in “AI art” derive from machine learning [41] and, in particular, deep artificial neural networks [20, 35]. We will identify artistic interaction with such technology as neural media praxis. As we shall show, neural media disrupts traditional notions of artistic intent, offering instead a reflexive mediation of human presumptions about what it means to see and create once intention passes through the opaque algorithmic translation of AI mediation. Our interest here is in the qualitative distinction between embodied autographic (“traditional” or “tactile”; see also [8, p. 25]) and computational algorithmic (“AI”) creative practices.
Philosopher Joseph Margolis provides a strong argument for artworks being “physically embodied (by “embodied” Margolis is referring to an artefact as “possessing physically perceptible properties in their earthly instantiations”) and culturally emergent entities” [36, p. 68], denotations in the world of perceptible properties, objects that nonetheless lack fixed (invariant) nature. Margolis argues that “what we take the stable denotation to be is proposed, revised and entrenched in the interpretive process itself” [p. 95]. An artwork is a conceptual blend of the discursive and non-discursive thing, a metaphoric gestalt conflating the material artefact with its temporal reading, a suggestion of immanent meaning constituted in a transient representation devoid of any theoretical genotype or universal. Margolis [p. 67ff] thus holds that the work of art is a cultural construction with intentional referent and predicative dimensions. Taking this framework as informative upon the hermeneutics of art in the era of artificial intelligence, we seek to show how neural art is modeled after but ends up challenging assumptions about our relationship with our tools of expression. Algorithmic code presupposes repeatability, an anticipation of the mathematical invariance, but what emerges from neural media praxis is opaquely tied to the subjective intentionality embedded in the artist–technology relation, a stochastic denotation in an aesthetics of ambiguity. Neural media does not offer us singular referents but rather the intentional indeterminacy of potential images [14].
Our strategy in examining this reification of the artwork is to explore the metaphors drawn from practices of autographic image making and speculate on predicate associations with entities in the neural media ecology.
1.2
Motivation
William Seeley, questioning the epistemological constructs of the cognitive science of aesthetics, draws from Baumgarten [11] the view that aesthetics must motivate the clarification of the structure of one’s phenomenal experience through “the intuitive apprehension of the latent structure of an artwork” [57, p. 211], but that, problematically, functional descriptions of perceptual stimuli—as is typical of computational models of neuroaesthetic perception—do not explain how an artwork generates critically subjective aesthetic interest. Seeley proposes that an accounting of the relation between perception and semantics (apprehensive relevance) is necessary to develop naturalized models of computational aesthetics. In line with Seeley’s underlying message, that understanding in art and science is furthered by a sharing of perspectives, Johan Wagemans [66] has suggested that the interdisciplinary nature of empirical aesthetics benefits from a collaboration between artists and psychophysics researchers that “tap[s] into [the] more private, subjective aspects of aesthetic experiences” in order to “tackle research questions of joint interest in ways that are found acceptable by both, and yielding results that are both scientifically and artistically valuable and useful” [p. 673]. This is the strategy entered into in this article: to know more about the psychology of aesthetic perception, and about the relation between intuitive experience and the empirical dimensions of perception, is to better understand how technical mediation of perception perturbs and interacts with creative intention. We can therefore use the technical parameters and associated aesthetic dynamics of mediated creative interactivity as a probe into the psychophysics of aesthetic space. In neural media, intuitive interaction is encoded—by design—into the artefact.
There has been little discussion of how technologies of machine learning mediate artistic practice (for notable exceptions, see [9, 60]). Yet these technologies already assume a kind of alterity [28, p. 97ff] if we cannot explain the aesthetic response we are likely to obtain when engaging with them. The artist learns from the machine, which has, in turn, learned from visual databases intended to represent a corpus of human visual experience. This existential relation embedded within the manipulation of images by artificial intelligence promotes creative practices that are subjectively motivated but ambiguously mediated. We want to ask what artistic response this dynamic encultures, and what begins to emerge as the experiential features of a post-phenomenological aesthetics.
2.
Part 1 - The Emergence of “Artificial Art”
“…more and more people will get their kicks from Disneyland…” (American abstract expressionist painter Adolph Gottlieb, 1973 [58]).
There is today, emerging in the visual arts, a medium of creative expression that is at once schematic and stochastic, conflating what Gombrich [18] calls “representation” with what is referred to in art as “abstraction” [27], [43, p. 7], [19, p. 9]. “Artificially Intelligent art” [2, 3], [40, p. 59], or as we will call it here, neural media, enables the rearrangement of featural dimensions of artistic process. Mathematical models loosely representing neurobiological correlates of human vision [32] are exposed to mediated interactivity. Neural media embed representation within abstraction in a convolutional information exchange that is data present yet perceptually indeterminate. Thus, source imagery influences results but in indeterminate ways. Results are often strangely divorced from sources. A computational metamorphosis of the image derives from high-level statistical sampling and convolution of input data. Human semantics are reframed as algorithmic potentialities.
Placing this emergent media in the context of art requires at first an understanding of the media’s origins and affordances. Artificial Intelligence [17, 56] is the research discipline originating the technology of artificial neural networks, and it is from these networks that “neural network art” has emerged as a special application of visualization and categorization algorithms (see for example [42]). That this medium affords conceptual ambiguity, a generative admixture of representation and abstraction, calls us to ask what art becomes when the image is programmatically mediated by cultural biases toward representation. When the media of art practice are grounded on the notion of database categorization, are we moving toward a more culturally determinate art and away from its historically tacit embodiment in the individual practitioner?
Art historian Ernst Gombrich has argued that the natural development of art is toward the perfection of an illusion meant to sustain particular cultural representations [18, p. 25, 292, 313]. This drive is supposed to have led to a series of technical advances which have served to create the history of art. This cultural process is further said to rely on a process of experimental “making” and conjectural “matching.” The distinction between “making” and “matching” is presumed to exist along a dynamic polarity ranging from subjective to schematic representation. “Making” involves choices and as such is expressive, “matching” involves finding correspondences with embodied and cultural schema and is called by Gombrich, “representation.” Gombrich further uses the term “illusion” to identify the unrecognized application and acceptance (habituation tempered by cultural constrictions) of pre-existing “mental sets” to induce representation. By this metric, at least without critical analysis (as cannot be relied upon in the mass distribution of the image in visual culture), representation essentially becomes illusion.
The year Gombrich’s A.W. Mellon Lecture Art and Illusion was delivered (1956) was also the year of the Dartmouth workshop, generally recognized as the inaugural moment of the research field of artificial intelligence [68]. Contained in the proposal for the workshop is the interesting acknowledgment of the relationship of creativity and randomness. The suggestion there, that “ the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness [which] must be guided by intuition” [37] seems to speak for the Zeitgeist of an age, a time when art and science were crossing radical thresholds. As Thomas Kuhn has observed “novelty emerges only with difficulty, manifested by resistance, against a background provided by expectation” [33, p. 64]. It is particularly interesting to note that at this cultural juncture of the image turning to ambiguity, a stochastic impulse is embedded in the Zeitgeist of a scientific and cultural revolution that resonates to this day.
Subsequently, stochastic processes, the algorithmic modeling of randomness, have become fundamental for modeling a wide range of simulations of natural phenomena (see for instance [26, 34, 62]) and are fundamental to backpropagation, the deep learning method employed in computational image recognition [55]. The historical origins of these technologies are therefore concurrent with the equally revolutionary movements in the other half of “the two cultures” [59] as artists in the 1950s and 60s turned to (re)presenting and questioning complex subjective states, tacit dimensions and social frameworks.
In the 2003 A. W. Mellon lecture, Pictures of Nothing: Abstract Art Since Pollock, Kirk Varnedoe [39] framed a counterpoint to Gombrich, making an argument for abstraction as a “new language” relying precisely on the blending of multiple subjective perspectives [64, pp. 270–272]. Exchange between apparent polarities, such as the “rift” between modernism and postmodernism (p. 7), or, as Varnedoe argues (p. 241), distinctions between illusion and abstraction, is now brought into question. Abstract artists of the 1950s and ’60s were motivated to set aside convention, instead foregrounding experience and questioning the blurred and problematic constructs motivating the cultural schematics of representation [27, p. 157], [64, p. 43, 248]. Motivated by a general disillusionment with historical tropes of idealism and tradition [19, p. 65], artists sought new forms of expression, new ways of communicating. Therefore, in the mid-1950s, a new kind of cultural abstraction emerges along with “a new kind of challenge to, or resistance against, its premises” [65, 13:51–14:03], an emergence which grounded and continues to inform the image and its interpretation in contemporary culture and creative practice, and which plays into neural media in surprising and revealing ways. Neural media offers to the visual arts what Umberto Eco [12] has called the open work—it is refashionable, unfixed, and parametric; a set of possibilities more than a picture.
The most distinguishing feature of current AI deep learning [20, 35] technologies is that they learn by example, unlike previous computational methods that relied on writing code to model human expertise and knowledge. In deep learning, the neural networks and the training algorithms are relatively generic, applicable to multiple domains. This technology employs neural networks which are trained to recognize patterns in data. Data can be anything that can be expressed numerically: text, images, measurements, video, sound. From images we can get information about content, texture, and style, and segment the image into various elements.
This kind of network is like a funnel: it takes an image as input and detects patterns at multiple levels, accumulating increasingly refined patterns until we arrive at human-interpretable semantic associations (“a cat on a mat”) and other more scientifically and aesthetically relevant derivations. Inverting this funnel allows for the creation of synthetic images. This process is not trivial, however, and is in fact much more difficult and resource-demanding than extracting information from images. The complexity of this synthesis leads to implicit statistical ambiguity, and it is the ambiguous nature of these synthetic images that affords rich aesthetic association. In perception science research, Wang et al. [67] have shown a relevant correlation between ambiguity in abstraction and widened cognitive association.
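To make the “funnel” image concrete, the following minimal sketch (purely illustrative, not drawn from the authors’ toolchain) reads out feature maps at several depths of a pretrained convolutional network. The choice of VGG-19, the probed layer indices, and the file name source.jpg are assumptions of the example rather than details given in the article.

```python
# Illustrative sketch: probing the feature "funnel" of a pretrained CNN.
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained VGG-19 feature extractor (an assumption; any deep CNN would do).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("source.jpg").convert("RGB")).unsqueeze(0)

# A few convolutional layers, shallow to deep: early layers respond to edges
# and textures, deeper layers to larger, more semantic configurations.
probe_layers = {0: "conv1_1", 5: "conv2_1", 10: "conv3_1", 19: "conv4_1", 28: "conv5_1"}

features = {}
x = image
with torch.no_grad():
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in probe_layers:
            features[probe_layers[i]] = x
            print(probe_layers[i], tuple(x.shape))  # spatial size shrinks with depth
```

It is the inversion of such a hierarchy of increasingly abstract features that makes the generation of synthetic images both possible and, as noted above, far more resource-demanding than recognition.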
Here, we claim that AI media exposes and disrupts the ambiguous relationship of representation and abstraction in art, and that this ambiguity is a central feature of this emergent new media. Furthermore, this disruption bifurcates from already established artistic deconstruction and reconstruction of the image, where “images scatter into data, data gather into images” [15], precisely because in neural media the process itself—not just the perceptual image surface—enters into ambiguity. The questions we want to ask here have to do with what happens to art when tradition is appropriated and mediated by artificial intelligence. What do artists think about when they think about artificial art?
3.
Part 2 - Materials and methods: The Neural Media Process
3.1
Three Experiential Reports from Neural Media Praxes
We now turn from our theoretical stance and explore the phenomenology of working with neural media in our individual practices. We attempt to arrive at a deeper understanding of creative expression with neural networks through artistic illustration of the media’s algorithmic constructs. This section therefore takes an autoethnographic turn as we delve into the personal experience of AI art, exploring the creative affordances of artificial intelligence. The questions we ask ourselves are: how does one work with this media? What marks can we make? What stories can we tell?
3.2
Suk Kyoung Choi
I am a traditionally trained painter interested in processes of abstraction. In these works, I am using a method known as style transfer to explore the intersection of visual form and aesthetic space (Figure 1) in an image-making process mediated by artificial intelligence. This practice currently consists of two-dimensional images and video animations created with proprietary extensions [38] to the Gatys et al. [16] style-transfer algorithm. I call these images “neural paintings.”
Figure 1.
“Postcards from an other” (Choi 2018). A plot of content weight against style scale. The artist created many of these studies to make initial determinations of the personally resonant parameter space of the style-transfer algorithm. See https://sukkyoungchoi.com/2021/02/02/jpi/ for better resolution.
I started by experimenting with grids of subtle parameter incrementation across a range of image scales (Figure 2), exploring the notion of infinitesimal change in the still image. Small visual changes invoke depth perception, inducing pareidolia through resonance with embodied perceptual gestalts originating in genetic/cognitive ties (for theories in accord with my notion of resonance, where affective meaning is linked to determinacy, see [54] and [44]). So what style transfer was designed to do—pastiche representation of artistic style—is circumvented by an artistic intention that sits outside the singular image. I explore the aesthetic and psychophysical aspects of this notion through an art-as-research methodology.
Figure 2.
Parameter incrementation across a range of image scales (image by Choi 2019).
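The grids in Figures 1 and 2 suggest a simple scripting pattern: sweep a small set of parameters and render every combination. The sketch below illustrates only that pattern; stylize is a hypothetical wrapper (its name, signature, and the parameter values are assumptions, not the artist’s actual code) standing in for whatever style-transfer implementation is used.

```python
# Illustrative parameter sweep for a style-transfer grid study.
from itertools import product

content_weights = [1e0, 1e1, 1e2, 1e3]   # how strongly the source image asserts itself
style_scales = [0.25, 0.5, 1.0, 2.0]     # relative size of the transferred style features

def stylize(content_path, style_path, content_weight, style_scale):
    """Hypothetical wrapper around a neural style-transfer routine;
    an actual implementation (e.g., a Gatys-style optimizer) would go here."""
    return None  # placeholder: would return the stylized image

grid = {}
for cw, ss in product(content_weights, style_scales):
    # One grid cell per parameter pair; in practice each output would be saved
    # so that subtle increments can be compared side by side.
    grid[(cw, ss)] = stylize("content.jpg", "style.jpg",
                             content_weight=cw, style_scale=ss)
```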
The artist’s brush is a responsively variant technology that becomes embodied, familiar, and ultimately transparent in the act of painting. I wanted to explore how the “neural brush” behaves in the formation of evocative texture. But it is not like physical painting or collage: the image is not manipulable in a semantic or tactile sense. There is a very expressive element, not entirely under control but directable: more like the implicit techniques of art automatism, such as frottage (Choi is referring to a method of ‘autographic sampling’ obtained by taking transfer rubbings off surfaces. The technique was developed in 1926 by surrealist Max Ernst [61, p. 12], [13, pp. 7–14], and later extended into grattage, a method of scraping away a surface to discover the serendipitous layers below—a technique that, it could easily be argued, was widely adopted by American Abstract Expressionist painters in the 1950s.) than the explicit imagery usually associated with collage. Imagery hovers on the edge of discernment (Figure 3).
Figure 3.
Changes to the source image result in only partially predictable changes to the result (image by Choi 2019).
In my work with style transfer I think of “style” as a paint brush of variable scale, loaded with a somewhat randomized palette distributed across a range of blob sizes, as if multiple operations of dabbing and swirling the brush quickly across a range of image colors had happened. This brush is then applied stamp-like, convolving the (computational) brush used to capture the style of the original content with the brush representing the “target” style to produce a new set of color patches for the content, which is itself weighted positively or negatively in its influence over the combined output (Figure 4). The composition is drawn out of latent potentiality more than explicit intention. In other words, I do not so much place the brush as define a spatial texture palette in a process of art automatism. Thus, “style” and “content” represent the intersection of different conceptual domains; “style” has more to do with my feeling about the work, whereas “content” reflects orientation toward lived space. (This dissociation between dimensions of cognitive processing is also reflected in perceptual microgenesis—the time course of perception. Research in behavioral [4] as well as neurophysiological [5] studies shows that the embodiment of spatial references happens very quickly, whereas style develops secondarily and is an entrained awareness. In other words, aesthetic perception is emergent.)
Figure 4.
Choi’s conception of the “neural brush,” where the parametric control of the multiple style sources act as palette components. This is, of course, a representation of processes that are entirely algorithmic, not tactile as the (traditional autographic) metaphor would suggest (illustration by Choi 2019).
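For readers who want the underlying objective, the base formulation that Choi’s process extends is the Gatys et al. loss [16], in which the content-to-style weighting governs how strongly the content image asserts itself over the combined output (symbols follow the original paper; the proprietary extensions [38] are not reproduced here):

```latex
\mathcal{L}_{\text{total}}(\vec{p},\vec{a},\vec{x})
  = \alpha\,\mathcal{L}_{\text{content}}(\vec{p},\vec{x})
  + \beta\,\mathcal{L}_{\text{style}}(\vec{a},\vec{x}),

\mathcal{L}_{\text{content}} = \tfrac{1}{2}\sum_{i,j}\bigl(F^{l}_{ij}-P^{l}_{ij}\bigr)^{2},
\qquad
G^{l}_{ij}=\sum_{k}F^{l}_{ik}F^{l}_{jk},
\qquad
\mathcal{L}_{\text{style}}
  = \sum_{l}\frac{w_{l}}{4N_{l}^{2}M_{l}^{2}}\sum_{i,j}\bigl(G^{l}_{ij}-A^{l}_{ij}\bigr)^{2},
```

where p, a, and x are the content photograph, the style image, and the generated image; F^l and P^l are feature maps at layer l of a pretrained network; G^l and A^l are the corresponding Gram matrices; and the ratio α/β sets the content–style trade-off that the grid studies above vary.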
I do not think of this media as offering “control” so much as an interactive “play.” AI should provide an augmentation of human creative intention, and for that we do not so much need control as transparency about the set of rules by which the game is played. A central concern in art making for me is the derivation of one’s own set of rules—perhaps this is what “style” is. The realization of intentional acts comes through skillful play. The tool needs to be “felt” more than controlled.
My neural painting process involves constructing elaborate digital abstract paintings that I think of as “multi-perspectival” visual experiences delineated into “organic” and “deterministic” workspaces. I populate these workspaces with various existential referents, the former entangled with the aesthetics of brain, flesh, and the metaphor of natural environments reflecting lived body worlds, the latter with mechanically contrived surfaces, industrial, electronic, and other impersonal grid-like structures. These two polarities are convolved against each other to arrive at liminal blended spaces where the distinction between human and machine “intention” becomes vague (Figure 5).
Figure 5.
“Untitled code” (Choi 2019). “The distinction between human and machine ‘intention’ becomes vague.”
3.3
Steve DiPaola
As an artist and cognitive scientist, I am interested in human creativity and its relationship with emotion. In making AI artwork, I find interesting irony in what I call “the ghost in the machine.” That is, although I know I am dealing with a “cold” machine, I think there is a possibility of bringing the spirit—the emotion—out of it, and I like to attempt to do that by making what I call “creativity systems” that move through a visual search space in some creatively inspired way. I am interested in latent space search strategies that reflect constructs drawn from cognitive psychology, the notion of creativity involving both analytical and associative thinking. At some point analytical process gets caught, and in getting caught one has to let go, defocus, go on an associative journey. I think that a lot of artists have this notion of fluidity between stepwise thinking and being very open and loose.
In the case of Deep Learning systems, this latent space search, this journey, is in iterative code form where I get an idea, and then using code scripts, try moving 10% in every direction from it, or in directions I intuit are the best way for me. I am looking to find new associations and new neighborhoods of interest. In Deep Learning, you can really play with these processes in deep ways by changing systems of relations.
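A rough sketch of what “moving 10% in every direction” from a latent point might look like in code follows; the generator, the latent dimensionality, and the step rule are assumptions of the illustration, not DiPaola’s actual scripts.

```python
# Sketch: perturbing a latent vector along random directions and decoding each
# neighbour, one reading of "moving 10% in every direction" from an idea.
import torch

def explore_neighbourhood(generator, z, step=0.10, n_directions=8):
    """Return decoded images for small steps away from latent point z.

    `generator` is any model mapping latent vectors to images (e.g., a trained
    GAN generator); its existence and interface are assumptions of this sketch.
    """
    images = []
    with torch.no_grad():
        for _ in range(n_directions):
            direction = torch.randn_like(z)
            direction = direction / direction.norm()       # random unit direction
            z_new = z + step * z.norm() * direction        # ~10% move relative to |z|
            images.append(generator(z_new))
    return images

# Hypothetical usage: z = torch.randn(1, 512); neighbours = explore_neighbourhood(G, z)
```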
I like to think about human creative process and try and emulate that in computational art systems, so we think about, let us say, a portrait painter: The first step would be to interview and understand a sitter, and to emulate that I use many interconnected AI systems, each acting as a subcomponent of a greater gestalt. Some systems might represent the emotional recognition of your sitter, then another system might segment subject from background, trying to derive form and meaning, and via conversation and research you would have knowledge of the sitter you would want to convey. The artist takes all this and emphasizes what is important and abstracts out the things that are less important to get at the inner beauty of the portrait. So, the notion of AI variable abstraction techniques in different regions helps bring out that inner truth to the digital canvas (Figure 6).
Figure 6.
AI variable abstraction techniques in AI portraiture (image by DiPaola).
I like the fact that a painting is not one thing anymore, that I could decide to apply different perceptual levels and abstraction levels—painting becomes multi-dimensional and stochastically variable, entering into an intersecting space of conceptual and algorithmic association. For instance, a particular 360 viewpoint of me in a landscape (Figure 7) leads to a journey of playing with shape in new ways.
Figure 7.
A “multi-dimensional and stochastically variable” painting (image by DiPaola).
These trees can morph into this conceptual representation of the room that Van Gogh lived in (Figure 8) by emphasizing what I like most. Much of this can become more immediate, letting go to move more freely and intuitively.
Figure 8.
Multi-dimensional morphing into Arles (image by DiPaola).
3.4
Hannu Töyrylä
When transformative “Generative Adversarial Networks” (GANs) [63],¹ starting with pix2pix [30], became available, I began to experiment with them. I developed a process I call “virtual printmaking” (Figure 9): transforming a photo into bare contours, a “threadframe,”² used as a virtual printing plate, with a pix2pix transformative model filling in color and texture according to what it had learned from my data set (Figure 10). Here my motivation was an urge to simplify, reduce to bare essentials, toward the abstract, while maintaining a degree of control over composition.
¹ The term “GAN” is used in two importantly different ways: (1) a training architecture that includes an adversary to evaluate the results; (2) a model, usually a generator, trained using such an architecture, e.g., BigGAN. Furthermore, the use of GANs is not limited to generative (vector-to-image) applications; for instance, pix2pix, CycleGAN, and BicycleGAN are GAN architectures for picture-to-picture transformations.
² Töyrylä describes ‘threadframe’ as derived from ‘wireframe,’ but in his work the process of extracting the contours in the image results not in clean structural lines but in ‘woolen looking lines’ capturing the representational contours of the source image. More on this technique: http://liipetti.net/erratic/2018/01/26/art-printmaking-but-neurally/
Figure 9.
Threadframe virtual printmaking. A transformative GAN is trained with image pairs to fill a “threadframe” with color and texture. Virtual prints are then made by reducing photos to threadframes out of which the GAN makes a virtual print (image by Töyrylä 2018).
Figure 10.
“On a black river” (Töyrylä 2017). A photo is transformed through an intermediary contours phase into a landscape, to which a figure from another photo is manually added.
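In outline, the virtual printmaking pipeline described above might be sketched as follows. This is a stand-in only: OpenCV’s Canny detector is used as a crude proxy for the threadframe extraction (it does not reproduce the “woolen” contour quality Töyrylä describes), and the trained pix2pix-type generator is left as a placeholder.

```python
# Sketch of the general shape of the "virtual printmaking" pipeline.
import cv2

def make_threadframe(photo_path, low=50, high=150):
    """Reduce a photo to bare contours; a crude proxy for the threadframe step."""
    img = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, low, high)
    return 255 - edges  # dark lines on a light "printing plate"

def virtual_print(threadframe, generator):
    """Let a trained image-to-image model fill the plate with colour and texture.

    `generator` stands for a pix2pix-type network trained on (threadframe, photo)
    pairs from the artist's own data set; loading and preprocessing are elided.
    """
    return generator(threadframe)

plate = make_threadframe("river_photo.jpg")     # file name is illustrative
cv2.imwrite("threadframe.png", plate)
# print_ = virtual_print(plate, generator)      # hypothetical trained model
```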
I then turned to experimenting with generative GANs [21], which are usually trained with very large data sets in order to create a generator capable of producing a large variety of images. Such a variety runs counter to my intentional needs, as finding relevant images would still be problematic. To have more control, I started to experiment with small data sets (Figure 11). This allows me to direct the process through a careful selection of the images used in training. There is no clear separation between content and style; the generated images contain elements of style and content present in the training set, combined in often unexpected ways. The ambiguity of this separation is a useful feature in my process, as I am not after photorealistic results (Figure 12).
Figure 11.
Training of a generative GAN, with small, focused data set (Middle Eastern scenery). Generated images have a specific quality without being obvious copies of the originals (image by Töyrylä 2019).
Figure 12.
Training of a generative GAN, with a small, focused data set: semi-abstract indoor photos with emphasis on colors, lights, and shadows (image by Töyrylä 2019).
A major difficulty in training GANs is “mode collapse”: the network fails to produce variety. I no longer regard this as a problem; it is rather a characteristic of my process. At any given phase during training there is little variation, but as the training goes on I am still getting different images. This is but one example of me, as an artist, pushing the technology in directions different from those the scientists pursue: toward the more specific, individual, and private, while science is looking for the generally valid.
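One way to read “different images as training goes on” operationally is to decode the same fixed latent batch with generator snapshots saved at intervals during training, harvesting variety across training time rather than within a single checkpoint. The checkpoint naming, the latent size, and the assumption that whole generator objects were saved are all illustrative, not Töyrylä’s actual setup.

```python
# Sketch: sampling a fixed latent batch from successive generator snapshots.
import glob
import torch

z_fixed = torch.randn(16, 512)  # same latent inputs for every snapshot (sizes assumed)

for ckpt_path in sorted(glob.glob("checkpoints/G_iter*.pt")):
    # Assumes entire generator objects were saved with torch.save(model, path).
    G = torch.load(ckpt_path, map_location="cpu")
    G.eval()
    with torch.no_grad():
        imgs = G(z_fixed)
    # Even if each snapshot shows little internal variety ("mode collapse"),
    # the series of snapshots yields a changing family of images.
    # save_image(imgs, f"samples_{ckpt_path.rsplit('/', 1)[-1]}.png")
```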
My technical background has certainly helped me in developing my tools and process, but my reasons for entering into visual art were related to expression, not technique. There is an ongoing tug-of-war between the range of my aesthetic preferences and the aesthetic range that a neural art process can provide. Techniques are but the tools of a process. Expression drives the exploration of the space between representation and abstraction.
4.
Part 3 - Discussion: A Phenomenology of Neural Media
We now discuss the implications of the observations obtained in our artistic experimentation with neural media in creative praxis, focusing on the transformation of autographic metaphor when expressed through algorithmic media. A medium transforms and disrupts the immediacy of tacit expression. The intention must pass through some technology in the process of manifestation of the object of art, and this “passing through” presents a distancing from interactive immediacy (Figure 13).
Figure 13.
Intention must pass through some technology in the process of manifestation (© Choi, 2017).
4.1
A Picture of Process
We have presented reflections on approaches to neural media in the work of three artist researchers. We claim that threaded through all these approaches there is, as per Ihde, a sense in which the artist’s embodiment of an image-making practice is both “magnified” and “reduced” [28, p. 76] by AI technology. Here we discuss interrelated factors threaded through our exemplar neural media praxes, examining how the art object in neural media is variously polarized by magnification and reduction, lending a particular “fingerprint” to a typology of neural media praxis. In this analysis, we interpret these artists’ reflections on neural media praxis in a phenomenology of the mediation of traditional process metaphors.
Relations between the tacit and the expressed are explicitly embedded in neural media. Future developments of AI creativity support technology should be cognizant of the system of deep relations engaged in the formation of neural media artefacts. Such relations might be modeled as n-dimensional Evolving Transformation Systems [22], a mathematical formalism used to model natural processes and employed by Nadin [47, 48] to model anticipatory systems. Another possible approach to modeling this autographic–algorithmic exchange is Predictive Coding theory (see Clark [10]), which has been shown by Muth & Carbon [45] and Muth, Hesslinger, & Carbon [46] to model predictive progress toward insight in the resolution of “semantic instability” in ambiguous visual experience. The praxis of neural media is intimately tied to tacit motivation and anticipatory meta-prediction of context-related future conditions, and it could provide valuable empirical data to perceptual cognitive science research if such process relations could be reflexively traced through the multivariate factors that inform them. Our approach here is to phenomenologically unravel these factors in order to inform the development of such models.
4.2
Experiential Features of Artistic Engagement with Technology
A post-phenomenology of the neural art artefact presents several comparative dimensions which we may employ to arrive at a deeper understanding of how art changes when AI technology enters human praxis. Initially, we have the basic relational structure of Fig. 13:
Artist–technology–artefact (A–t–a)
In traditional practices, this triad is conceived of as a transparency relation between the artist and the tool; the brush or expressive instrument, to the degree that it is embodied, becomes “transparent,” receding from the interactive relation with the artefact:
(A–t)–a
On the other hand, in artificially mediated creative praxis “awareness” shifts from an embodiment of the tool (transparency) along a disembodying vector where the transparency is no longer tied to human intention but instead moves toward a conflation of tool and artefact. This conflation—a computational poetic gestalt—is not easily separated, and the artist is pushed further from the state of immediate interactivity toward an emergent awareness-through-technology, a state of affairs that Ihde calls alterity relations. Interactive opacity is emphasized:
A–(t–a)
We argue that neural media separates from embodied praxis along certain phenomenological dimensions that suggest possibilities for creativity support development. Modeling the apparent barriers between the human and AI in creative praxis also raises both technical and ethical issues associated with real-time, human-centered control of which area in an image is being developed, in order to enhance continuity of response in the pursuit of illusion.
Opacity, as positioned here, is then reflective of the ambiguous dichotomy of Gombrich’s “representational illusion.” As technology advances, the perfection of illusion becomes more and more representational of the environment (technical/cultural) from which it emerges. AI image enhancement and manipulation is advancing at a pace some might find concerning, and for good reason. If we are not cognizant of the degree to which technology has taken the place of creative intention, at what point have we become cybernetic curiosities in Gottlieb’s Disneyland? No longer tourists, but the commodity bought and sold?
4.3
Dimensions of Neural Media Praxis
Praxis —the reflexive pursuit of embodied expression through artefactual manifestation — displays several featural dimensions, which we identify as those aspects of an art process that constitute its intentional relations. A feature is not an invariant property of a process but is always an interaction for some intended result. In working with neural media, we observe two featural dimensions separated according to their general situatedness in the relations of creative praxis and phenomenologically distinct from the feature’s status in autographic media. These are,
“Attitudinal” dimensions: Motivation and Intention
“Predicative” dimensions: Limits and Constraint
4.3.1
Motivation and Intention (Attitudinal Dimensions)
Creative motivations are to a large degree pre-conceptual, formed from the artist’s lived experience. Thus, the artist’s choice of tool anticipates the result and poses an attitudinal stance toward some anticipated future condition or state to be captured in a manifested artefact. We observe this anticipatory behavior in our art-empirical explorations of how neural media behaves, from an interest in the formation of optimized texture weighting leading to an understanding of infinitesimal change in the still image (SKC), to an exploration of emergent behavior in evolutionary programming in understanding human emotion (SD), to a tailoring of technology in support of reflective sense-making (HT). These process metaphors acknowledge a motivational entanglement of artefact and artist.
The hermeneutic question becomes how much intentional control over the determinable interpretive properties of the artwork the artist holds in the creation of the neural painting. An artist’s desire to simplify, reduce to essentials, eliminate irrelevancy, is a move toward abstraction. Complexity, rich interpretive fields, identity, these are moves toward representation. Such motivations must take place in a latent intentional space, where a range of potentiality reflective of aesthetic, pragmatic, and algorithmic concerns ultimately describes the “style” of the artist–artefact interaction. The degree of intentionality in this space is therefore variant, depending on the artist’s perspectival stance within the human–technology relation.
4.3.1.1.
Intuition:
In creative praxis, we observe that intuition is frequently called upon. These hunches and spontaneous actions arise from the pre-conceptual subconscious–tacit knowledge [53, p. 4, 9–11] expressed apart from reasoning about expression. Even chance events, “mistakes,” may be perceived as useful perturbations of intent. The so-called Aesthetic “Aha!” moment often emerges from such intuitive reflection when the felt affordance of a percept emerges from the quality and extent of associative elaboration [44]. AI technologies however are entirely “reasoned” mediations; programmatic, algorithmic, and determinate. This sets up a dynamic opposition in neural media practice, a tug-of-war between a range of personal aesthetic preferences/intuitions and the possible aesthetic range that a neural art process can provide.
Thus, each of us relies on spontaneity as much as directedness toward goals and entities. We see this openness reflected in the adoption of methods supporting surprising combinatorial results (SKC), the serendipity afforded by emergent systems (SD), and the trade-off between neural architecture and subjective ambiguity (HT).
4.3.1.2.
Detachment:
Human activity embodies an implicit topological structure. We understand space by moving through it. This dimension of autographic praxis is critically sensorial (haptic, somatic, experiential, physical) versus the topology of computational space (experimental, conceptual, virtual). Detachment is a reflexive move; “stepping back” offers an inclusive separation from immediacy (SKC), “letting go” allows for an expansion of the subject’s perceptual context in order that associative thinking is stimulated (SD), and a deliberate distancing from end conditions may promote focus on process (HT). “Detachment” allows for a widening perceptual, emotive, and cognitive elaboration of one’s situated awareness, identified by Gustav T. Fechner (the founder of empirical aesthetics) in his Aesthetic Association Principle as “…the eminent role of personal recollection, Zeitgeist, and cultural background in the formation of aesthetic experiences” [51].
Detaching from engagement implies both temporal and physical displacement. Neural media however replaces the three-dimensionality of enduring physical acts with the binary instantaneity of the flat screen. One cannot —due to perspectival reflections of light in a natural environment and the essential dimensionality of the painted surface— mimic the tactile interaction of traditional painting in neural media praxis. The AI artist is placed in a technical landscape which requires a new kind of stepping away from intention to revise one’s knowledge about how to proceed; stepping back now involves a fundamental reframing of embodied topologies of experience. This reframing has also been observed in psychophysical studies by Belke, Leder & Carbon [6], who have shown that hypothetical symbolization enables a perceiver to decouple experience from immediacy and employ “elaboration-based mastering mechanisms” fostering opportunities for mental growth.
4.3.2
Limits and Constraint (Predicate Dimensions)
By predicate dimensions, we mean those features of praxis that establish the kinds of relations that may be made between entities in the neural media ecology. We have noted in our individual practices the creative affordance of boundaries: The realm of contextual possibility is enriched through intentional limitation. Umberto Eco describes this fundamental entanglement of limits and potentiality in The Open Work;
“(…) life in its immediacy is not “openness” but chance. In order to turn this chance into a cluster of possibilities, it is first necessary to provide it with some organization. In other words, it is necessary to choose the elements of a constellation among which we will then—and only then—draw a network of connections” [12, p. 116 emphasis ours]. This emergent network of associations drawn by the participating agent (constituting both the artist’s and the beholder’s share of the interactive and intersubjectively constructive perceptual experience) traces a path whose limits and constraints reflect the semantic instability [45] of the neural media artefact.
Orson Welles said that “The enemy of art is the absence of limitations” [31, p. 78]. Limits are conceived of as a restriction, yet the definition of limits is still intentional. The central feature here is an identification of the neural artefact as representing a set of “mediated constraints.” Intuitively, one might associate control of a process with clarity of expression; one needs to be able to control a technology or medium in order to express what one has to say. But in neural media practice this distinction seems less clear. Limitations are instead incorporated into the way one expresses. Thus, we have noted that neural media offers an expressivity that is “not entirely under control” but is “directable,” an interactive “play” aimed at uncovering a “set of rules” (SKC), or alternately an acceptance that painting becomes “multi-dimensional and stochastically variable” (SD), and that “creation often involves a gradual reduction of options” toward refinement of a subjective field (HT).
4.3.2.1.
Ambiguity and Explainability:
Algorithmic mediation of artistic intent in neural painting promotes a phenomenological displacement from the lived immediacy of interactive response. This is different from embodied (traditional autographic) practices, where interaction with an artefact is experienced directly, manifestly. In neural painting, the result of an expressive “gesture” becomes embedded in a network of algorithmic transformations, entering an interpretive hermeneutic space. Ishai, Fairhall, & Pepperell [29] show that although the interpretive response to visual stimuli is tied to semantic content, aesthetic judgment is independent of meaning and is influenced by formal visual features. Thus, neural media might be employed in psychophysics experiments to manipulate semantic instability [45] by modulating “hidden” images (through parameter variation of ambiguous and indeterminate visual features) and thereby examine resonant aesthetic response. In terms of the artist’s interaction with the developing artefact, this relation is already described in the algorithmic trace of the artist’s process of image development, and such dynamic creative vector spaces might in future be used to provide training data to collaborative AI technologies.
Neural media expose qualia that question the notion of perceptual determinacy. Results are often ambiguous yet sometimes produce extremely evocative surprisal. Process becomes a creative dialectic between visual experiences distanced by a “mediated unexplainability” —the notion that the alterity distinction granted to AI poses an ambiguously positioned existential other. That other enters into a collaborative relation where interactivity between artist input and reflexive response becomes only explainable through the ambiguous language of metaphor.
Neural media introduce a more phenomenologically “distanced” sense of surprise than does “felt” interaction. When the degree of control gets more indirect, when the element of ambiguous surprisal crosses a certain threshold, the artist begins to ask, “if the resulting picture feels like exactly what I would have liked to paint, is it really my making?” In this light, the statement by the French neural art collective Obvious is interesting; they claim “(…) in contemporary art, the artist has always been at the center of the work, and the tool as a way for him to express and pass on emotions. Here, the tool is closer to the center of the work” [50]. Here, the embodiment relation has been offset toward interaction with a tool and away from direct interaction with the result.
4.3.2.2.
Training and Potentiality:
The limits of an artificially intelligent image space are determined by a neural network’s training. Training in neural media praxis suggests the establishment of a latent framework of possible outcomes, a process which involves “imprinting” high-dimensional patterns in the network architecture such that it can organize information in ways often unpredictable by human presumptions about source image relationships. Neural media thus reflects the implicit bias (weighting) established during training: Training directly (and implicitly) imposes creative constraints.
Training effectively sets up a pre-receptive field and may be employed by the neural artist to generate something very much like a “series space”, where aesthetically related potential iterations of an idea are presented. We observe that neural media may therefore emphasize the “exploratory/reflective” aspects of praxis, whereas traditional embodied praxis weighs toward directly observed “expressive” (humanistic) concerns. Thus, we argue that the latent potentiality of the pre-receptive field in the artistic process has again shifted toward an alterity relation, where creative interactivity moves away from expression toward reflection but a reflection that is implicitly mediated by technology. This shifting of hermeneutic perspective is shown in Table I.
Table I.
Perspectival shift in neural media process-curation.
Process phase | Traditional media | Neural media
Motivation | Preparing and curating (resources) | Preparing and curating (data sets)
Interaction | Tampering with the process and materials | Curating the results
Artefactual response | Curating the results | Tampering with the neural network itself
No matter what core technologies plug into an art system, one dimension that clearly supports traditional artistic praxis would be the ability to (re)train networks of relationships on a more rapid and subjective level. Such models could capture user “process spaces” framing virtual “artist’s scrapbooks” of conceptual relations. Once such a set of relations is established, one needs to visualize “constellations” of ideas. Limited compositional spaces are by no means empty.
4.3.2.3.
Latent Space:
Once trained, a neural network contains a “latent” space defining the realm of immanent solutions (optimizations) given a set of inputs. The artist’s encounter with latent space is thus contingent upon the architecture of the neural network; transformative architectures take user-chosen images and parameters as input, while generative architectures take random numbers and previously encountered “way points” in latent space. Accordingly, neural artists may attempt to delimit the latent space itself by training with carefully “refined” data sets (HT—data mining of personal records) or attempting to mimic traditional praxis with the virtual equivalent of an artist’s journal (SKC—through sampling the experienced environment; SD—conceived of as journeys through semantic neighborhoods). The constraint of latent space represents a new creative paradigm: A “mental image” is drawn not from imagination or gesture, but from a computational matrix of association that is ambiguously transparent to the implementing artist.
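As a small illustration of navigating between such “way points,” a linear walk through a generator’s latent space might look like the sketch below; the generator and the stored way points are assumptions of the example, not details given in the article.

```python
# Sketch: decoding a sequence of images along the line between two latent way points.
import torch

def walk_between(generator, z_start, z_end, steps=10):
    """Linearly interpolate between two latent vectors and decode each step."""
    frames = []
    with torch.no_grad():
        for t in torch.linspace(0.0, 1.0, steps):
            z_t = (1 - t) * z_start + t * z_end
            frames.append(generator(z_t))
    return frames

# Hypothetical usage: frames = walk_between(G, z_waypoint_a, z_waypoint_b, steps=24)
```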
5.
Conclusion
“I feel more at home, more at ease, in a big area.” Jackson Pollock [49]
Jackson Pollock, painting, steps back to see the big picture, to reflect on a gesture, to have a smoke. Machines cannot do this. The human artist embodies preference in a perspectival, personal fashion within an ontology fundamentally tied to movement in an organic, temporally constrained world. The price of this human subjectivity is an attachment to ideas and entities as imbued with something equally individual, some qualia.
The intent of this article has been to show that artistic praxis in neural painting shifts creative interactivity from the artist toward the artefact, conflating the tool with the result. In traditional art praxis the artist–artefact relation is tactile, embodied through direct interactive touch-formation and strongly coupled with creation by self. This sense of ownership is lessened in neural media, where the investment in process shifts from tactile space toward conceptual space as hand tools are exchanged for code conventions. When intentionality shifts in the direction of mediation by technology, proprioceptive interactivity is compromised. The artist may start to question self-authorship, asking “is this my work?”, “is this my voice?”—questions that entail personal history, causality, and perhaps ultimately the explainability of AI [1, 23, 25]. The human tacit dimension is seemingly as difficult to explain as the black box of artificial intelligence, thus, at their interstice, human intention, ability, and expression risk disruption in the translational encounter with AI technologies of expression; the metaphors of computationally mediated praxis simultaneously expand and limit human creative intention and interactivity. In this light, Gombrich’s thesis in The Sense of Order [24] is interesting. He argues that it is “the growing preference of many individuals for ‘more of this’ and ‘less of that’ that ultimately emerges as a trend, a fashion, and finally as a style.” What does this imply about the “style” of “artificial” art? “Style” is the way that a human artist interacts with a media or technology, but the implicit mediation of AI disrupts autographic intention with a layering of algorithmic opacity. Neural art thus inspires the questions “Can machines hold intention? Does the human–AI relation constitute a new form of intersubjectivity?”
The essential features of the digital artefact are not locative in the sense of the autographic—as a singular manifestation of a series of sensorimotor events. The artefact is rather a “constant stream of processing” [7, p. 2] that is inherently reliant on an extended network of relations, resisting objectification [52, p. 1]. The reification of visual expression in neural media is paradoxically entangled with a ubiquitous absence of the singular referent. Instead, this emerging media traces an ambiguous line between predictability (the linearity of code versus the tactile interaction of traditional artistic practices) and unpredictability (the infinitely divisible subtlety of emergent patterns in complex data versus the intimate vaguery of psychological events). The neural artist situates praxis in a technologically augmented exploration of the relation between self and artefact. Reflecting Pollock, we might say that machines can offer us the latent wonder of the “big area”, but we cannot yet “feel more at ease” in it. Ownership of the creative process in neural media becomes a curatorial exchange at once subjective and externally mediated.
We are in the era of an art of alterity.
Acknowledgment
This research has been partially supported by the Social Sciences and Humanities Research Council of Canada (SSHRC).
References
1. Adadi, A. and Berrada, M., "Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)," IEEE Access 6, 52138–52160 (2018). https://doi.org/10.1109/ACCESS.2018.2870052
2. AIArtists, "The Top 25 AI Artists of 2019," AIArtists.org, retrieved 6 September 2019.
3. Arcas, B. A. y, "Art in the age of machine intelligence," Artists and Machine Intelligence (2016). https://medium.com/artists-and-machine-intelligence/what-is-ami-ccd936394a83
4. Augustin, M. D., Leder, H., Hutzler, F., and Carbon, C.-C., "Style follows content: On the microgenesis of art perception," Acta Psychologica 128, 127–138 (2008). https://doi.org/10.1016/j.actpsy.2007.11.006
5. Augustin, M. D., Defranceschi, B., Fuchs, H. K., Carbon, C.-C., and Hutzler, F., "The neural time course of art perception: An ERP study on the processing of style versus content in art," Neuropsychologia 49, 2071–2081 (2011). https://doi.org/10.1016/j.neuropsychologia.2011.03.038
6. Belke, B., Leder, H., and Carbon, C.-C., "When challenging art gets liked: Evidences for a dual preference formation process for fluent and non-fluent portraits," PLoS One 10 (2015).
7. Berry, D. M., Critical Theory and the Digital (Bloomsbury Academic, New York and London, 2014).
8. Crowther, P., What Drawing and Painting Really Mean: The Phenomenology of Image and Gesture (Routledge, New York, 2017). https://doi.org/10.4324/9781315311852
9. Crowther, P., "Conditions of creativity: drawing and painting with computers," in What Drawing and Painting Really Mean: The Phenomenology of Image and Gesture (Routledge, New York, NY, 2017), pp. 132–153.
10. Clark, A., "Whatever next? Predictive brains, situated agents, and the future of cognitive science," Behav. Brain Sci. 36, 181–204 (2013). https://doi.org/10.1017/S0140525X12000477
11. Davey, N., Davies, S., Higgins, K. M., Hopkins, R., Stecker, R., and Cooper, D. E., "Alexander Baumgarten," A Companion to Aesthetics (John Wiley & Sons, Hoboken, UK, 2009), pp. 40–41, 162–163.
12. Eco, U., The Open Work, translated by Anna Cancogni (Harvard University Press, Cambridge, MA, 1989).
13. Ernst, M., Max Ernst: Beyond Painting (Wittenborn, Schultz, New York, NY, 2009; first published 1948).
14. Gamboni, D., Potential Images: Ambiguity and Indeterminacy in Modern Art (Reaktion Books, London, 2002).
15. Gallison, P., "Images scatter into data, data gather into images," in ICONOCLASH: Beyond the Image Wars in Science, Religion, and Art, edited by Latour, B. and Weibel, P. (ZKM Center for Art and Media, Karlsruhe, and MIT Press, 2002), pp. 300–323.
16. Gatys, L. A., Ecker, A. S., and Bethge, M., "A neural algorithm of artistic style," arXiv:1508.06576 [cs, q-bio] (2015).
17. Ghahramani, Z., "Probabilistic machine learning and artificial intelligence," Nature 521, 452–459 (2015). https://doi.org/10.1038/nature14541
18. Gombrich, E. H., Art and Illusion: A Study in the Psychology of Pictorial Representation (Princeton University Press, Princeton, NJ, 1989; first published 1960).
19. Gooding, M., Abstract Art (Tate Publishing, London, 2001).
20. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning (MIT Press, Cambridge, MA, 2016).
21. Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y., "Generative adversarial nets," Advances in Neural Information Processing Systems 27 (NIPS 2014).
22. Goldfarb, L., Gay, D., Golubitsky, O., Korkin, D., and Scrimger, I., "What is a structural representation: A proposal for an event-based representational formalism (Sixth Variation)," Technical Report, Faculty of Computer Science, University of New Brunswick (2007). http://www.cs.unb.ca/~goldfarb/ETSbook/ETS6.pdf
23. Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., Kieseberg, P., and Holzinger, A., "Explainable AI: the new 42?," in Machine Learning and Knowledge Extraction, edited by Holzinger, A., Kieseberg, P., Tjoa, A. M., and Weippl, E. (Springer International Publishing AG, Cham, Switzerland, 2018), pp. 295–303.
24. Gombrich, E. H., The Sense of Order: A Study in the Psychology of Decorative Art (Phaidon, London, 1984; first published 1979).
25. Gunning, D. and Aha, D., "DARPA's explainable artificial intelligence (XAI) program," AI Magazine 40, 44–58 (2019).
26. Henderson, S. G. and Nelson, B. L., "Chapter 1: Stochastic computer simulation," in Handbooks in Operations Research and Management Science, Vol. 13, edited by Henderson, S. G. and Nelson, B. L. (Elsevier, Amsterdam, the Netherlands, 2006), pp. 1–18. https://doi.org/10.1016/S0927-0507(06)13001-7
27. Hess, T. B., Abstract Painting: Background and American Phase (Viking Press, New York, 1951).
28. Ihde, D., Technology and the Lifeworld: From Garden to Earth (Indiana University Press, Bloomington and Indianapolis, 1990).
29. Ishai, A., Fairhall, S. L., and Pepperell, R., "Perception, memory and aesthetics of indeterminate art," Brain Research Bulletin 73, 319–324 (2007).
30. Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A., "Image-to-image translation with conditional adversarial networks," arXiv:1611.07004 [cs], 26 November 2018 [cited 2 February 2021].
31. Jaglom, H., "The independent filmmaker," in The Movie Business Book, 2nd ed., edited by Squire, J. E. (Fireside/Simon & Schuster, New York, NY, 1992), pp. 74–81.
32. Kleene, S. C., "Representation of events in nerve nets and finite automata," in Automata Studies, Annals of Mathematics Studies, Vol. 34, edited by Shannon, C. E. and McCarthy, J. (Princeton University Press, Princeton, NJ, 1956), pp. 3–42. https://doi.org/10.1515/9781400882618-002
33. Kuhn, T. S., The Structure of Scientific Revolutions (University of Chicago Press, Chicago, IL, 1962; 2012 edition).
34. Laing, C. and Lord, G. J., Stochastic Methods in Neuroscience (Oxford University Press, Oxford, 2010).
35. LeCun, Y., Bengio, Y., and Hinton, G., "Deep learning," Nature 521, 436–444 (2015).
36. Margolis, J., What, After All, Is a Work of Art? Lectures in the Philosophy of Art (Pennsylvania State University Press, University Park, PA, 1999).
37. McCarthy, J., Minsky, M., Rochester, N., and Shannon, C. E., "A proposal for the Dartmouth summer research project on artificial intelligence" (1955). http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf, accessed 1 September 2019.
38. McCaig, G. and DiPaola, S., unpublished in-lab technical document.
39. "A. W. Mellon Lectures in the Fine Arts," accessed 2 September 2019. https://www.nga.gov/audio-video/mellon.html
40. Miller, A. I., The Artist in the Machine: The World of AI-Powered Creativity (MIT Press, Cambridge, MA, 2019).
41. Mitchell, T. M., Machine Learning (McGraw-Hill, New York and London, 1997).
42. Mordvintsev, A., Olah, C., and Tyka, M., "Inceptionism: going deeper into neural networks," Google AI Research Blog (2015). https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
43. Moszynska, A., Abstract Art (Thames and Hudson, London, UK, 1990).
44. Muth, C. and Carbon, C.-C., "The Aesthetic Aha: On the pleasure of having insights into Gestalt," Acta Psychologica 144, 25–30 (2013).
45. Muth, C. and Carbon, C.-C., "SeIns: semantic instability in art," Art and Perception 4, 145–184 (2016).
46. Muth, C., Hesslinger, V. M., and Carbon, C.-C., "Variants of semantic instability (SeIns) in the arts: A classification study based on experiential reports," Psychology of Aesthetics, Creativity, and the Arts 12, 11–23 (2018).
47. Nadin, M., "The anticipatory profile. An attempt to describe anticipation as process," Int. J. General Systems 41, 43–75 (2012). https://doi.org/10.1080/03081079.2011.622093
48. Nadin, M., "Quantifying anticipatory characteristics. The AnticipationScope™ and the anticipatory profile™," in Advanced Intelligent Computational Technologies and Decision Support Systems, edited by Iantovics, B. and Kountchev, R. (Springer International Publishing, Cham, Switzerland, 2014), pp. 143–160. https://doi.org/10.1007/978-3-319-00467-9_13
49. Namuth, H., "Jackson Pollock: Paintings Have a Life of Their Own," n.d., accessed 7 September 2019. https://www.sfmoma.org/watch/jackson-pollock-paintings-have-a-life-of-their-own/
50. Obvious, "Obvious explained," Medium, last modified 14 February 2018. https://medium.com/@hello.obvious/ai-the-rise-of-a-new-art-movement-f6efe0a51f2e
51. Ortlieb, S. A., Kügel, W. A., and Carbon, C.-C., "Fechner (1866): The aesthetic association principle—a commented translation," i-Perception 11 (2020). https://doi.org/10.1177/2041669520920309
52. Paul, C., New Media in the White Cube and Beyond: Curatorial Models for Digital Art (University of California Press, Berkeley, CA, 2008).
53. Polanyi, M., The Tacit Dimension (Doubleday & Co., Garden City, NY, 1966).
54. Redies, C., "A universal model of esthetic perception based on the sensory coding of natural stimuli," Spatial Vision 21, 97–117 (2007).
55. Rumelhart, D. E., Hinton, G. E., and Williams, R. J., "Learning representations by back-propagating errors," Nature 323, 533–536 (1986). https://doi.org/10.1038/323533a0
56. Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach (Prentice Hall, Upper Saddle River, NJ, 1995).
57. Seeley, W. P., "Naturalizing aesthetics: art and the cognitive neuroscience of vision," J. Visual Art Practice 5, 195–213 (2006).
58. Siegel, J., "Adolph Gottlieb at 70," ARTnews, Vol. 72 (The Art Foundation, New York, NY, 1973), pp. 57–59.
59. Snow, C. P., The Two Cultures (Cambridge University Press, New York, NY, 1959; 2012 edition). https://doi.org/10.1017/CBO9781139196949
60. Spratt, E. L., "Dream formulations and deep neural networks: humanistic themes in the iconology of the machine-learned image," arXiv:1802.01274, 5 February 2018.
61. Spies, W., Max Ernst: Life and Work (Thames & Hudson, London, 2006).
62. Székely, T. and Burrage, K., "Stochastic simulation in systems biology," Computat. Struct. Biotechnol. J., 14–25 (2014). https://doi.org/10.1016/j.csbj.2014.10.003
63. Töyrylä, H., "Transformative and generative in neural art" [Internet], cited 2 February 2021. http://liipetti.net/visual/transformative-and-generative-in-neural-art/
64. Varnedoe, K., Pictures of Nothing: Abstract Art Since Pollock (Princeton University Press, Princeton and Oxford, 2006).
65. Varnedoe, K., Pictures of Nothing: Abstract Art since Pollock, Part 1: Why Abstract Art?, audio, 1:07:16, released 10 January 2012. https://soundcloud.com/nationalgalleryofart/pictures-of-nothing-abstract-5
66. Wagemans, J., "Towards a new kind of experimental psycho-aesthetics? Reflections on the Parallellepipeda project," i-Perception 2, 648–678 (2011).
67. Wang, X., Bylinskii, Z., Hertzmann, A., and Pepperell, R., "Toward quantifying ambiguities in artistic images," ACM Trans. Appl. Perception 17, 1–10 (2020).
68. Wikipedia, "Dartmouth Workshop," last modified 17 July 2019. https://en.wikipedia.org/w/index.php?title=Dartmouth_workshop&oldid=906654002