Editorial
Volume: 5 | Article ID: 000101
From the Special Issue on Multisensory & Crossmodal Interactions Guest Editors
DOI: 10.2352/J.Percept.Imaging.2022.5.000101 | Published Online: March 2022
Cite this article
Lora Likova, Fang Jiang, Noelle R. B. Stiles, Armand R. Tanguay, Jr., "From the Special Issue on Multisensory & Crossmodal Interactions Guest Editors," Journal of Perceptual Imaging, 2022, pp. 000101-1 - 000101-2, https://doi.org/10.2352/J.Percept.Imaging.2022.5.000101

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2022
Journal of Perceptual Imaging (J. Percept. Imaging), ISSN 2575-8144, Society for Imaging Science and Technology
This Special Issue grew out of a Topical Session at the January 2020 Human Vision and Electronic Imaging (HVEI) meeting, organized by Lora Likova, the Guest Editor-in-Chief of this Special Issue, who co-chaired it with Mark McCourt.
For the Special Issue, we invited both empirical and review contributions featuring behavioral, brain imaging, and neuroplasticity investigations, as well as methodological studies. In response, we received an array of high-quality submissions, ranging from basic empirical studies to clinical applications of crossmodal stimulation techniques and review articles. All submissions to the Special Issue have undergone full peer review, maintaining the high editorial standards applied to regular submissions to the Journal of Perceptual Imaging, IS&T’s open-access journal at the intersection of perception and imaging.
The Special Issue on Multisensory & Crossmodal Interactions begins with a review by Catherine Viengkham and Branka Spehar of crossmodal aesthetic preferences in the auditory and tactile domains in addition to those in the visual domain. Media such as film and music introduce a uniquely rich array of temporally changing auditory as well as visual experiences, and product design, ranging from furniture to clothing, depends strongly on pleasant tactile as well as visual perceptions. The authors draw attention to the spatial and temporal frequency analysis of image structure, and in particular to how fractal dimension affects multisensory aesthetic preferences across these sensory domains.
The theme of crossmodal aesthetics advances further with the article by A. K. M. Rezaul Karim, Sanchary Prativa, and Lora T. Likova, who conducted the first experimental study of tactile aesthetics. Their study was designed to examine the effects of visual experience on both affective and discriminative aspects of tactile perception by comparing judgments of tactile objects, explored through active touch, by congenitally blind, late blind, and (blindfolded) sighted participants. Remarkably, significant differences were identified in the three (visual) affective dimensions of relaxation, hedonics, and arousal as applied to non-visual (tactile) objects, and, interestingly, none of these behavioral judgments varied significantly as a function of the level of vision or visual experience. In general, smoother or softer tactile stimuli were preferred over rougher or harder ones. The study reveals the three-dimensional affective structure of visual aesthetics to be amodal, in that it is equally applicable to tactile aesthetics. These findings suggest that physical shape and texture parameters not only affect basic tactile discrimination, but also differentially mediate tactile preferences and aesthetic appreciation.
The paper by Ryan J. Ward, Sophie M. Wuerger, and Alan Marshall focuses on the question of whether crossmodal interactions are affected by sensory expectations. Their model sense is the olfactory system, and they investigated crossmodal impacts on olfaction for a large sample of observers in an elaborate multisensory paradigm involving smell, shape, color, pitch, musical genre, and emotional dimensions. Consistent crossmodal correspondences were obtained in all cases, most likely mediated by both the knowledge of an odor’s identity and the underlying hedonic ratings. Although knowledge of the smell identity as a cognitive factor played a significant role in the perceived quality of the multimodal experiences, the predominant factor in the strength of the associations was the observer’s emotional response to the crossmodal stimuli.
In the clinical setting, a key crossmodal issue is the degree to which crossmodal interactions are either general across the interacting senses or restricted to particular aspects of their encoding. As an example, compensatory enhancement of visual processing in those with auditory deficits had previously only been studied in the central visual field, and was assumed to be weak in the periphery. Cassandra R. Lee, Elizabeth Groesbeck, O. Scott Gwinn, Michael A. Webster, and Fang Jiang show that the enhanced facial discrimination ability characteristic of deaf individuals is not restricted to foveal presentation, but instead extends to the peripheral visual field, particularly for more complex stimuli.
An interesting and well-established crossmodal illusion is the Ventriloquist Effect, in which the sound source of a voice is often attributed to the visual location of a spatially displaced speaker whose mouth movements are coordinated with the speech dynamics. Thirsa Huisman, Torsten Dau, Tobias Piechowiak, and Ewen MacDonald analyzed the role of stimulus realism in the strength of the Ventriloquist Effect using a virtual reality setup with a distractor task to control for eye movements. Intriguingly, varying the visual and auditory realism from a flash paired with a noise burst to a bouncing-ball video paired with its corresponding sound had no significant effect on the perceived mislocalization, implying that the Ventriloquist Effect involves a basic crossmodal sensory function that is relatively independent of the specific nature of the stimuli.
In the everyday world, the color and other visual appearance properties of food and drink strongly affect their perceived taste and flavor, and are therefore key factors in determining consumer acceptance and choice behavior. Charles Spence and Carmel A. Levitan address the context-dependent meaning of the ubiquitous crossmodal effects of color on the taste and flavor of food, contrasting the framework of semantic congruency acquired through statistical learning with emotionally mediated correspondences in these crossmodal relationships. In their review, the authors discuss several key approaches that have been used to understand and predict the multisensory interactions among the senses, including Ecological Valence Theory, Color-in-Context Theory, and Bayesian Causal Inference. They also suggest the potential value of employing a computational network modeling approach to multisensory integration as a future research direction in the study of contextual effects on the perception of taste and flavor.
A review article by Noelle R. B. Stiles, Armand R. Tanguay, Jr., and Shinsuke Shimojo further expands the scope of this Special Issue by focusing on crossmodal postdiction, in which later stimuli in one modality influence the perception of earlier stimuli in a different modality. In their review, the authors discuss three basic types of this phenomenon: unimodal postdiction with crossmodal influence, crossmodal postdiction with emergent illusory perception, and crossmodal postdiction with crossmodal illusory perception, and illustrate each with a variety of postdictive phenomena that can occur in the crossmodal context. They also describe the adaptation to crossmodal postdiction of three key neuropsychological models previously proposed for unimodal postdiction: the Catch Up Model, the Reentry Model, and the Different Pathways Model. They point out that postdiction in general, though seemingly paradoxical, can be understood in several of these models from the perspective that conscious experiences occur with a roughly 500 ms delay from the primary neural signals that give rise to them, allowing time for later-arriving neural responses (e.g., from later sensory stimuli) to influence earlier-arriving neural responses before an integrated perception rises to the level of consciousness.
The Special Issue is rounded out by an article on practical medical applications. Multisensory integration is a key factor in everyday mobility that can become degraded in a wide variety of medical conditions. Jeannette R. Mahoney, Claudene J. George, and Joe Verghese have developed a step-by-step protocol for administering tests of multisensory integration and calculating the resulting integration effects, in order to facilitate innovative translational research across diverse clinical and demographic populations. Specifically, they have linked visual-somatosensory integration to attention and to aspects of motor function such as balance, gait, and falls. The protocol has been implemented as an iPhone app for identifying patients at increased risk of falls, with the aim of promoting physician-initiated risk-of-falls counseling that alleviates disability, fosters independence, and increases quality of life for older adults.
This Special Issue highlights contemporary research on multisensory and crossmodal interactions, and advances our current understanding of scientific methods, findings, and theoretical ideas in these fields. Beyond the many questions that the Special Issue addresses, there are many more to be asked and investigated. In particular, a continuously expanding range of theoretical and empirical tools is envisioned that could greatly accelerate the progress of ongoing investigations. Altogether, research on multisensory interactions and integration opens up a vast new domain of exciting interdisciplinary inquiry.