Pages 000409-1 - 000409-12,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Light-permeable materials are usually characterized by the perceptual attributes of transparency, translucency, and opacity. Technical definitions and standards leave room for subjective interpretation of how these perceptual attributes relate to optical properties and to one another, which causes miscommunication in industry and academia alike. A recent work hypothesized that a Gaussian function, or a similar bell-shaped curve, describes the relationship between translucency on the one hand and transparency and opacity on the other. Another work proposed a translucency classification system for computer graphics, in which transparency, translucency, and opacity are modulated by three optical properties: subsurface scattering, subsurface absorption, and surface roughness. In this work, we conducted two psychophysical experiments to scale the magnitude of transparency and translucency of different light-permeable materials, to test the hypothesis that a Gaussian function can model the relationship between transparency and translucency, and to assess how well the aforementioned classification system describes the relationship between optical and perceptual properties. We found that the results vary significantly between shapes. While a bell-shaped relationship between transparency and translucency was observed for spherical objects, it did not generalize to a more complex shape. Furthermore, how optical properties modulate transparency and translucency also depends on object shape. We conclude that these cross-shape differences are rooted in the different image cues generated by different object scales and surface geometries.
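The hypothesized bell-shaped mapping can be sketched numerically. The abstract specifies only a Gaussian-like curve, so the peak position `mu` and width `sigma` below are illustrative assumptions, not values from the study:

```python
import numpy as np

def translucency_gaussian(x, mu=0.5, sigma=0.2):
    """Hypothesized bell-shaped translucency as a function of position x
    on an axis running from full transparency (x=0) to full opacity (x=1).
    mu and sigma are illustrative, not fitted values."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

x = np.linspace(0.0, 1.0, 101)
t = translucency_gaussian(x)
# Under this hypothesis, translucency peaks between the transparent and
# opaque extremes and falls off toward both ends of the axis.
```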

Digital Library: JPI
Published Online: September  2022
Pages 000503-1 - 000503-12,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Both natural scene statistics and ground surfaces have been shown to play important roles in visual perception, in particular in the perception of distance. Yet there have been surprisingly few studies examining the natural statistics of distances to the ground, and those that exist used a loose definition of ground. Additionally, perception studies investigating the role of the ground surface typically use artificial scenes containing perfectly flat ground surfaces with relatively few non-ground objects present, whereas ground surfaces in natural scenes are typically non-planar and have a large number of non-ground objects occluding the ground. Our study investigates the distance statistics of many natural scenes across three datasets, with the goal of separately analyzing the ground surface and non-ground objects. We used a recent filtering method to partition LiDAR-acquired 3D point clouds into ground points and non-ground points. We then examined how the distance distributions depend on distance, viewing elevation angle, and simulated viewing height. We found, first, that the distance distribution of ground points shares some similarities with that of a perfectly flat plane, namely a sharp peak at a near distance that depends on viewing height, but also some differences. Second, the distribution of non-ground points is flatter and did not vary with viewing height. Third, the proportion of non-ground points increases with viewing elevation angle. Our findings provide further insight into the statistical information available for distance perception in natural scenes, and suggest that studies of distance perception should consider a broader range of ground surfaces and object distributions than has been used in the past, in order to better reflect the statistics of natural scenes.
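The ground/non-ground partition can be illustrated with a deliberately simplified height-threshold filter: within each xy grid cell, points near the cell's lowest elevation are labeled ground. The actual filtering method used in the study is more sophisticated; the cell size and tolerance here are arbitrary assumptions:

```python
import numpy as np

def split_ground(points, cell=1.0, height_tol=0.3):
    """Crude grid-based ground filter (illustrative only).
    points: (N, 3) array of x, y, z coordinates.
    Returns a boolean mask: True = ground point."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    # Lowest elevation observed in each xy cell.
    cell_min = {}
    for idx, key in enumerate(map(tuple, ij)):
        z = points[idx, 2]
        if key not in cell_min or z < cell_min[key]:
            cell_min[key] = z
    # A point is "ground" if it sits within height_tol of its cell's minimum.
    return np.array([points[i, 2] - cell_min[tuple(ij[i])] <= height_tol
                     for i in range(len(points))])
```

On a toy cloud mixing near-zero-elevation terrain points with elevated object points, the mask separates the two groups by height within each cell.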

Digital Library: JPI
Published Online: July  2022
Special Issue: Special Issue on Multisensory & Crossmodal Interactions
Pages 000101-1 - 000101-2,  © Society for Imaging Science and Technology 2022
Digital Library: JPI
Published Online: March  2022
Special Issue: Special Issue on Multisensory & Crossmodal Interactions
Pages 000406-1 - 000406-14,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

The investigation of aesthetics has primarily been conducted within the visual domain. This is not a surprise, as aesthetics has largely been associated with the perception and appreciation of visual media, such as traditional artworks, photography, and architecture. However, one doesn’t need to look far to realize that aesthetics extends beyond the visual domain. Media such as film and music introduce a unique and equally rich temporally changing visual and auditory experience. Product design, ranging from furniture to clothing, strongly depends on pleasant tactile evaluations. Studies involving the perception of 1/f statistics in vision have been particularly consistent in demonstrating a preference for a 1/f structure resembling that of natural scenes, as well as systematic individual differences across a variety of visual objects. Interestingly, comparable findings have also been reached in the auditory and tactile domains. In this review, we discuss some of the current literature on the perception of 1/f statistics across the contexts of different sensory modalities.
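As an illustration of what a 1/f structure means in practice, white noise can be spectrally reshaped so that its power falls off as 1/f^alpha (alpha = 1 gives the classic "pink" profile associated with natural scenes). This sketch is illustrative and not part of the reviewed studies:

```python
import numpy as np

def pink_noise(n, alpha=1.0, rng=None):
    """Generate a 1/f^alpha noise signal by shaping the spectrum
    of Gaussian white noise."""
    rng = np.random.default_rng(rng)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                  # avoid dividing by zero at DC
    spectrum /= freqs ** (alpha / 2.0)   # amplitude ~ f^(-alpha/2), power ~ 1/f^alpha
    signal = np.fft.irfft(spectrum, n)
    return signal / np.abs(signal).max()
```

Fitting a line to the log power spectrum of the output recovers a slope near -alpha, which is how the 1/f character of a stimulus is typically quantified.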

Digital Library: JPI
Published Online: March  2022
Pages 000501-1 - 000501-18,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

In this work, we study the perception of suprathreshold translucency differences to expand knowledge about material appearance perception in imaging, computer graphics, and 3D printing applications. Translucency is one of the most salient appearance attributes and significantly affects the look of objects and materials. However, knowledge about translucency perception remains limited, and even less is known about the perception of translucency differences between materials. We hypothesize that humans are more sensitive to small changes in absorption and scattering coefficients when optically thin materials are examined and when objects have geometrically thin parts. To test these hypotheses, we generated images of objects with different shapes and subsurface scattering properties and conducted psychophysical experiments with these visual stimuli. The analysis of the experimental data supports these hypotheses, and based on post-experiment comments made by the observers, we argue that the results could reflect a fundamental difference between translucency perception mechanisms in see-through and non-see-through objects and materials.

Digital Library: JPI
Published Online: March  2022
Pages 000502-1 - 000502-15,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Medical image data is critically important for a range of disciplines, including medical image perception research, clinician training programs, and computer vision algorithms, among many other applications. Unfortunately, authentic medical image data is relatively scarce for many of these uses. Because of this, researchers often collect their own data in nearby hospitals, which limits the generalizability of the data and findings. Moreover, even when larger datasets become available, they are of limited use because of the necessary data processing procedures, such as de-identification, labeling, and categorizing, which require significant time and effort. Thus, in some applications, including behavioral experiments on medical image perception, researchers have used naive artificial medical images (e.g., shapes or textures that are not realistic). These artificial images are easy to generate and manipulate, but their lack of authenticity inevitably raises questions about the applicability of the research to clinical practice. Recently, with the great progress in Generative Adversarial Networks (GANs), authentic-looking images can be generated with high quality. In this paper, we propose using GANs to generate authentic medical images for medical imaging studies. We also adopt a controllable method to manipulate the generated image attributes so that these images can satisfy arbitrary experimenter goals, tasks, or stimulus settings. We have tested the proposed method on various medical image modalities, including mammogram, MRI, CT, and skin cancer images. The quality of the generated images verifies the success of the proposed method. The model and generated images could be employed in any medical image perception research.

Digital Library: JPI
Published Online: March  2022
Special Issue: Special Issue on Multisensory & Crossmodal Interactions
Pages 000401-1 - 000401-7,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and a subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with American Sign Language and not necessarily by hearing loss. This, combined with the fact that face processing abilities typically decline with eccentricity, suggests that face processing enhancements may not extend to the periphery in deaf individuals. Using a face matching task, the authors examined whether deaf individuals’ enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. These results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field, and extend them by showing that these enhancements also occur in the periphery for more complex stimuli.

Digital Library: JPI
Published Online: February  2022
Special Issue: Special Issue on Multisensory & Crossmodal Interactions
Pages 000402-1 - 000402-12,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Olfaction is ingrained into the fabric of our daily lives and constitutes an integral part of our perceptual reality. Within this reality, there are crossmodal interactions and sensory expectations; understanding how olfaction interacts with other sensory modalities is crucial for augmenting interactive experiences with more advanced multisensorial capabilities. This knowledge will eventually lead to better designs, more engaging experiences, and an enhanced perceived quality of experience. Toward this end, the authors investigated a range of crossmodal correspondences between ten olfactory stimuli and different modalities (angularity of shapes, smoothness of texture, pleasantness, pitch, colors, musical genres, and emotional dimensions) using a sample of 68 observers. Consistent crossmodal correspondences were obtained in all cases, including the novel modality of texture smoothness. These associations are most likely mediated by both knowledge of an odor’s identity and the underlying hedonic ratings: knowledge of an odor’s identity plays a role when judging the emotional and musical dimensions, but not the angularity of shapes, smoothness of texture, perceived pleasantness, or pitch. Overall, hedonics was the most dominant mediator of crossmodal correspondences.

Digital Library: JPI
Published Online: February  2022
Special Issue: Special Issue on Multisensory & Crossmodal Interactions
Pages 000403-1 - 000403-16,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Postdiction occurs when later stimuli influence the perception of earlier stimuli. As the multisensory science field has grown in recent decades, the investigation of crossmodal postdictive phenomena has also expanded. Crossmodal postdiction can be considered (in its simplest form) the phenomenon in which later stimuli in one modality influence earlier stimuli in another modality (e.g., Intermodal Apparent Motion). Crossmodal postdiction can also appear in more nuanced forms, such as unimodal postdictive illusions (e.g., Apparent Motion) that are influenced by concurrent crossmodal stimuli (e.g., Crossmodal Influence on Apparent Motion), or crossmodal illusions (e.g., the Double Flash Illusion) that are influenced postdictively by a stimulus in one or the other modality (e.g., a visual stimulus in the Illusory Audiovisual Rabbit Illusion). In this review, these and other varied forms of crossmodal postdiction will be discussed. Three neuropsychological models proposed for unimodal postdiction will be adapted to the unique aspects of processing and integrating multisensory stimuli. Crossmodal postdiction opens a new window into sensory integration, and could potentially be used to identify new mechanisms of crossmodal crosstalk in the brain.

Digital Library: JPI
Published Online: February  2022
Special Issue: Special Issue on Multisensory & Crossmodal Interactions
Pages 000404-1 - 000404-10,  © Society for Imaging Science and Technology 2022
Volume 5
Abstract

Despite more than 60 years of research, it has remained uncertain if and how realism affects the ventriloquist effect. Here, a sound localization experiment was run using spatially disparate audio-visual stimuli. The visual stimuli were presented in virtual reality, allowing easy manipulation of their degree of realism. Starting from stimuli commonly used in ventriloquist experiments, i.e., a light flash and a noise burst, a new factor was added or changed in each condition to investigate the effects of movement and realism without confounding them with an increased temporal correlation of the audio-visual stimuli. First, a distractor task was introduced to ensure that participants maintained eye fixation during the experiment. Next, movement was added to the visual stimuli while maintaining a similar temporal correlation between the stimuli. Finally, by changing the stimuli from the flash and noise burst to the visuals of a bouncing ball that made a matching impact sound, the effect of realism was assessed. No evidence for an effect of realism or movement of the stimuli was found, suggesting that, in simple scenarios, the ventriloquist effect might not be affected by stimulus realism.

Digital Library: JPI
Published Online: February  2022
