Pages 1 - 2,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

In the perceptual sciences we face a clear predominance of the visual domain. Even job opportunities and textbooks follow this trend, which was initiated by the first scientists to systematically address perceptual phenomena. One reason might be the relative ease of testing visual stimuli; another might be a particular focus on visual phenomena in everyday life as well. Owing to this 150-year neglect of the other sensory domains, we face a huge gap in our knowledge of holistic perception comprising multisensory experience. This creates problems for understanding multisensory phenomena, particularly in product usage, where the interaction between haptics and vision plays a major role. Here, I present a functional model of haptic aesthetics that aims to link the domain of haptics with other modalities.

Digital Library: EI
Published Online: February 2016
Pages 1 - 4,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

Numerous studies have found that congenitally blind individuals have better verbal memory than their normally sighted counterparts. However, it is not known whether this reflects a superiority of verbal abilities or of memory abilities. In order to distinguish between these possibilities, we tested congenitally blind participants and age-matched, normally sighted control participants on verbal and spatial memory tasks, as well as on verbal fluency tasks and a spatial imagery task. Congenitally blind participants were significantly better than sighted controls on the verbal memory and verbal fluency tasks, but not on the spatial memory or spatial imagery tasks. Thus, the congenitally blind have superior verbal, but not spatial, abilities. This may reflect their greater reliance on verbal information, and it is consistent with the growing literature implicating visual cortex in language processing in the congenitally blind.

Digital Library: EI
Published Online: February 2016
Pages 1 - 4,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

The senses have traditionally been studied separately, but it is now recognised that the brain is just as richly multisensory as our natural environment. This creates fresh challenges for understanding how complex multisensory information is organised and coordinated around the brain. Take timing, for example: the sight and sound of a person speaking or a ball bouncing may seem simultaneous, but the neural signals from each modality arrive at different multisensory areas in the brain at different times. How do we nevertheless perceive the synchrony of the original events correctly? It is popularly assumed that this is achieved via some mechanism of multisensory temporal recalibration. But recent work from my lab on normal and pathological individual differences shows that sight and sound are nevertheless markedly out of sync, by different amounts for each individual and even for different tasks performed by the same individual. Indeed, an individual may perceive the same multisensory event as having an auditory lead in one task and an auditory lag in another. This evidence of apparent temporal disunity sheds new light on the deep problem of understanding how neural timing relates to perceptual timing of multisensory events. It also leads to concrete therapeutic applications: for example, we may now be able to improve an individual's speech comprehension by simply delaying sound or vision to compensate for their individual perceptual asynchrony.

Digital Library: EI
Published Online: February 2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary edge images (black edges on a white background or white edges on a black background) have been used to represent features (edges and cusps) in scenes. However, the polarity of cusps and edges may carry important depth information (depth from shading), which is lost in the binary edge representation. This depth information can be restored, to some degree, using bipolar edges. We compared recognition rates of 16 scenes rendered as binary or bipolar edge images, judged by 26 subjects. Object recognition rates were higher with bipolar edges, and the improvement was significant in scenes with complex backgrounds.
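
As an illustration of the two representations, here is a minimal sketch; the abstract does not specify the authors' edge operator, so the Laplacian-of-Gaussian filter and fixed threshold below are assumptions. A binary map records only where an edge is, while a bipolar map also keeps the sign of the luminance change:

    # Sketch: binary vs. bipolar edge maps from a grayscale image in [0, 1].
    # The Laplacian of Gaussian responds with opposite signs on the dark and
    # light sides of a luminance edge, so keeping its sign preserves polarity.
    import numpy as np
    from scipy import ndimage

    def edge_maps(image, sigma=2.0, thresh=0.02):
        log = ndimage.gaussian_laplace(image, sigma=sigma)
        edges = np.abs(log) > thresh
        binary = edges.astype(float)      # edge present / absent only
        bipolar = np.sign(log) * edges    # polarity retained: -1, 0, or +1
        return binary, bipolar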

Digital Library: EI
Published Online: February 2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

Not all the visual information acquired by the eyes is processed by the human visual system. Human visual attention selects the information most relevant to the task at hand, and unexpected objects are overlooked when we are busy attending to something else. Inattentional blindness is a psychological phenomenon in which a visual stimulus goes unnoticed by the observer. This paper presents a study on using subliminal cues to induce subliminal attention shifts in a video scene that showcases inattentional blindness. The goal of this work is to develop and optimize video processing systems for applications such as surveillance and driving safety. Preliminary results indicate that subliminal visual cueing helped make observers aware of objects or events in the scene beyond the primary task.
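
A minimal sketch of how such a cue might be inserted into a video; the cue duration, location, and contrast here are illustrative assumptions, not the parameters used in the study:

    # Add a brief, low-contrast luminance increment to one frame of a video.
    # A single-frame cue of a few percent contrast is one way to bias
    # attention toward a region without the observer noticing it.
    import numpy as np

    def add_subliminal_cue(frames, frame_idx, region, contrast=0.04):
        """frames: float array (n, h, w) in [0, 1]; region: (y0, y1, x0, x1)."""
        y0, y1, x0, x1 = region
        out = frames.copy()
        out[frame_idx, y0:y1, x0:x1] = np.clip(
            out[frame_idx, y0:y1, x0:x1] + contrast, 0.0, 1.0)
        return out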

Digital Library: EI
Published Online: February 2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

A demonstration of the vividness of peripheral color vision is provided by arrays of multicolored disks scaled with eccentricity. These demonstrations are designed to correct the widespread misconception that peripheral color vision is weak or non-existent. In fact, both small and large disks of color scaled with eccentricity demonstrate that color perception is just as strong in throughout the periphery as in the fovea, under appropriate viewing conditions. Moreover, further demonstrations with cone-isolating motion stimuli indicate that motion perception is undiminished with rod activation silenced by the choice of colors with equal activation strengths for the rod spectral sensitivity.
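
A sketch of generating such a display, assuming for illustration that disk radius grows linearly with eccentricity; the exact scaling used in the demonstrations is not given in the abstract:

    # Rings of randomly colored disks whose radius grows linearly with
    # eccentricity, so each disk covers a roughly constant cortical area.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.set_facecolor("gray")
    for ecc in np.linspace(2.0, 20.0, 8):                # ring eccentricity (deg)
        radius = 0.12 * ecc                              # linear size scaling
        n = max(int(np.pi * ecc / (1.5 * radius)), 6)    # avoid disk overlap
        for theta in np.linspace(0, 2 * np.pi, n, endpoint=False):
            x, y = ecc * np.cos(theta), ecc * np.sin(theta)
            ax.add_patch(plt.Circle((x, y), radius, color=rng.random(3)))
    ax.set_xlim(-22, 22)
    ax.set_ylim(-22, 22)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()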

Digital Library: EI
Published Online: February 2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

It is not clear to date how visual deprivation affects auditory spatial perception. Some recent psychophysical evidence describes a spatial auditory deficit in congenitally blind individuals, while other studies have found a spatial auditory improvement. In particular, Gori, Sandini, Martinoli, & Burr (2014) and Vercillo, Milne, Gori, & Goodale (2015) reported that people who were born blind were less efficient than sighted individuals at localizing sound sources with respect to auditory landmarks. On the other hand, blind people performed similarly to, or even better than, sighted participants when localizing single sound sources. We investigated auditory spatial perception in early blind individuals using different auditory spatial tasks and found that they did not succeed in localizing sound sources in an external frame of reference. The performance of early blind participants was severely impaired when localizing brief auditory stimuli with respect to acoustic landmarks (an allocentric frame of reference) but was comparable to that of sighted participants when they had to localize sounds with respect to their own body (an egocentric reference frame). Our results suggest that, after early visual deprivation, auditory spatial perception is centered on an egocentric reference system.

Digital Library: EI
Published Online: February 2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

We present an image quality metric based on the transformations associated with the early visual system: local luminance subtraction and local gain control. Images are decomposed using a Laplacian pyramid, which subtracts a local estimate of the mean luminance at multiple scales. Each pyramid coefficient is then divided by a local estimate of amplitude (weighted sum of absolute values of neighbors), where the weights are optimized for prediction of amplitude using (undistorted) images from a separate database. We define the quality of a distorted image, relative to its undistorted original, as the root mean squared error in this “normalized Laplacian” domain. We show that both luminance subtraction and amplitude division stages lead to significant reductions in redundancy relative to the original image pixels. We also show that the resulting quality metric provides a better account of human perceptual judgements than either MS-SSIM or a recently published gain-control metric based on oriented filters.
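
A compact sketch of the two stages follows; a fixed Gaussian neighborhood stands in for the optimized amplitude weights, which the paper learns from a separate image database, so treat the constants as assumptions:

    # Sketch of a "normalized Laplacian" distance between two grayscale
    # images (assumes dimensions that halve cleanly across pyramid levels).
    import numpy as np
    from scipy import ndimage

    def laplacian_pyramid(img, levels=4):
        """Local luminance subtraction at multiple scales."""
        pyr = []
        for _ in range(levels):
            low = ndimage.zoom(ndimage.gaussian_filter(img, 1.0), 0.5, order=1)
            up = ndimage.zoom(low, np.array(img.shape) / np.array(low.shape), order=1)
            pyr.append(img - up)   # band-pass: pixel minus local mean estimate
            img = low
        pyr.append(img)            # low-pass residual
        return pyr

    def normalize(band, sigma=2.0, c=0.01):
        """Local gain control: divide by a local amplitude estimate."""
        amp = ndimage.gaussian_filter(np.abs(band), sigma)
        return band / (c + amp)

    def nlp_distance(ref, dist, levels=4):
        """Root mean squared error in the normalized Laplacian domain."""
        err = [normalize(a) - normalize(b)
               for a, b in zip(laplacian_pyramid(ref, levels),
                               laplacian_pyramid(dist, levels))]
        return np.sqrt(np.mean([np.mean(e ** 2) for e in err]))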

Digital Library: EI
Published Online: February 2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

Touch-screen displays in cell phones and tablet computers are now pervasive, making them an attractive option for vision testing outside of the laboratory or clinic. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating that is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatiotemporal modulations are easily measured using the same method. A prototype has been developed for Apple's iPad/iPod/iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing, http://hsi.arc.nasa.gov/groups/scanpath/research.php). Preliminary data show good agreement with estimates obtained from traditional psychophysical methods as well as newer rapid estimation techniques. Issues relating to device calibration are also discussed.
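
For illustration, here is a grating swept in frequency along one axis and in contrast along the other (a Campbell-Robson-style chart); the parameter ranges are assumptions, not those of the QuIP prototype:

    # Grating whose spatial frequency sweeps log-linearly left to right and
    # whose contrast sweeps log-linearly top to bottom; a finger swipe along
    # the visibility boundary traces the contrast sensitivity function.
    import numpy as np

    def sweep_grating(width=512, height=512, f0=1.0, f1=60.0, c0=0.005, c1=1.0):
        u = np.linspace(0, 1, width)
        v = np.linspace(0, 1, height)[:, None]
        freq = f0 * (f1 / f0) ** u                   # cycles per image width
        phase = 2 * np.pi * np.cumsum(freq) / width  # integrate local frequency
        contrast = c0 * (c1 / c0) ** v               # low contrast at top row
        return 0.5 + 0.5 * contrast * np.sin(phase)  # mean-gray background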

Digital Library: EI
Published Online: February 2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 16

The mere presence of spatiotemporal distortions in digital videos need not imply quality degradation, since distortion visibility can be strongly reduced by the perceptual phenomenon of visual masking. Flicker is a particularly annoying distortion, which can arise from a variety of distortion processes, yet flicker, too, can be suppressed by masking. We propose a perceptual flicker visibility prediction model based on a recently discovered visual change silencing phenomenon. The proposed model predicts flicker visibility on both static and moving regions without any need for content-dependent thresholds. Using a simple model of cortical responses to video flicker, an energy model of motion perception, and a divisive normalization stage, the system captures the local spectral signatures of flicker distortions and predicts perceptual flicker visibility. The model not only predicts silenced flicker distortions in the presence of motion, but also provides a pixel-wise flicker visibility index. Results show that the model's predictions correlate well with human percepts of flicker distortion on the LIVE Flicker Video Database and are highly competitive with current flicker visibility prediction methods.
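
The abstract's main ingredients can be sketched as follows; this is a hedged approximation in which the temporal band-pass filter and pooling constants are assumptions, not the authors' tuned cortical model:

    # Per-pixel flicker visibility: temporal band-pass "flicker energy"
    # followed by divisive normalization, so responses are suppressed where
    # surrounding spatiotemporal energy (e.g., strong motion) is high.
    import numpy as np
    from scipy import ndimage

    def flicker_visibility(video, t_sigma=1.0, s_sigma=3.0, c=1e-3):
        """video: float array (frames, height, width) in [0, 1]."""
        fast = ndimage.gaussian_filter1d(video, t_sigma, axis=0)
        slow = ndimage.gaussian_filter1d(video, 4.0 * t_sigma, axis=0)
        energy = (fast - slow) ** 2                  # temporal band-pass energy
        pool = ndimage.gaussian_filter(energy, (2.0 * t_sigma, s_sigma, s_sigma))
        return np.mean(energy / (c + pool), axis=0)  # pixel-wise visibility index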

Digital Library: EI
Published Online: February 2016
