Pages A11-1 - A11-6, © Society for Imaging Science and Technology 2021
Digital Library: EI
Published Online: January 2021
Pages 110-1 - 110-7, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

With the growing use of 3D content in various applications, Stereo Image Quality Assessment (SIQA) has attracted increasing attention as a way to ensure a good viewing experience for users. Several methods have thus been proposed in the literature, with deep learning-based methods showing a clear improvement. This paper introduces a new deep learning-based no-reference SIQA method that uses the cyclopean-view hypothesis and human visual attention. First, the cyclopean image is built taking into account the presence of binocular rivalry, which covers the asymmetric-distortion case. Second, a saliency map is computed that incorporates depth information; it is used to extract patches from the most perceptually relevant regions. Finally, a modified version of the pre-trained VGG-19 is fine-tuned and used to predict the quality score from the selected patches. The performance of the proposed metric has been evaluated on the 3D LIVE Phase I and Phase II databases. Compared with state-of-the-art metrics, our method achieves better results.
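The fine-tuning step is straightforward to prototype. Below is a minimal PyTorch sketch of adapting a pre-trained VGG-19 to patch-level quality regression; the class name PatchQualityNet, the regression head sizes, and the averaging of patch scores are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch, assuming PyTorch and torchvision are available.
import torch
import torch.nn as nn
from torchvision import models

class PatchQualityNet(nn.Module):  # hypothetical name
    """Fine-tune a pre-trained VGG-19 backbone to regress a quality score per patch."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.features = vgg.features              # convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))  # makes the head patch-size independent
        self.regressor = nn.Sequential(           # replaces the 1000-way classifier
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, 1),                    # single quality score
        )

    def forward(self, x):  # x: batch of saliency-selected cyclopean-image patches
        return self.regressor(self.pool(self.features(x)))

# An image-level score could then be the mean of its patch predictions.
```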

Digital Library: EI
Published Online: January 2021
Pages 112-1 - 112-6, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

Radiologists and pathologists frequently make highly consequential perceptual decisions. For example, visually searching for a tumor and recognizing whether it is malignant can have a life-changing impact on a patient. Unfortunately, all human perceivers, even radiologists, have perceptual biases. Because human perceivers (medical doctors) will, for the foreseeable future, be the final judges of whether a tumor is malignant, understanding and mitigating human perceptual biases is important. While there has been research on perceptual biases in medical image perception tasks, the stimuli used in these studies were highly artificial and have often been critiqued for it. Realistic stimuli have not been used because it has not been possible to generate or control them for psychophysical experiments. Here, we propose to use Generative Adversarial Networks (GANs) to create vivid and realistic medical image stimuli that can be used in psychophysical and computer vision studies of medical image perception. Our model can generate tumor-like stimuli with specified shapes and realistic textures in a controlled manner. Various experiments demonstrated the authenticity of our GAN-generated stimuli and the controllability of our model.
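To make the generation side concrete, here is a minimal DCGAN-style generator sketch in PyTorch that maps a latent code to a 64x64 grayscale tumor-like patch. The layer configuration, output size, and absence of explicit shape conditioning are simplifying assumptions; this is not the paper's model.

```python
import torch
import torch.nn as nn

class StimulusGenerator(nn.Module):  # hypothetical name
    """Map a latent code z to a 64x64 grayscale patch in [-1, 1]."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                # 64x64
        )

    def forward(self, z):  # z: (batch, z_dim, 1, 1)
        return self.net(z)

stimulus = StimulusGenerator()(torch.randn(1, 100, 1, 1))  # shape (1, 1, 64, 64)
```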

Digital Library: EI
Published Online: January 2021
Pages 151-1 - 151-7, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

Computer simulations of an extended version of a neural model of lightness perception [1,2] are presented. The model provides a unitary account of several key aspects of spatial lightness phenomenology, including contrast and assimilation, and asymmetries in the strengths of lightness and darkness induction. It does this by invoking mechanisms that have also been shown to account for the overall magnitude of dynamic range compression in experiments involving lightness matches made to real-world surfaces [2]. The model assumptions are derived partly from parametric measurements of the visual responses of ON and OFF cells in the lateral geniculate nucleus of the macaque monkey [3,4] and partly from quantitative human psychophysical measurements. The model's computations and architecture are consistent with the properties of human visual neurophysiology as they are currently understood. Through the simulations, the neural model's predictions and behavior are contrasted with those of other lightness models, including Retinex theory [5] and the lightness filling-in models of Grossberg and his colleagues [6].
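As a rough illustration of the front-end such models build on, the following sketch computes half-wave rectified ON- and OFF-channel responses with a difference-of-Gaussians operator; the space constants are placeholder assumptions, not the fitted LGN parameters of [3,4].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def on_off_responses(image, sigma_center=1.0, sigma_surround=3.0):
    """Return rectified ON and OFF response maps for a grayscale image."""
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    dog = center - surround      # center-surround (difference of Gaussians)
    on = np.maximum(dog, 0.0)    # ON cells signal local luminance increments
    off = np.maximum(-dog, 0.0)  # OFF cells signal local luminance decrements
    return on, off
```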

Digital Library: EI
Published Online: January 2021
Pages 152-1 - 152-8, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

Visibility of image artifacts depends on the viewing conditions, such as display brightness and the distance to the display. However, most image and video quality metrics operate under the assumption of a single standard viewing condition, without considering luminance or viewing distance. To address this limitation, we isolate brightness and distance as the components affecting the visibility of artifacts and collect a new dataset for visually lossless image compression. The dataset includes images encoded with JPEG and WebP at the quality level that makes compression artifacts imperceptible to an average observer. The visibility thresholds are collected under two luminance conditions: 10 cd/m², simulating a dimmed mobile phone, and 220 cd/m², a typical peak luminance of modern computer displays; and two distance conditions: 30 and 60 pixels per visual degree. The dataset was used to evaluate existing image quality and visibility metrics on their ability to account for display brightness and viewing distance. Our experiments also include two deep neural network architectures proposed for controlling image compression for visually lossless coding.
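The distance conditions are stated in pixels per visual degree (ppd), which links physical viewing distance to pixel pitch. A back-of-the-envelope sketch of that geometry, with assumed display numbers:

```python
import math

def pixels_per_degree(viewing_distance_mm, pixel_pitch_mm):
    """Pixels subtended by one degree of visual angle at a given distance."""
    one_degree_mm = 2 * viewing_distance_mm * math.tan(math.radians(0.5))
    return one_degree_mm / pixel_pitch_mm

# With an assumed 0.25 mm pixel pitch, ~43 cm gives ~30 ppd and ~86 cm gives ~60 ppd.
print(pixels_per_degree(430, 0.25))  # ~30.0
print(pixels_per_degree(860, 0.25))  # ~60.0
```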

Digital Library: EI
Published Online: January 2021
Pages 153-1 - 153-7, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

Contrast sensitivity functions (CSFs) describe the smallest visible contrast across a range of stimulus and viewing parameters. CSFs are useful for imaging and video applications, as contrast thresholds describe the maximum color reproduction error that is invisible to the human observer. However, existing CSFs are limited. First, they are typically defined only for achromatic contrast. Second, even when they are defined for chromatic contrast, the thresholds are described along the cardinal dimensions of linear opponent color spaces, and are therefore difficult to relate to the dimensions of more commonly used color spaces, such as sRGB or CIE L*a*b*. Here, we adapt a recently proposed CSF into what we call color threshold functions (CTFs), which describe thresholds for color differences in more commonly used color spaces. We include color spaces with a standard dynamic range gamut (sRGB, YCbCr, CIE L*a*b*, CIE L*u*v*) and a high dynamic range gamut (PQ-RGB, PQ-YCbCr, and ICtCp). Using CTFs, we analyze these color spaces in terms of coding efficiency and contrast threshold uniformity.
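Expressing thresholds in a display-oriented space rests on transforms like the one below, which maps sRGB to CIE L*a*b* under a D65 white using the standard formulas and measures a small color difference as Euclidean ΔE*ab. The test colors are arbitrary; this is not the adapted CSF itself.

```python
import numpy as np

def srgb_to_lab(rgb):
    """rgb in [0, 1] -> CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],   # linear sRGB -> CIE XYZ
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin / np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

# Euclidean distance in L*a*b* approximates a perceptual difference (Delta E*ab).
delta_e = np.linalg.norm(srgb_to_lab([0.50, 0.50, 0.50]) - srgb_to_lab([0.52, 0.50, 0.50]))
```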

Digital Library: EI
Published Online: January 2021
Pages 155-1 - 155-8, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

The present study proposes a method to improve perceptual information hiding (PIH) in image scrambling approaches. Image scrambling has been used to overcome privacy issues in cloud-based machine learning. The performance of an image scrambling approach depends on its scrambling parameters, since these determine how well perceptual information is hidden. However, in existing image scrambling approaches, the effect of the scrambling parameters has not been quantitatively evaluated, which may lead to private information being exposed in public. To overcome this issue, we first investigate a metric suitable for assessing PIH and then propose a scrambling-parameter generation method that can be combined with image scrambling approaches. Experimental comparisons using several image quality assessment metrics show that the Learned Perceptual Image Patch Similarity (LPIPS) is well suited for assessing PIH. The proposed scrambling-parameter generation is also experimentally confirmed to be effective at hiding perceptual information while maintaining classification performance.
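As a concrete example of what a scrambling parameter is, the sketch below permutes image blocks with a seeded random generator, so the seed and block size together act as the parameter whose hiding strength could then be scored with a metric such as LPIPS. The scheme and values are illustrative, not the paper's method.

```python
import numpy as np

def scramble_blocks(image, block=16, seed=0):
    """Permute non-overlapping blocks of an HxW(xC) array with a keyed RNG."""
    h, w = image.shape[:2]
    rows, cols = h // block, w // block
    tiles = [image[r * block:(r + 1) * block, c * block:(c + 1) * block]
             for r in range(rows) for c in range(cols)]
    order = np.random.default_rng(seed).permutation(len(tiles))  # seed = scramble key
    out = image.copy()
    for i, j in enumerate(order):
        r, c = divmod(i, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[j]
    return out
```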

Digital Library: EI
Published Online: January 2021
Pages 156-1 - 156-10, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

The history of cartography has been marked by the endless search for the perfect form for representing the information on a spherical surface in the flat planar format of the printed page or computer screen. Dozens of cartographic formats have been proposed over the centuries, from ancient Greek times to the present. This is an issue not just for the mapping of the globe, but for all fields of science where spherical entities are found. The perceptual and representational advantages and drawbacks of many of these formats are considered, particularly the tension between a unified representation, which is always distorted in some dimension, and a minimally distorted representation, which can only be obtained by segmentation into sectorial patches. The use of these same formats for the mapping of spherical manifolds is evaluated across fields, from quantum physics through the mapping of the brain to the large-scale representation of the cosmos.

Digital Library: EI
Published Online: January 2021
Pages 157-1 - 157-8, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

Facial micro-expressions are quick, involuntary, low-intensity facial movements. Interest in detecting and recognizing micro-expressions arises from the fact that they can reveal a person's genuine hidden emotions. The small, rapid facial muscle movements make it difficult for a human not only to spot an occurring micro-expression but also to recognize the emotion correctly. Recent efforts to improve micro-expression recognition have focused on models and architectures. However, we take a step back and go to the root of the task: the data. We thoroughly analyze the input data and observe that some of it is noisy and possibly mislabelled. The authors of the micro-expression datasets have themselves acknowledged possible problems in the data labelling. Despite this, to the best of our knowledge, no attempts have been made to design micro-expression recognition models that take potentially mislabelled data into account. In this paper, we explore new methods that give noisy labels special treatment in an attempt to solve this problem. We propose a simple yet efficient label refurbishing method and a data cleaning method for handling noisy labels. The data cleaning method achieves state-of-the-art results on the MEGC2019 composite dataset.
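Label refurbishing is often realized as a soft bootstrapping loss, in which the training target blends the given (possibly noisy) label with the model's own prediction. A minimal PyTorch sketch of that general idea follows; the mixing weight beta is an assumed value, and this is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def refurbished_loss(logits, noisy_labels, num_classes, beta=0.8):
    """Cross-entropy against beta * one_hot(label) + (1 - beta) * model prediction."""
    one_hot = F.one_hot(noisy_labels, num_classes).float()
    with torch.no_grad():  # the refurbished target itself is not backpropagated
        pred = F.softmax(logits, dim=1)
    target = beta * one_hot + (1.0 - beta) * pred
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```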

Digital Library: EI
Published Online: January 2021
Pages 158-1 - 158-6, © Society for Imaging Science and Technology 2021
Volume 33
Issue 11

Concerns about head-mounted displays have led to numerous studies of their potential impact on the visual system. Yet none have investigated whether the use of Virtual Reality (VR) head-mounted displays, with their reduced field of view and visually demanding environments, could reduce the spatial spread of the attentional window. To address this question, we measured the useful field of view in 16 participants right before playing a VR game for 30 minutes and immediately afterwards. The test estimates the presentation-time threshold necessary for efficient perception of a target presented at the centre of the visual field together with a target presented in the periphery. It consists of three subtests of increasing difficulty. Data comparison did not show a significant difference between the pre-VR and post-VR sessions (subtest 2: F(1,11) = .7, p = .44; subtest 3: F(1,11) = .9, p = .38). However, participants' performance on central-target perception decreased in the most demanding subtest (F(1,11) = 8.1, p = .02). This result suggests that changes in spatial attention may occur after prolonged VR exposure.
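Presentation-time thresholds of this kind are commonly estimated with an adaptive staircase. The sketch below implements a generic 1-up/2-down staircase, which converges near 70.7% correct; the start value, step size, and reversal count are illustrative assumptions, not the protocol of the test used in the study.

```python
def staircase_threshold(run_trial, start_ms=200.0, step_ms=20.0,
                        floor_ms=10.0, n_reversals=8):
    """Estimate a duration threshold; run_trial(ms) returns True when correct."""
    duration, streak, last_dir, reversals = start_ms, 0, None, []
    while len(reversals) < n_reversals:
        if run_trial(duration):
            streak += 1
            if streak < 2:
                continue               # require 2 correct before stepping down
            streak, direction = 0, -1  # harder: shorter presentation
        else:
            streak, direction = 0, +1  # easier: longer presentation
        if last_dir is not None and direction != last_dir:
            reversals.append(duration)  # record a reversal of direction
        last_dir = direction
        duration = max(floor_ms, duration + direction * step_ms)
    return sum(reversals[-4:]) / 4      # threshold: mean of the last four reversals
```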

Digital Library: EI
Published Online: January 2021