Pages 193-1 - 193-3,  © 2025 Society for Imaging Science and Technology
Volume 37
Issue 11
Abstract

Optical see-through Augmented Reality (OST-AR) is a developing technology with exciting applications including medicine, industry, education, and entertainment. OST-AR creates a mix of virtual and real using an optical combiner that blends images and graphics with the real-world environment. Such an overlay of visual information is simultaneously futuristic and familiar: like the sci-fi navigation and communication interfaces in movies, but also much like banal reflections in glass windows. OST-AR’s transparent displays cause background bleed-through, which distorts color and contrast, yet virtual content is usually easily understandable. Perceptual scission, or the cognitive separation of layers, is an important mechanism, influenced by transparency, depth, parallax, and more, that helps us see what is real and what is virtual. In examples from Pepper’s Ghost, veiling luminance, mixed material modes, window shopping, and today’s OST-AR systems, transparency and scission provide surprising – and ordinary – results. Ongoing psychophysical research is addressing perceived characteristics of color, material, and images in OST-AR, testing and harnessing the perceptual effects of transparency and scission. Results help both understand the visual mechanisms and improve tomorrow’s AR systems.

Digital Library: EI
Published Online: February  2025
Pages 194-1 - 194-6,  © 2025, Society for Imaging Science and Technology
Volume 37
Issue 11
Abstract

The vergence of subjects was measured while they observed 360-degree images through a virtual reality (VR) goggle. In our previous experiment, we observed a shift in vergence in response to the perspective information presented in 360-degree images when static targets were displayed within them. The aim of this study was to investigate whether a moving target that an observer was gazing at could also guide their vergence. We measured vergence while subjects viewed moving targets in 360-degree images. In the experiment, the subjects were instructed to gaze at the ball displayed in the 360-degree images while wearing the VR goggle. Two different paths were generated for the ball: one approached the subjects from a distance (Near path), and the other remained at a distance from the subjects (Distant path). Two conditions were set for the moving distance (Short and Long); the moving distance of the left ball in the Long condition was twice that in the Short condition. These factors were combined to create four conditions (Near Short, Near Long, Distant Short, and Distant Long). In addition, two different movement times (5 s and 10 s) were designated for the ball’s movement in the Short conditions only; the movement time in the Long conditions was always 10 s. In total, six conditions were created. The results of the experiment demonstrated that vergence was larger when the ball was in close proximity to the subjects than when it was at a distance; that is, the perspective information of the 360-degree images shifted the subjects’ vergence. This suggests that the perspective information of the images provided observers with high-quality depth information that guided their vergence toward the target position. Furthermore, this effect was observed not only for static targets but also for moving targets.

Digital Library: EI
Published Online: February  2025
Pages 195-1 - 195-6,  © 2025 Society for Imaging Science and Technology
Volume 37
Issue 11
Abstract

Ubiquitous throughout the history of photography, white borders on photo prints and vintage Polaroids remain useful as new technologies including augmented reality emerge for general use. In contemporary optical see-through augmented reality (OST-AR) displays, physical transparency limits the visibility of dark stimuli. However, recent research shows that simple image manipulations, white borders and outer glows, have a strong visual effect, making dark objects appear darker and more opaque. In this work, the practical value of known, inter-related effects including lightness induction, glare illusion, Cornsweet illusion, and simultaneous contrast is explored. The results show promising improvements to visibility and visual quality in future OST-AR interfaces.

Digital Library: EI
Published Online: February  2025
Pages 199-1 - 199-8,  This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 37
Issue 11
Abstract

This study investigates how different camera perspectives presented in digital rear-view mirrors in vehicles, also known as Camera Monitor Systems, impact drivers’ distance judgment and decision-making in dynamic driving scenarios. The study examines the effects of (1) field of view and (2) camera height on drivers’ ability to judge distances to rearward vehicles and to select safe gaps in potentially hazardous situations. A controlled lab-based video experiment was conducted with 27 participants, who performed distance estimations and last-safe-gap selections using a simulated side-view mirror display. Participants viewed prerecorded driving scenarios with varying combinations of field of view (40°, 76°, 112°) and camera height (1 m, 2.3 m). No significant effects were found for camera height, but wider fields of view led to more accurate distance estimations. However, wider fields of view also increased the risk of potentially dangerous overestimations of distance, as evidenced by the last-safe-gap results; this suggests that a wider field of view leads to the selection of smaller and potentially risky gaps. Conversely, narrow fields of view resulted in underestimations of distance, potentially leading to overly cautious and less efficient driving decisions. These findings inform Camera Monitor System design guidelines for improving driver perception and road safety, reducing accidents caused by misjudged distances to other vehicles.

Digital Library: EI
Published Online: February  2025
Pages 202-1 - 202-9,  This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 37
Issue 11
Abstract

How colour emotion and colour vision deficiencies interact is barely researched, and existing studies report contradictory results. This study investigates the interaction between the two and tests whether colour emotion is affected by colour vision deficiencies. An online colour-emotion association questionnaire was conducted in two phases. The first phase had 60 participants, of whom 15 reported having colour vision deficiencies; the second had 18 participants, of whom 8 were identified as having colour vision deficiencies. In the questionnaires, participants selected emotions from the Geneva Emotion Wheel that they associated with 12 colour patches or 4 colour terms, and then rated the strength of each association from 1 to 5. The results indicate that colour vision deficiencies lead to reduced strength of colour-emotion associations and to a higher number of people who do not associate emotions with certain or all colours. Additionally, the colour vision deficiency group associated fewer emotions with each colour than the normal-vision group, and differences in specific colour-emotion associations were found between the two groups.

Digital Library: EI
Published Online: February  2025
Pages 203-1 - 203-5,  This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 37
Issue 11
Abstract

Pseudo-isochromatic plates with varying background lightness were used in a display-based colour vision experiment. As in previous work, the background was found to have a small effect on the ability of observers to identify the patterns on the plates. The time taken to recognize the patterns was significantly affected for both colour-normal and colour vision deficient observers, with darker backgrounds associated with shorter response times and white backgrounds having the longest response times, suggesting a greater degree of task difficulty. Possible reasons for the results when test plates were presented with a white background are suggested, including lower apparent colourfulness and greater difficulty in visual integration of the figure.

Digital Library: EI
Published Online: February  2025
Pages 204-1 - 204-7,  This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 37
Issue 11
Abstract

Image data is now commonly represented in any of a myriad of file formats, from proprietary raw formats to standards like JPEG and PNG. In addition to recording pixel values, most of these formats contain metadata that defines how pixel values translate into colors and brightnesses. Unfortunately, when non-trivial computations are used to transform pixel values, it is often necessary to change both the metadata and the file format. For example, a program called parsek aligns images and produces a “raw” super-resolution result that exhibits problematic shifts in color and tonality. The goal of the current work is to understand what causes these shifts and how they can be avoided. Toward that goal, the representations of color and tonal metadata in a variety of file formats are reviewed, as is the effectiveness of preserving appearance when the file format is changed. A reference color and tonal rendering is obtained by examining specific metadata and preview renderings embedded in several file formats.

Digital Library: EI
Published Online: February  2025
Pages 206-1 - 206-7,  © 2025, Society for Imaging Science and Technology
Volume 37
Issue 11
Abstract

Rudd and Zemach analyzed brightness/lightness matches performed with disk/annulus stimuli under four contrast polarity conditions, in which the disk was either a luminance increment or decrement with respect to the annulus, and the annulus was either an increment or decrement with respect to the background. In all four cases, the disk brightness—measured by the luminance of a matching disk—exhibited a parabolic dependence on the annulus luminance when plotted on a log-log scale. Rudd further showed that the shape of this parabolic relationship can be influenced by instructions to match the disk’s brightness (perceived luminance), brightness contrast (perceived disk/annulus luminance ratio), or lightness (perceived reflectance) under different assumptions about the illumination. Here, I compare the results of those experiments to results of other recent experiments in which the match disk used to measure the target disk appearance was not surrounded by an annulus. I model the entire body of data with a neural model involving edge integration and contrast gain control, in which top-down influences controlling the weights given to edges in the edge integration process act either before or after the contrast gain control stage of the model, depending on the stimulus configuration and the observer’s assumptions about the nature of the illumination.
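The edge-integration idea referred to above can be illustrated with a toy computation: the predicted log-lightness of the disk is a weighted sum of log luminance ratios across the edges encountered when moving from the background inward to the disk. The luminances and weights below are invented for illustration only (they are not the paper's fitted parameters), and the contrast gain control stage is omitted.

```python
import math

# Hypothetical luminances (cd/m^2) for background, annulus, and disk.
L_bg, L_ann, L_disk = 100.0, 40.0, 10.0

# Hypothetical edge weights; in edge-integration models the edge nearer
# the target typically receives the larger weight.
w_outer, w_inner = 0.3, 1.0

# Weighted sum of log edge ratios, integrating from background to disk.
log_lightness = (w_outer * math.log10(L_ann / L_bg)
                 + w_inner * math.log10(L_disk / L_ann))
print(round(log_lightness, 3))  # a decremental disk yields a negative value
```

Because both edges here are decrements, the integrated value is negative; changing the polarity of either edge (e.g., making the annulus brighter than the background) changes the sign of that edge's contribution, which is how such models capture the four contrast polarity conditions.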

Digital Library: EI
Published Online: February  2025
Pages 207-1 - 207-10,  © 2025, Society for Imaging Science and Technology
Volume 37
Issue 11
Abstract

At HVEI-2012, I presented a neurobiologically-based model for trichromatic color sensations in humans, mapping the neural substrate for color sensations to V1-L4—the thalamic recipient layer of the primary visual cortex. In this paper, I propose that V1-L4 itself consists of three distinct sub-layers that directly correspond to the three primary color sensations: blue, red, and green. Furthermore, I apply this model to three aspects of color vision: the three-dimensional (3D) color solid, dichromatism, and ocular agnosticism. In more detail: (1) 3D color solid: V1-L4 is known to exhibit a gradient of cell densities from its outermost layer (i.e., its pia side) to its innermost layer (i.e., its white matter side). Taken together with the proposition that the population size of a cell assembly directly corresponds with the magnitude of a color sensation, it can be inferred that the neurobiologically-based color solid is a tilted cuboid. (2) Chromatic color blindness: Using deuteranopia as an example, at the retinal level, M-cones are lost and replaced by L-cones. At the cortical level, however, deuteranopia manifests as a fusion of the two bottom layers of V1-L4. (3) Ocular agnosticism: Although color sensation is monocular, we are normally not aware of which eye we are seeing with. This visual phenomenon can be explained by the nature of ocular integration within V1-L4. A neurobiologically-based model for human color sensations could significantly contribute to future engineering efforts aimed at enhancing human color experiences.

Digital Library: EI
Published Online: February  2025
Pages 209-1 - 209-8,  © 2025, Society for Imaging Science and Technology
Volume 37
Issue 11
Abstract

The impact of image compression algorithms varies significantly across image contents in a way that is challenging to predict. The ongoing trend towards richer visual content, e.g. High Dynamic Range (HDR) and Wide Color Gamut, increases both the relevance and the complexity of this issue. This study first analyzes grades of perceived quality of compressed images to determine what proportion of their variance is due to compression level, image content, and compression type, respectively. An ANOVA on 3 HDR datasets indicates that 45-62% of the variance of the subjective evaluations is due to the compression level and 7-10% to the image content. Secondly, we present a framework for identifying which features calculated on the source images efficiently predict the contribution of image content to grades of perceived quality of compressed images. We build on traditional regression analysis by adding an adaptation of the recent Model Class Reliance approach. In an experiment on 6 published datasets of subjective quality grades of compressed images, OLS-R and KNN models predicting the grades are built using two input variables: the compression level and one feature characterizing the original content. The Empirical Model Reliance is then calculated to measure the importance of the content feature in the regression model, as well as the Model Class Reliance to bound the impact of a reduced fit to the training data, i.e. indicating robustness towards generalization. Results show that traditional regression analysis alone is not robust for identifying the most relevant features, and confirm that while the most useful features for SDR are SI/block-contrast measures, other features such as dynamic range and color features (colorfulness or saturation) characterize HDR content best.
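The kind of variance decomposition reported above can be illustrated with a toy eta-squared computation (the proportion of total variance attributable to a factor). The synthetic grades, effect sizes, and group structure below are invented for illustration; this is not the study's data or its ANOVA.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic quality grades for 3 compression levels x 2 image contents,
# 10 observations per cell: a strong level effect, a weaker content
# effect, plus observer noise. All values are illustrative only.
levels = np.repeat([0, 1, 2], 20)
contents = np.tile(np.repeat([0, 1], 10), 3)
grades = 5.0 - 1.5 * levels + 0.4 * contents + rng.normal(0, 0.3, 60)

def eta_squared(y, factor):
    """Between-group sum of squares divided by total sum of squares."""
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_between = sum(
        (factor == g).sum() * (y[factor == g].mean() - grand) ** 2
        for g in np.unique(factor)
    )
    return ss_between / ss_total

print(f"compression level: {eta_squared(grades, levels):.2f}")
print(f"image content:     {eta_squared(grades, contents):.2f}")
```

With these made-up effect sizes the compression level dominates the variance, mirroring the qualitative pattern (compression level explaining much more variance than content) described in the abstract.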

Digital Library: EI
Published Online: February  2025