Pages 1 - 7, © Society for Imaging Science and Technology 2018
Digital Library: EI
Published Online: January 2018
Pages 1 - 11, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

The Field of View (FoV), the Field of Resolution, and the Field of Contrast Sensitivity are three progressively more detailed descriptions of human spatial sensitivity at angles relative to fixation. The FoV is the range of visual angles that can be sensed by an eye. The Field of Resolution describes the highest spatial frequency that can be sensed at each angle. The Field of Contrast Sensitivity describes contrast sensitivity at each spatial frequency and each visual angle. These three concepts can be unified with the aid of the Pyramid of Visibility, a simplified model of contrast sensitivity as a function of spatial frequency, temporal frequency, and luminance or retinal illuminance. This unified model yields simple yet powerful observations about the Field of Contrast Sensitivity. I have fit this model to a number of published measurements of peripheral contrast sensitivity, which allows the validity of the model to be tested and its parameters to be estimated. Although the model is a simplification, I believe it provides an invaluable guide for a range of applications in visual technology.
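The Pyramid of Visibility treats log contrast sensitivity as linear in spatial frequency, temporal frequency, and log luminance. A minimal sketch of that functional form is below; the coefficient values are illustrative placeholders, not the fitted parameters reported in the paper.

```python
import math

def log_sensitivity(f_spatial, f_temporal, log_luminance,
                    c0=5.0, cf=-0.5, cw=-0.05, cl=0.8):
    """Pyramid of Visibility: log10 contrast sensitivity is linear in
    spatial frequency (cpd), temporal frequency (Hz), and log10 luminance.
    Coefficients here are illustrative placeholders, not fitted values."""
    return c0 + cf * f_spatial + cw * f_temporal + cl * log_luminance

def sensitivity(f_spatial, f_temporal, luminance_cd_m2):
    """Contrast sensitivity (reciprocal of threshold contrast)."""
    return 10.0 ** log_sensitivity(f_spatial, f_temporal,
                                   math.log10(luminance_cd_m2))
```

With negative frequency coefficients and a positive luminance coefficient, the sketch reproduces the model's qualitative behavior: sensitivity falls with spatial frequency and rises with luminance.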

Published Online: January 2018
Pages 1 - 10, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

In this paper, we show how specific properties of human visual perception can be used to improve perceived image quality, often without the need to enhance physical display parameters. The focus of our work is on depth perception and the improvement of viewing comfort on stereoscopic and multiscopic displays, by exploiting complex interactions between monocular depth cues (such as motion parallax) and binocular vision. We also consider stereoscopic cinematographic applications, where proper handling of scene cuts, reflective/refractive objects, and film grain requires special attention to provide the required level of perceptual quality. Finally, we discuss gaze-driven depth manipulations to enhance perceived scene depth, and we present our predictor for saccade landing position, which significantly reduces the undesired effects of inherent system latency in foveated rendering applications.

Published Online: January 2018
Pages 1 - 11, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

Perceptual Intelligence (π) concerns the extraordinary creative genius of the mind's eye, the mind's ear, and so on. Using π, humans actively construct their perceptions of the world. We need a thorough interdisciplinary understanding of these mechanisms in order to be able to design perceptually intelligent products, including tools, systems, and services. Using a new science of lighting as a vehicle, we concretize this scientifically informed design approach and its possibilities and challenges in a dynamic, complex world.

Published Online: January 2018
Pages 1 - 6, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

This study aims at understanding the effects of homogeneous visual field defects on ocular movements and exploratory patterns, according to whether the defect is peripheral or central. A gaze-contingent paradigm was implemented to display images to participants while masking, in real time, either central or peripheral areas of the participant's field of view. Results indicate a strong relation between saccade amplitudes and mask sizes. Fixations are predominantly directed toward parts of the scene that are left unmasked. In a second set of analyses, we defined the relative angle as the angle between a saccade vector and the preceding one. We show that backward saccades are produced more frequently under central masking. Under peripheral masking, we observe that participants explore the scene in a sequential scanning pattern, seldom foveating back to an area attended in the previous seconds. We discuss how the masking conditions affect ocular behaviours in terms of exploratory patterns, as well as how relative angles unveil characteristic information distinguishing the two masking conditions from each other and from control subjects.
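The relative-angle measure described above can be sketched as follows. This is a generic implementation, not the authors' analysis code; fixation positions are assumed to be 2D coordinates, and an angle near 180° marks a backward (return) saccade.

```python
import math

def relative_angles(fixations):
    """Given a list of (x, y) fixation positions, return the relative
    angle in degrees between each saccade vector and the preceding one.
    0 deg = same direction; 180 deg = backward saccade."""
    angles = []
    for i in range(2, len(fixations)):
        # previous saccade vector a, current saccade vector b
        ax = fixations[i - 1][0] - fixations[i - 2][0]
        ay = fixations[i - 1][1] - fixations[i - 2][1]
        bx = fixations[i][0] - fixations[i - 1][0]
        by = fixations[i][1] - fixations[i - 1][1]
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        if na == 0 or nb == 0:
            continue  # zero-length saccade: angle undefined
        cos_t = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
        angles.append(math.degrees(math.acos(cos_t)))
    return angles
```

For example, three collinear fixations in the same direction give a relative angle of 0°, while a fixation sequence that returns to its starting point gives 180°.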

Published Online: January 2018
Pages 1 - 4, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

A dual-channel spatial-temporal model for contrast detection is shown to improve the predictions for the spatial-temporal ModelFest data over those of the simplified Barten model.

Published Online: January 2018
Pages 1 - 6, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

Computing the dynamic range of high dynamic range (HDR) content is an important procedure when selecting test material, designing and validating algorithms, or analyzing aesthetic attributes of HDR content. Dynamic range can be computed at the pixel level, measured through subjective tests, or predicted using a mathematical model; however, all of these methods have certain limitations. This paper investigates whether the dynamic range of modeled images with no semantic information, but with the same first-order statistics as the original natural content, is perceived the same as that of the corresponding natural images. If so, it would be possible to improve the perceived dynamic range (PDR) predictor model by using additional objective metrics more suitable for such synthetic content. Within the subjective study, three experiments were conducted with 43 participants. The results show a significant correlation between the mean opinion scores for the two image groups. Nevertheless, natural images still seem to provide better cues for the evaluation of PDR.
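A pixel-based dynamic range computation of the kind mentioned above can be sketched as below. The percentile-based exclusion of outlier pixels is an assumed convention for illustration, not necessarily the definition used in the paper.

```python
import math

def pixel_dynamic_range(luminances, low_pct=1.0, high_pct=99.0):
    """Pixel-based dynamic range in log10 units (orders of magnitude),
    using robust percentiles to exclude outlier pixels. Non-positive
    luminance values are ignored since log10 is undefined for them."""
    vals = sorted(l for l in luminances if l > 0)
    if not vals:
        raise ValueError("no positive luminance values")
    lo = vals[int(low_pct / 100.0 * (len(vals) - 1))]
    hi = vals[int(high_pct / 100.0 * (len(vals) - 1))]
    return math.log10(hi / lo)
```

An image whose robust luminance extremes are 0.01 and 1000 cd/m² would report a dynamic range of 5 log10 units under this convention.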

Published Online: January 2018
Pages 1 - 6, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

The dynamic range of real-world scenes may vary from around 10² to greater than 10⁷, whilst the dynamic range of monitors may vary from 10² to 10⁵. In this paper, we investigate the impact of the dynamic range ratio (DRratio) between the captured scene and the displayed image upon the value of system gamma preferred by subjects (a simple global power-law transformation applied to the image). To do so, we present an image dataset with a broad distribution of dynamic ranges on various sub-ranges of a SIM2 monitor. The full dynamic range of the monitor is 10⁵, and we present images using either the full range, 75%, or 50% of this, while maintaining a fixed mid-luminance level. We find that the preferred system gamma is inversely correlated with the DRratio and, importantly, is one (linear) when the DRratio is one. This strongly suggests that the visual system is optimized for processing images only when the dynamic range is presented correctly. The DRratio is not the only factor: by using 50% of the monitor dynamic range and using either the lower, middle, or upper portion of the monitor, we show that increasing the overall luminance level also increases the preferred system gamma, although to a lesser extent than the DRratio.
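The global power-law (system gamma) transformation can be sketched minimally as below. The normalization of luminances to [0, 1] and the expression of DRratio as a ratio of log10 ranges are assumptions for illustration, not the paper's exact pipeline.

```python
def apply_system_gamma(luminances, gamma):
    """Apply a global power-law (system gamma) to normalized scene
    luminances in [0, 1]: out = in ** gamma. gamma == 1 leaves the
    image linear, matching the preferred setting when DRratio == 1."""
    return [l ** gamma for l in luminances]

def dr_ratio(scene_log_range, display_log_range):
    """Dynamic range ratio between the captured scene and the displayed
    image, expressed here as a ratio of log10 ranges (an assumed
    convention for illustration)."""
    return scene_log_range / display_log_range
```

A gamma below one lifts mid-tones (compressing scene range into a smaller display range), while a gamma above one deepens them, which is the direction of adjustment the preference results describe.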

Published Online: January 2018
Pages 1 - 8, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

Viewing of High Dynamic Range (HDR) video on capable displays poses many questions for our understanding of perceptual preference and vision science. One of the most fundamental aspects is the role played by light adaptation, as HDR content and displays allow for substantially larger changes in light adaptation. In contrast, the traditional formats of standard dynamic range (SDR), spanning at best 3 log₁₀ units, kept the luminance ranges well within the steady-state range of photoreceptor responses [14]. HDR video systems exceed the 3 log₁₀ unit luminance range; the range can be as high as 5 log₁₀ units for professional displays, and over 6 log₁₀ units for laboratory research displays. In addition to the well-understood photoreceptor component of light adaptation, there is the pupillary component. While its light modulation spans a much smaller range than photoreceptor adaptation, it nevertheless has engineering consequences and has been cited as a cause of putative discomfort with some HDR viewing. To better understand its role in light adaptation and discomfort, this study measured pupil behavior during naturalistic viewing of HDR video on a professional display, and performed various analyses.
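The pupillary component of light adaptation can be illustrated with a classic steady-state approximation. The sketch below uses a form often quoted from Moon & Spencer (1944); it is an illustrative baseline only — the study itself measured pupil behavior directly rather than using a formula.

```python
import math

def pupil_diameter_mm(luminance_cd_m2):
    """Approximate steady-state pupil diameter (mm) as a function of
    adapting luminance (cd/m^2), in the form often quoted from
    Moon & Spencer (1944). Illustrative assumption, not the paper's
    measurement method: diameter shrinks as luminance rises."""
    return 4.9 - 3.0 * math.tanh(0.4 * math.log10(luminance_cd_m2))
```

Because tanh saturates, the modeled diameter varies over only a few millimetres across the 5-6 log₁₀ unit luminance range of HDR displays, which is why the pupillary contribution to adaptation is much smaller than the photoreceptor contribution.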

Published Online: January 2018
Pages 1 - 7, © Society for Imaging Science and Technology 2018
Volume 30
Issue 14

First-Person Videos (FPVs) captured by body-mounted cameras are usually too shaky to watch comfortably. Many approaches, both software-based and hardware-based, have been proposed for stabilization. Most of them are designed to maximize the stability of videos. However, according to our previous work [1], FPVs need to be carefully stabilized to preserve their First-Person Motion Information (FPMI). To stabilize FPVs appropriately, we propose a new video stability estimator for FPVs, Viewing Experience under a "Central bias + Uniform" model (VECU), building on [1]. We first discuss stability estimators and their role in applications. Based on this discussion and our application target, we design a subjective test using real-scene videos with synthetic camera motions to help improve the human perception model proposed in [1]. The proposed estimator VECU measures absolute stability, and the experimental results show that it has a good interval scale and outperforms existing stability estimators in predicting subjective scores.

Published Online: January 2018