 
Pages 2 - 18,  © Society for Imaging Science and Technology 2020
Digital Library: CIC
Published Online: November  2020
Pages 1 - 6,  © Society for Imaging Science and Technology 2020
Volume 28
Issue 1

We model color contrast sensitivity for Gabor patches as a function of spatial frequency, background luminance and chromaticity, modulation direction in color space, and stimulus size. To fit the model parameters, we combine data from five independent datasets, which lets us make predictions for background luminance levels between 0.0002 cd/m2 and 10,000 cd/m2 and for spatial frequencies between 0.06 cpd and 32 cpd. The data are well explained by two models: one that encodes cone contrast and one that encodes postreceptoral, opponent-color contrast. Our intention is to create practical models that explain detection performance well for natural viewing across a wide range of conditions. Because our models are fitted to data spanning a very large range of luminance, they can find applications in modeling visual performance for high-dynamic-range and augmented reality displays.
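As a generic illustration of the kind of function such models are built from (not the authors' fitted model), the sketch below evaluates a log-parabola contrast sensitivity function: a Gaussian in log spatial frequency, whose reciprocal gives the threshold contrast. All parameter names and values are placeholders for illustration.

```python
import math

def log_parabola_csf(f, s_peak=200.0, f_peak=3.0, sigma=0.8):
    """Contrast sensitivity modeled as a Gaussian in log spatial frequency
    (a parabola in log-log coordinates).

    f      : spatial frequency in cycles per degree
    s_peak : peak sensitivity; 1/s is the detection-threshold contrast
    f_peak : frequency of peak sensitivity in cpd
    sigma  : bandwidth in natural-log units
    Parameter values are illustrative placeholders, not fitted values.
    """
    return s_peak * math.exp(-(math.log(f / f_peak)) ** 2 / (2 * sigma ** 2))

# Threshold contrast at 4 cpd is the reciprocal of sensitivity there.
threshold_contrast = 1.0 / log_parabola_csf(4.0)
```

In a full model of the kind the abstract describes, `s_peak`, `f_peak`, and `sigma` would themselves depend on background luminance, chromaticity, colour direction, and stimulus size.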

Pages 7 - 12,  © Society for Imaging Science and Technology 2020

White lighting and neutral-appearing objects are essential in numerous color applications. In particular, setting or tuning a reference white point is a key procedure in both camera and display applications. Various studies on observer metamerism have pointed out that noticeable color disagreements between observers appear mainly in neutral colors. It is therefore vital to understand how observer metamers of white (or neutral) appear in different colors to different observers. Most observers who participated in a visual demonstration reported that white observer metamers appear pinkish or greenish but rarely yellowish or bluish. In this paper, the intriguing question "Why are observer metamers of white usually pinkish or greenish?" is addressed through simulations. We also analyze which physiological factors play an essential role in this phenomenon and why humans are less likely to perceive yellowish or bluish observer metamers of white.

Pages 13 - 18,  © Society for Imaging Science and Technology 2020

In this paper, two psychophysical experiments were conducted to explore the effect of peak luminance on perceptual color gamut volume. The two experiments used different image-rendering methods: clipping at the peak luminance and scaling the image luminance to the display's peak luminance capability. The perceptual color gamut volume showed a close linear relationship to the logarithm of peak luminance. The results were not consistent with the computational 3D color appearance gamut volume from previous work; the difference is suspected to arise from the different perspectives taken by the computational 3D color appearance gamut volume and the experimentally measured perceptual color gamut volume.
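The reported "close linear relationship to the log scale of peak luminance" amounts to fitting a line to gamut volume against log10 peak luminance. A minimal sketch of that fit, using clearly hypothetical numbers (NOT the paper's measurements):

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b, closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical peak luminances (cd/m^2) and gamut-volume readings,
# placeholders for illustration only, not the paper's data.
peaks   = [100.0, 300.0, 1000.0, 3000.0]
volumes = [1.00, 1.18, 1.40, 1.58]        # arbitrary relative units

log_peaks = [math.log10(p) for p in peaks]
slope, intercept = fit_line(log_peaks, volumes)
# A positive slope reproduces the qualitative finding: gamut volume
# grows roughly linearly with log10 peak luminance.
```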

Pages 19 - 24,  © Society for Imaging Science and Technology 2020

Quality assessment is performed using a variety of quality attributes, so it is crucial to identify the attributes relevant to a given task. We focus on quality assessment of 2.5D prints and its quality attributes. An experiment with observers identified the attributes most frequently used to judge the quality of 2.5D prints, both with and without reference images. Colour, sharpness, elevation, lightness, and naturalness are the five most frequently used attributes in both the with-reference and without-reference cases. We also observed that content, previous experience and knowledge, and aesthetic appearance may influence quality judgements.

Pages 25 - 29,  © Society for Imaging Science and Technology 2020

Previous research has shown the perceptual importance of skin tone and how it contributes to perceived facial attractiveness, yet facial-colour perception may vary across ethnic groups. This research was designed to explore the cross-cultural effects of facial skin tone on perceived attractiveness between Caucasian (CA) and Chinese (CH) observers. Eighty images of real human faces were assessed for facial attractiveness by the two groups of observers using the categorical judgment method. The results showed broadly similar preferences overall but fine-scale differences in the perception of own-ethnicity and other-ethnicity facial images. Both groups of observers tended to use different criteria when judging the facial tone of different ethnic groups. Our findings show aesthetic differences in perception across cultures and underline the important role of ethnic differences in skin tone preference.

Pages 30 - 35,  © Society for Imaging Science and Technology 2020

The appearance mode of an object, whether it appears self-luminous or reflective, depends on its luminance and its surroundings. This research aims to verify whether the appearance mode of a spherical lamp ("on" / "off") and perceived room brightness are influenced by the presentation medium: real 3D scenes (R-3D), rendered virtual 3D scenes (VR-3D) presented on a head-mounted display (HMD), and 2D scenes presented on a regular display (D-2D). Twenty observers evaluated the lamp's appearance mode at different luminance values and rated the apparent room brightness of the scene under four viewing conditions: R-3D and D-2D with warm-white scene lighting, and D-2D and VR-3D with cool-white scene lighting. Threshold luminance, defined as the luminance at which a lamp has a 50-50 chance of being perceived as switched on, showed large observer variability, which might originate from differences in the observers' understanding of the lamp material and in their strategies for judging the appearance mode. Threshold luminance was significantly lower, and room brightness significantly higher, for the virtual reality scene than for the other conditions. However, no evidence was found that the appearance mode of a spherical lamp can reliably predict room brightness.
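A 50-50 threshold of the kind defined above is typically estimated by fitting a psychometric function to the binary judgments. The sketch below fits a logistic function by maximum likelihood with a coarse grid search; the response data are hypothetical placeholders, not the study's results, and the grid-search fit stands in for whatever estimation method the authors used.

```python
import math

# Hypothetical "lamp looks on" judgments: (log10 luminance, proportion "on",
# number of trials). Placeholder data for illustration only.
data = [(-1.0, 0.05, 20), (-0.5, 0.20, 20), (0.0, 0.55, 20),
        (0.5, 0.85, 20), (1.0, 0.95, 20)]

def neg_log_likelihood(mu, slope):
    """Binomial negative log-likelihood of a logistic psychometric function
    with 50% point mu and steepness slope."""
    nll = 0.0
    for x, p_obs, n in data:
        p = 1.0 / (1.0 + math.exp(-slope * (x - mu)))
        p = min(max(p, 1e-9), 1.0 - 1e-9)   # guard against log(0)
        k = round(p_obs * n)                # observed "on" responses
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

# Coarse grid search for the maximum-likelihood threshold and slope.
mu_hat, slope_hat = min(
    ((m / 100.0, s / 10.0) for m in range(-100, 101) for s in range(5, 81)),
    key=lambda params: neg_log_likelihood(*params),
)
# mu_hat estimates the luminance (in log10 cd/m^2) at which the lamp is
# judged "on" half the time, i.e. the threshold luminance.
```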

Pages 36 - 41,  © Society for Imaging Science and Technology 2020

Over the years, many chromatic adaptation transforms (CATs), typically based on the von Kries coefficient rule, have been developed to predict corresponding colors under different illuminants. However, these CATs were derived for uniform stimuli surrounded by a uniform adapting field. To investigate the adaptation state under spatially complex illumination, an achromatic matching experiment was conducted under dual lighting conditions with three color pairs and two transition types. The transition type was found to affect both the equivalent chromaticity and the degree of adaptation. These results can help build a comprehensive von Kries-based CAT model that accounts for the spatial complexity of illumination.
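The von Kries coefficient rule referred to above scales each cone response by the ratio of the destination and source white points. A minimal sketch, with partial adaptation handled by a simple linear blend of the gains toward identity; this is one common convention for the degree of adaptation D, not necessarily the authors' formulation:

```python
def von_kries_adapt(lms, lms_src_white, lms_dst_white, D=1.0):
    """von Kries chromatic adaptation: per-cone gain scaling.

    lms           : stimulus cone responses (L, M, S)
    lms_src_white : white point under the source illuminant, in LMS
    lms_dst_white : white point under the destination illuminant, in LMS
    D             : degree of adaptation, 0 (none) to 1 (complete);
                    the linear blend toward unit gain is an illustrative
                    convention, not a standardised formula
    Returns the corresponding cone responses under the destination illuminant.
    """
    return tuple(
        v * (D * (dw / sw) + (1.0 - D))
        for v, sw, dw in zip(lms, lms_src_white, lms_dst_white)
    )
```

With D = 1 the source white maps exactly onto the destination white; with D = 0 the stimulus is returned unchanged, mimicking a completely unadapted observer.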

Pages 42 - 48,  © Society for Imaging Science and Technology 2020

Banding is a quantisation artefact that appears when a low-texture region of an image is coded with insufficient bit depth. Banding artefacts are well studied for standard dynamic range (SDR) but not well understood for high dynamic range (HDR). To address this, we conducted a psychophysical experiment to characterise how well human observers see banding artefacts across a wide range of luminances (0.1 cd/m2–10,000 cd/m2). The stimuli were gradients modulated along three colour directions: black-white, red-green, and yellow-violet. The visibility threshold for banding artefacts was highest at 0.1 cd/m2, decreased with increasing luminance up to 100 cd/m2, and then remained at the same level up to 10,000 cd/m2. We used the results to develop and validate a model of banding-artefact detection. The model relies on the contrast sensitivity function (CSF) of the visual system and hence predicts the visibility of banding artefacts in a perceptually accurate way.
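To see why bit depth governs banding, consider the Weber contrast of one quantisation step of a gradient. The toy calculation below assumes a linear luminance encoding and a fixed 1% visibility threshold; both are deliberate simplifications for illustration, not the paper's CSF-based model (and real HDR pipelines encode with a transfer function such as PQ rather than linearly).

```python
def quantise(value, bits):
    """Quantise a normalised value in [0, 1] to the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

def banding_step_contrast(l_low, l_high, bits):
    """Weber contrast of one quantisation step of a gradient spanning
    [l_low, l_high] cd/m^2, encoded linearly at the given bit depth.
    The worst case is at the darkest band, where the step is largest
    relative to the local luminance."""
    step = (l_high - l_low) / ((1 << bits) - 1)   # luminance per code value
    return step / l_low

# Toy visibility check with a placeholder 1% contrast threshold:
# a 10-100 cd/m^2 gradient bands at 8 bits but not at 12 bits.
visible_8bit  = banding_step_contrast(10.0, 100.0, 8)  > 0.01
visible_12bit = banding_step_contrast(10.0, 100.0, 12) > 0.01
```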

Pages 49 - 64,  © Society for Imaging Science and Technology 2020

We learn the colors of objects and scenes through everyday experience. The colors of things we see most frequently are defined as memory colors. These help us communicate, identify objects, detect crop ripeness or disease, evaluate the weather, and recognize emotions. Color quality has become a priority for the smartphone and camera industry. Color quality assessment (CQA) provides insight into user preference and can be used to improve camera and display pipelines. The memory colors of important content such as human skin and food drive perceived color quality, so understanding memory color preference is critical to understanding perceived color quality. In this study, grass, sky, beach sand, green pepper, and skin were used for memory color assessment. Observers were asked to adjust patches with four different textures, including computed textures and real image content, according to their memory. The results show that observers adjust the real-image patch most consistently; where the artificially generated textures closely resembled the real image content, particularly for the sky stimulus, which resembled a flat color patch, participants were also able to adjust each sample more consistently to their memory color. To understand the relation between memory color and color quality preference for camera images, a second experiment was performed: a paired comparison for familiar objects with five color quality images per object. Two of the five images were rendered from the results of the memory color assessment experiment; the other three were the most preferred color quality images from a rank-order CQA. This experiment was performed by naïve observers, and a validation experiment was performed by Munsell Color Science Laboratory observers. Color image rendering preference varies with the memory image content. For most of the colors, people prefer the top three camera color quality images from the rank-order CQA; for grass, however, the color quality preference is highest for one of the memory color assessment results. Overall, in this experiment, images rendered to reflect memory color do not match observer preference.
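Paired-comparison preference data of this kind are commonly scaled with Thurstone Case V, which converts win proportions to z-scores on an interval scale. The sketch below uses hypothetical proportions, not the paper's data, and Case V is a standard analysis technique, not necessarily the one the authors applied.

```python
from statistics import NormalDist

def thurstone_case_v(win_matrix):
    """Thurstone Case V scaling of paired-comparison data.

    win_matrix[i][j] is the proportion of trials in which stimulus i was
    preferred over stimulus j (the diagonal is ignored). Returns one
    interval-scale quality score per stimulus: the mean z-score of its
    win proportions against every rival.
    """
    nd = NormalDist()
    n = len(win_matrix)
    scores = []
    for i in range(n):
        zs = [nd.inv_cdf(min(max(win_matrix[i][j], 0.01), 0.99))
              for j in range(n) if j != i]     # clamp: inv_cdf(0/1) is infinite
        scores.append(sum(zs) / len(zs))
    return scores

# Hypothetical preference proportions for three renderings of one scene.
p = [[0.5, 0.7, 0.9],
     [0.3, 0.5, 0.8],
     [0.1, 0.2, 0.5]]
scores = thurstone_case_v(p)   # rendering 0 scores highest, 2 lowest
```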

Published Online: September  2020
