Pages 2 - 8,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

Visual attention models (VAMs) try to mimic the human visual system in distinguishing salient regions from non-salient ones in a scene. Only a few attention models propose to detect salient motion in surveillance videos. These models utilize static features such as color, intensity, orientation, and faces, together with dynamic features such as motion, to detect the most salient regions in videos. This motivated us to propose a compression algorithm based on a visual attention model developed specifically for surveillance videos. In this paper we use a state-of-the-art visual attention model that combines bottom-up, top-down, and motion cues. Based on its similarity with experimentally obtained gaze maps, evaluated both visually and with quantitative measures, a compression model based on this attention model is proposed for H.264/AVC encoded videos. Our experimental results show that we can encode videos with the same or better quality than those obtained with the standard baseline profile of the JM 18.0 reference encoder, while reducing the file size by up to 22%.
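The mechanics of attention-guided bit allocation can be illustrated with a minimal sketch: fuse normalized feature maps (bottom-up, top-down, motion) into one saliency map, then offset each macroblock's quantization parameter (QP) so that low-saliency blocks are quantized more coarsely. The fusion weights, base QP, and offset range below are hypothetical illustration parameters, not the authors' implementation.

```python
import numpy as np

def fuse_saliency(bottom_up, top_down, motion, weights=(0.4, 0.3, 0.3)):
    """Fuse normalized feature maps into one saliency map (hypothetical weights)."""
    maps = [bottom_up, top_down, motion]
    fused = sum(w * (m - m.min()) / (m.max() - m.min() + 1e-9)
                for w, m in zip(weights, maps))
    return fused / fused.max()

def qp_offsets(saliency, block=16, base_qp=26, max_offset=8):
    """Map per-macroblock mean saliency to a QP: lower saliency -> higher QP (fewer bits)."""
    h, w = saliency.shape
    qps = np.empty((h // block, w // block), dtype=int)
    for i in range(h // block):
        for j in range(w // block):
            s = saliency[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            qps[i, j] = base_qp + round((1.0 - s) * max_offset)
    return qps
```

In an H.264/AVC encoder these per-macroblock QPs would be fed to the rate controller; here they are simply returned for inspection.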

Digital Library: CIC
Published Online: January  2012
Pages 9 - 14,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

In this paper we investigate a method for selectively modifying a video stream using a color contrast sensitivity model based on the human visual system. The model identifies regions of high variance with frame-to-frame differences that are visually imperceptible to a human observer with normal color vision. The model is based on CIELAB and the CIE ΔE94 color difference formula, and takes advantage of the nature of frame-based progressive video coding. The method was found to achieve up to 35% improvement in data compression without perceptible degradation of the video quality. As expected, the amount of compression improvement obtained depends on the type of video content being compressed.
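The perceptibility test at the heart of such a model can be sketched with the CIE ΔE*94 formula (graphic-arts weighting factors). The decision threshold below is a hypothetical choice for illustration, not a value from the paper.

```python
import math

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE Delta E 1994 colour difference (graphic-arts weighting)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH2 = max(da*da + db*db - dC*dC, 0.0)          # squared hue difference
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL*SL))**2 + (dC / (kC*SC))**2 + dH2 / (kH*SH)**2)

def imperceptible(lab1, lab2, threshold=1.0):
    """Treat a frame-to-frame difference below ~1 dE94 as invisible (hypothetical threshold)."""
    return delta_e94(lab1, lab2) < threshold
```

A coder using such a test could skip re-encoding blocks whose frame-to-frame difference falls under the threshold.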

Digital Library: CIC
Published Online: January  2012
Pages 15 - 20,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

The dynamic range of many real-world environments greatly exceeds the dynamic range of display systems based on liquid-crystal (LC) panels. Using a matrix of locally-addressable high-power LEDs, both the black level and the brightness can be improved. In this contribution, we describe a method for boosting luminance without installing additional light. The method is applicable to colour-sequential displays. We also validate the performance of the solution in terms of average luminance gain on a large set of video data. The results show an average luminance increase of 143% for 240 backlight segments.
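The underlying trade in a segmented backlight can be sketched as follows: segments that need little light leave power headroom that bright segments can borrow, boosting peak luminance without extra hardware. This is a minimal illustrative sketch assuming per-segment target intensities in [0, 1], a shared power budget, and per-LED drive headroom; all parameter names and values are hypothetical, not the paper's method.

```python
import numpy as np

def boosted_drive(targets, power_budget_frac=0.5, max_drive=2.0):
    """Boost per-segment LED drive levels under a shared power budget.

    targets: desired relative luminance per backlight segment, in [0, 1].
    power_budget_frac: fraction of full-on power the backlight may draw.
    max_drive: per-LED headroom relative to nominal full drive.
    """
    t = np.clip(np.asarray(targets, dtype=float), 0.0, 1.0)
    budget = power_budget_frac * t.size           # total drive units available
    used = t.sum()
    if used <= 0:
        return np.zeros_like(t)
    gain = min(max_drive, budget / used)          # uniform boost within budget
    return np.minimum(t * gain, max_drive)
```

With mostly dark content, the single bright segment can be driven well beyond its nominal level while total power stays within budget.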

Digital Library: CIC
Published Online: January  2012
Pages 21 - 29,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

This talk surveys the state of color over the two decades of our Color Imaging Conference (CIC). It describes what led to the first meeting, and gives a general discussion of two important paradigms in thinking about color over these 20 years: the expansion of colorimetry to device profiles, and the expansion of color constancy to spatial image processing. It also describes the critical role of measuring human color image vision in understanding how to design systems that reproduce the appearances found in art, photography, and image processing.

Digital Library: CIC
Published Online: January  2012
Pages 30 - 35,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

We present a color thesaurus with over 9000 color names in ten different languages. Instead of using conventional psychophysical experiments, we use a statistical framework based on search results from Google Image Search. For each color name we compute a significance distribution in CIELAB space whose maximum indicates the location of the color name in CIELAB. A first analysis discusses the quality of the estimations in the context of human language. Further, we conduct an advanced analysis supporting our choice of a statistical method. Finally, we demonstrate that a color name depends mainly on the chromatic values, while tolerating more variation along the lightness axis.
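The estimation step can be sketched as follows: given CIELAB values of pixels from images retrieved for a colour name, build a binned distribution over CIELAB and take the densest bin's centre as the colour name's location. The bin count and ranges below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def color_name_location(lab_pixels, bins=16):
    """Histogram pixel Lab values and return the centre of the densest bin."""
    lab = np.asarray(lab_pixels, dtype=float)
    ranges = [(0, 100), (-128, 128), (-128, 128)]   # L*, a*, b* axis ranges
    hist, edges = np.histogramdd(lab, bins=bins, range=ranges)
    idx = np.unravel_index(np.argmax(hist), hist.shape)
    return tuple((edges[d][i] + edges[d][i + 1]) / 2 for d, i in enumerate(idx))
```

A real pipeline would weight pixels by significance rather than raw counts, but the argmax-of-distribution idea is the same.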

Digital Library: CIC
Published Online: January  2012
Pages 36 - 40,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

An online memory matching game is used to explore collective performance for stimulus sets varying in color, text, and texture. The game is a consistent five-wide by four-high array of 70 × 70-pixel squares, for a total of ten unique test pairs. Users click squares to find matching pairs; once all pairs have been found, the completion time and number of clicks are saved on a server. For the initial draft of this paper, eleven image sets were tested. In three cases the test pairs were solid colors, corresponding to a basic color set and two non-basic color sets. In four cases the test pairs were text corresponding to color terms: black 12-pixel Arial for the basic color terms, two test cases for the non-basic color terms used previously, and a set with Stroop-colored basic color terms, i.e., text colored roughly opposite to the corresponding color term. Finally, four texture image sets taken from the Outex texture database were used for testing. Two of the texture sets were higher-key, mostly white texture sets of wallpaper and flour; two were more colorful images of textiles and floors. The analysis of this web-based experimental game is presented in terms of completion time and number of clicks to completion. The use of a single simple game unifies these perception and memory tasks.

Digital Library: CIC
Published Online: January  2012
Pages 41 - 46,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

The White-Patch method, one of the very first colour constancy methods, estimates the illuminant colour from the maximum response of the three colour channels. However, this simple method has been superseded by advanced physical, statistical, and learning-based colour constancy methods. Recently, a few research works have suggested that the simple idea of using maximum pixel values is not as limited as it seems at first glance. These works show that in several situations simple manipulations can indeed make it perform very well. Here, we extend the White-Patch assumption to include any of white patches, highlights, or the light source itself; we refer to these pixels in an image as the "bright" pixels. We propose that bright pixels are surprisingly helpful in the illumination estimation process. In this paper, we investigate the effects of bright pixels on several current colour constancy algorithms. Moreover, we describe a simple framework for an illumination estimation method based on bright pixels and compare its accuracy to well-known colour constancy algorithms on four standard datasets. We also investigate failure and success cases using bright pixels, and propose desiderata on input images with regard to the proposed method.
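The bright-pixel idea can be sketched in a few lines: take the top fraction of pixels by a brightness proxy and use their per-channel mean as the illuminant estimate. The percentile and brightness proxy below are hypothetical parameters, not the paper's exact framework.

```python
import numpy as np

def bright_pixel_illuminant(img, top_frac=0.05):
    """Estimate illuminant RGB from the brightest pixels of a linear RGB image."""
    pix = img.reshape(-1, 3).astype(float)
    lum = pix.sum(axis=1)                          # simple brightness proxy
    k = max(1, int(top_frac * len(pix)))
    bright = pix[np.argsort(lum)[-k:]]             # top-k brightest pixels
    est = bright.mean(axis=0)
    return est / (np.linalg.norm(est) + 1e-12)     # unit-norm illuminant colour
```

Setting `top_frac` close to zero recovers the classic White-Patch (max-response) behaviour; larger fractions average over highlights and near-white surfaces.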

Digital Library: CIC
Published Online: January  2012
Pages 47 - 51,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

This paper proposes a method for estimating the illuminant spectral power distributions of multiple light sources, and their relative positional relationship, in a complex illumination environment. A multiband camera system is used for capturing spectral images of dielectric objects in a scene. First, dielectric object surfaces are segmented into regions with different object colors by using the hue angle of the diffuse reflection component. Specular highlights, detected on curved object surfaces with different object colors, are used as a clue for estimating the light source information. The illuminant spectra of the light sources are estimated from the camera data for the highlight areas detected on each surface region, yielding illuminant spectral estimates for a different set of light sources. Next, the geometric information, such as the number of light sources and their relative positional relationship, is predicted from the set of estimated illuminant spectra on the segmented surface regions. A probabilistic relaxation labeling algorithm is used to classify the detected highlights and the estimated spectra. The feasibility of the proposed method is examined in experiments on real scenes.
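The use of highlights rests on the dichromatic reflection model: a highlight observation is the body (diffuse) reflection plus a specular term whose spectrum matches the illuminant, so subtracting an estimate of the diffuse component from a highlight observation leaves a scaled copy of the illuminant spectrum. A minimal single-region sketch under that model, with entirely synthetic data:

```python
import numpy as np

def illuminant_from_highlight(highlight_spec, diffuse_spec):
    """Recover the illuminant spectrum direction as highlight minus diffuse
    (dichromatic reflection model)."""
    e = np.asarray(highlight_spec, float) - np.asarray(diffuse_spec, float)
    e = np.clip(e, 0.0, None)                      # spectra are non-negative
    return e / (np.linalg.norm(e) + 1e-12)         # normalize away the specular scale
```

The full method repeats this per segmented surface region and then reconciles the per-region estimates with relaxation labeling.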

Digital Library: CIC
Published Online: January  2012
Pages 52 - 56,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

Automatic white balancing works quite well on average, but fails seriously some of the time. These failures lead to completely unacceptable images. Can the number, or severity, of these failures be reduced, perhaps at the expense of slightly poorer white balancing on average, with the overall goal being to increase the overall acceptability of a collection of images? Since the main source of error in automatic white balancing arises from misidentifying the overall scene illuminant, a new illumination-estimation algorithm is presented that minimizes the high-percentile error of its estimates. The algorithm combines illumination estimates from standard existing algorithms and chromaticity gamut characteristics of the image as features in a feature space. Illuminant chromaticities are quantized into chromaticity bins. Given a test image of a real scene, its feature vector is computed, and for each chromaticity bin, the probability of the illuminant chromaticity falling into that bin given the feature vector is estimated. The probability estimation is based on Loftsgaarden-Quesenberry multivariate density function estimation over the feature vectors derived from a set of synthetic training images. Once the probability distribution estimate for a given chromaticity channel is known, the smallest interval that is likely to contain the right answer with a desired probability (i.e., the smallest chromaticity interval whose sum of probabilities is greater than or equal to the desired probability) is chosen. The point in the middle of that interval is then reported as the chromaticity of the illuminant. Testing on a dataset of real images shows that the error at the 90th and 98th percentile ranges can be reduced by roughly half, with minimal impact on the mean error.
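The interval-selection step can be sketched directly: over the quantized chromaticity bins, find the smallest contiguous interval whose probability mass reaches the target, and report its midpoint. The bin layout and target value below are illustrative; only the selection rule comes from the description above.

```python
import numpy as np

def smallest_interval_midpoint(probs, bin_centers, target=0.9):
    """Midpoint of the shortest contiguous bin interval with mass >= target."""
    p = np.asarray(probs, float)
    c = np.asarray(bin_centers, float)
    n = len(p)
    best = None
    for i in range(n):
        total = 0.0
        for j in range(i, n):
            total += p[j]
            if total >= target:
                if best is None or (j - i) < (best[1] - best[0]):
                    best = (i, j)        # shorter qualifying interval found
                break
    if best is None:                     # mass never reaches target: fall back
        return c[(n - 1) // 2]
    i, j = best
    return (c[i] + c[j]) / 2.0
```

The quadratic scan is fine for a modest number of bins; a sliding-window version would bring it to linear time.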

Digital Library: CIC
Published Online: January  2012
Pages 57 - 62,  © Society for Imaging Science and Technology 2012
Volume 20
Issue 1

Reflectance functions can be represented by low-dimensional linear models as a weighted sum of principal components (often referred to as basis functions). Such low-dimensional linear models are obtained by principal component analysis (PCA). The required dimensionality is governed by the fraction of variance the basis functions accumulate: the more basis functions are included, the greater the fraction of variance accounted for. How many basis functions are required to represent reflectance functions accurately has been extensively studied over the last two decades [1, 2], since Cohen fitted a linear model to the spectral reflectance functions of Munsell color chips in 1964 [3]. In this paper, a comprehensive dataset of 97,593 reflectance samples covering six types of materials has been accumulated: paint, graphic, plastic, textile, skin, and natural samples. Principal component analysis for each material has been studied, and the effective dimension of reflectance-function representations for these materials has been examined. It was found that a single set of basis functions can essentially be applied to represent all spectra in the world.
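The dimensionality question reduces to a cumulative-explained-variance computation over the PCA of the spectra. A generic sketch using synthetic spectra (not the paper's dataset); the 31-band sampling is a common but here hypothetical convention:

```python
import numpy as np

def n_components_for_variance(spectra, frac=0.99):
    """Smallest number of PCA basis functions capturing at least `frac` of variance."""
    X = np.asarray(spectra, float)
    Xc = X - X.mean(axis=0)                    # centre the reflectance samples
    s = np.linalg.svd(Xc, compute_uv=False)    # singular values of centred data
    var = s**2 / (s**2).sum()                  # explained-variance fractions
    return int(np.searchsorted(np.cumsum(var), frac) + 1)
```

Running this per material class, as the paper does, shows how the effective dimension varies between, say, paints and natural samples.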

Digital Library: CIC
Published Online: January  2012