Page 020101-1, © Society for Imaging Science and Technology 2018
Digital Library: JIST
Published Online: March 2018
Pages 020501-1 - 020501-12, © Society for Imaging Science and Technology 2018
Volume 62
Issue 2
Abstract

Current digital cameras include various automatic control systems, and extracting focusness from an image is an important problem for such systems. Automatic extraction of the main subject makes taking photographs easy, even for an amateur photographer. Methods have been proposed that evaluate focusness via visual saliency, under the assumption that an area with high saliency also has high focusness. However, there are various differences between focusness and saliency. In this study, the authors compare focusness maps with saliency maps. They evaluate the focusness of 80 images in an image evaluation experiment with 20 observers, and compute saliency maps using six conventional algorithms. They show that individual variation in focusness is insignificant for images that contain only one major object. Furthermore, the authors incorporate a GIST feature into a saliency method based on a center–surround histogram and extract focusness from images with high accuracy.
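The center–surround histogram idea behind the saliency method can be illustrated with a minimal sketch: a region is salient when the intensity histogram of a center window differs strongly from that of its surround. The window sizes, block sampling, and chi-square histogram distance below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def center_surround_saliency(gray, cs=8, ss=24, bins=16):
    """Toy center-surround histogram saliency for a grayscale image in [0, 1].

    At each sampled location, compare the intensity histogram of a small
    center window against that of a larger surround window; a large
    difference marks the center region as salient.
    """
    h, w = gray.shape
    sal = np.zeros_like(gray, dtype=float)
    for y in range(ss, h - ss, ss):
        for x in range(ss, w - ss, ss):
            center = gray[y - cs:y + cs, x - cs:x + cs]
            surround = gray[y - ss:y + ss, x - ss:x + ss]
            hc, _ = np.histogram(center, bins=bins, range=(0, 1), density=True)
            hs, _ = np.histogram(surround, bins=bins, range=(0, 1), density=True)
            # Chi-square distance between the two normalized histograms.
            d = 0.5 * np.sum((hc - hs) ** 2 / (hc + hs + 1e-9))
            sal[y - cs:y + cs, x - cs:x + cs] = d
    return sal / (sal.max() + 1e-9)
```

A bright object on a dark background scores high because its center histogram is concentrated in bins that its surround's histogram is not.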

Digital Library: JIST
Published Online: March 2018
Pages 020502-1 - 020502-8, © Society for Imaging Science and Technology 2018
Volume 62
Issue 2
Abstract

This article proposes a video encoding method at the residual quad-tree (RQT) level of High Efficiency Video Coding (HEVC) for perceptual coding based on a just-noticeable difference (JND) model. The method performs perceptual encoding, taking into account the luminance adaptation characteristics of the human visual system, at the RQT level, which determines the partitioning of the transform unit after the motion vectors have been determined through motion estimation (a highly complex module in an encoder). In each RQT stage, the proposed algorithm determines the maximum quantization parameter (QP) that reduces the bitrate while maintaining similar subjective quality, i.e., no visual quality degradation according to the JND model relative to the initial QP value. To evaluate its performance, the JND-model-based RQT-level multi-loop encoding method is applied to the HM16.0 HEVC reference software. Experimental results with the random access configuration of the common test conditions show that the bitrate is reduced, on average, by 5.78% and 10% for Class A and Class B videos, respectively, with a maximum reduction of 25.3% at almost the same visual quality.
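The per-stage QP search described above can be sketched as a simple loop. Here `encode` is a hypothetical callable standing in for an HEVC RQT-stage encode (returning a reconstruction and a bit count), and the mean-absolute-difference distortion and scalar JND threshold are simplifying assumptions for illustration.

```python
def select_max_qp(block, encode, jnd_threshold, qp_init, qp_max=51):
    """Raise the QP as long as the extra distortion relative to the
    qp_init reconstruction stays within a JND threshold."""
    # Reconstruction at the initial QP serves as the perceptual reference.
    ref, _ = encode(block, qp_init)
    best = qp_init
    for qp in range(qp_init + 1, qp_max + 1):
        rec, _ = encode(block, qp)
        # Extra distortion of this QP's reconstruction vs. the reference.
        dist = sum(abs(a - b) for a, b in zip(rec, ref)) / len(ref)
        if dist > jnd_threshold:
            break  # degradation would become noticeable; stop the search
        best = qp
    return best
```

The multi-loop cost of re-encoding at several QP values is the price of the bitrate saving, which is why the paper restricts the search to the RQT stage after motion estimation is already fixed.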

Digital Library: JIST
Published Online: March 2018
Pages 020503-1 - 020503-11, © Society for Imaging Science and Technology 2018
Volume 62
Issue 2
Abstract

In this article, the authors present a model for color images. Specifically, they show that the color channels of an RGB image can be written as a color-dependent local mean multiplied by a color-independent residual. This model is derived from a generalized class of image formation models. The authors present experimental results showing that this model accurately describes images from a Sigma DP1 camera, a Pentax K-3 II camera, and the Kodak dataset. They also discuss how this model can be used to design a simple chrominance-based denoising system and present results for that system.
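A minimal sketch of the decomposition: each channel is split into a per-channel local mean and a residual, and under the stated model the residual comes out (approximately) the same for all three channels. The plain box filter used for the local mean is an assumption for illustration; the paper's actual estimator may differ.

```python
import numpy as np

def decompose(rgb, k=7):
    """Split an (H, W, 3) image into per-channel local means and residuals.

    Under the model channel = (color-dependent local mean) x
    (color-independent residual), the returned residual planes should
    agree across channels up to noise.
    """
    pad = k // 2
    means = np.empty_like(rgb, dtype=float)
    for c in range(3):
        ch = np.pad(rgb[..., c].astype(float), pad, mode='edge')
        # Box-filter local mean via a sliding-window view.
        win = np.lib.stride_tricks.sliding_window_view(ch, (k, k))
        means[..., c] = win.mean(axis=(2, 3))
    residual = rgb / (means + 1e-9)
    return means, residual
```

A chrominance-based denoiser in this spirit could smooth the color-dependent means aggressively while preserving the shared residual, which carries the color-independent detail.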

Digital Library: JIST
Published Online: March 2018
Pages 020504-1 - 020504-12, © Society for Imaging Science and Technology 2018
Volume 62
Issue 2
Abstract

The spectral reflectance of an object surface provides valuable information about its characteristics. Reflectance reconstruction from multispectral image data is typically based on certain assumptions; one common assumption is that the same illumination is used for system calibration and image acquisition. The authors propose the concept of multispectral constancy, which transforms the captured sensor data into an illuminant-independent representation, analogous to the concept of computational color constancy. They propose transforming the multispectral image data to a canonical representation through a spectral adaptation transform (SAT). The performance of the transform is tested on measured reflectance spectra and hyperspectral reflectance images, and the authors also investigate its robustness to inaccuracy in illuminant estimation in natural scenes. Reflectance reconstruction results show that the proposed SAT is effective and robust to errors in illuminant estimation.
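In the spirit of the SAT, an illuminant-normalizing transform can be sketched as a diagonal, von-Kries-style per-channel scaling over the K spectral channels. The diagonal form is an assumption for illustration; the abstract does not specify the actual form of the transform.

```python
import numpy as np

def spectral_adaptation_transform(sensor, illum_est, illum_canon):
    """Map sensor responses captured under an estimated illuminant to a
    canonical-illuminant representation by per-channel rescaling.

    sensor:      (..., K) captured multispectral responses
    illum_est:   (K,) per-channel responses to the estimated illuminant
    illum_canon: (K,) per-channel responses to the canonical illuminant
    """
    scale = np.asarray(illum_canon, dtype=float) / (
        np.asarray(illum_est, dtype=float) + 1e-12)
    return np.asarray(sensor, dtype=float) * scale
```

Robustness to illuminant-estimation error then amounts to asking how much the output degrades when `illum_est` deviates from the true capture illuminant.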

Digital Library: JIST
Published Online: March 2018
Pages 020505-1 - 020505-12, © Society for Imaging Science and Technology 2018
Volume 62
Issue 2
Abstract

Based on the observation that the information content of a remote sensing image varies with the level of degradation, a new remote sensing image quality assessment method based on the ratio of spatial-feature-weighted mutual information is proposed. First, the reference remote sensing image and the distorted image are decomposed with a spatial steerable pyramid. At each scale, the mutual information between the reference image and the image perceived through the visual distortion channel is calculated, as is the mutual information between the degraded image and the perceived image. Weighting factors derived from phase congruency and location saliency are then applied to both mutual information values. Finally, the spatial-feature-weighted mutual information of the reference image and that of the degraded image are each summed over scales, and their ratio yields the global quality index, the Remote Sensing Image Quality Assessment index (RSIQA). Experimental results show that the proposed method achieves a high degree of consistency between subjective and objective scores and evaluates remote sensing images effectively. In addition, it outperforms most state-of-the-art IQA indices on natural image databases.
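The mutual-information quantity at the heart of the index can be estimated from a joint histogram. The sketch below is a generic estimator (the bin count and estimator choice are assumptions); the paper builds on quantities like this by adding steerable-pyramid scales and phase-congruency/saliency weights before taking the ratio.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Identical images share the most information, a noisy copy less, and an unrelated image almost none, which is the monotone behavior a quality index built on this quantity relies on.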

Digital Library: JIST
Published Online: March 2018
Pages 020506-1 - 020506-8, © Society for Imaging Science and Technology 2018
Volume 62
Issue 2
Abstract

As stereoscopic three-dimensional (3D) content has gained popularity, the depth perception of viewers has become an important field of research. However, the relationship between 3D depth parameters and users’ perceptual characteristics has not yet been sufficiently investigated. In this study, we performed psychophysical experiments to investigate changes in depth perception in relation to pupil distance (PD), 3D display type, viewing distance, and stimulus position. Our research is novel in three ways. First, we found that the disparity required for stereoscopic vision increased with PD: every 1 mm increase in PD required an additional content disparity of 0.369 mm (1.39 pixels) to perceive the optimal 3D depth. Second, the polarizing-filter method required more disparity than the shutter-glasses method. Finally, we demonstrated that, to provide 3D content with optimal stereoscopic depth, individual factors such as PD and 3D display type should be taken into account.
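The reported linear relation between PD and required disparity can be expressed as a one-line helper. The slope of 0.369 mm per 1 mm of PD is the value from the abstract; the reference PD of 63 mm is an assumed baseline for illustration, not a value from the study.

```python
def required_extra_disparity_mm(pd_mm, pd_ref_mm=63.0, slope=0.369):
    """Extra content disparity (mm) needed for optimal 3D depth, relative
    to a viewer at the reference pupil distance, per the linear relation
    reported above (slope: 0.369 mm of disparity per 1 mm of PD)."""
    return slope * (pd_mm - pd_ref_mm)
```

For example, a viewer with a 68 mm PD would need roughly 5 x 0.369 ≈ 1.85 mm more content disparity than a 63 mm-PD viewer under this model.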

Digital Library: JIST
Published Online: March 2018