Pages 060101-1 - 060101-2,  © Society for Imaging Science and Technology 2019
Digital Library: JIST
Published Online: November  2019
Pages 060401-1 - 060401-8,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

The quantification of material appearance is important in product design. In particular, the sparkle impression of metallic paint, used mainly for automobiles, varies with the observation angle. Although several evaluation methods and multi-angle measurement devices have been proposed for this impression, increasing the number of evaluation angles requires adding more light sources or cameras to those devices. The present study constructed a device that evaluates the multi-angle sparkle impression in one shot and developed a method for quantifying the impression. The device comprises a line spectral camera, a light source, and a motorized rotation stage. The quantification method is based on spatial frequency characteristics. It was confirmed that the evaluation value obtained from the image recorded by the constructed device correlates closely with a subjective score. Furthermore, the evaluation value is significantly correlated with that obtained using a commercially available evaluation device.
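As a rough illustration of a spatial-frequency-based evaluation value, the sketch below computes the fraction of image power falling in a mid-to-high frequency band of a sparkle image. The band limits and the energy-ratio form are illustrative assumptions, not the specific method reported in the article.

```python
import numpy as np

def sparkle_score(gray_image, low_cycles_per_px=0.05, high_cycles_per_px=0.45):
    """Illustrative spatial-frequency sparkle measure (band energy ratio).

    The band limits are placeholders, not values from the article.
    """
    img = gray_image.astype(np.float64)
    img -= img.mean()                              # remove the DC component
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))        # cycles per pixel, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w))        # cycles per pixel, horizontal
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

    band = (radius >= low_cycles_per_px) & (radius <= high_cycles_per_px)
    return spectrum[band].sum() / spectrum.sum()   # fraction of energy in the sparkle band
```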

Digital Library: JIST
Published Online: November  2019
Pages 060402-1 - 060402-16,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

Detecting changes in an uncontrolled environment using cameras mounted on a ground vehicle is critical for the detection of roadside Improvised Explosive Devices (IEDs). Hidden IEDs are often accompanied by visible markers whose appearances are a priori unknown. Little work has been published on detecting unknown objects using deep learning. This article shows the feasibility of applying convolutional neural networks (CNNs) to predict the location of markers in real time, compared to an earlier reference recording. The authors investigate novel encoder–decoder Siamese CNN architectures and introduce a modified double-margin contrastive loss function to achieve pixel-level change detection results. Their dataset consists of seven pairs of challenging real-world recordings, and they investigate augmentation with artificial object data. The proposed network architecture can compare two images of 1920 × 1440 pixels in 27 ms on an RTX Titan GPU and significantly outperforms state-of-the-art networks and algorithms on their dataset, improving the F1 score by 0.28.
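The article's exact loss formulation is not reproduced here; the sketch below shows a generic double-margin contrastive loss in PyTorch, in which unchanged pixel pairs are penalized only beyond a lower margin and changed pairs only below an upper margin. The margin values and the per-pixel usage are assumptions for illustration.

```python
import torch

def double_margin_contrastive_loss(dist, label, m_same=0.3, m_diff=1.0):
    """Generic double-margin contrastive loss (illustrative, not the paper's exact form).

    dist  : per-pixel feature distance between the two encoded images
    label : 1 where a change is present, 0 where the scene is unchanged
    """
    same_term = (1 - label) * torch.clamp(dist - m_same, min=0.0) ** 2   # pull unchanged pairs together
    diff_term = label * torch.clamp(m_diff - dist, min=0.0) ** 2         # push changed pairs apart
    return (same_term + diff_term).mean()
```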

Digital Library: JIST
Published Online: November  2019
Pages 060403-1 - 060403-6,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

The Random Spray Retinex (RSR) algorithm was developed from the mathematical description of Milano Retinex, substituting random sprays for random paths. This article proposes two variants of RSR that mimic characteristics of the human visual system (HVS) by adding a region-of-interest (ROI) mechanism. In the first model, the ROI is a cone distribution based on anatomical data. In the second model, the ROI is defined by visual resolution as a function of the visual field, based on knowledge of visual information processing. The authors measured actual eye movements using an eye-tracking system and used the eye-tracking data to simulate the HVS on test images. Results show a qualitatively interesting computation of the appearance of the processed area around real gaze points.
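For orientation, the sketch below illustrates the core RSR per-pixel computation (the ratio of the pixel value to the maximum value found in each random spray, averaged over sprays), without the ROI weighting proposed in the article. The spray sampling scheme and the parameter values are simplified assumptions.

```python
import numpy as np

def rsr_pixel(channel, y, x, n_sprays=10, n_points=30, radius=None, rng=None):
    """Core Random Spray Retinex estimate for one pixel of one channel (sketch).

    Each spray is a set of points sampled around (y, x); the output is the
    average ratio of the pixel value to the spray maximum.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = channel.shape
    radius = radius or max(h, w)

    ratios = []
    for _ in range(n_sprays):
        r = radius * rng.random(n_points)             # uniform radius => denser sampling near the center
        theta = 2 * np.pi * rng.random(n_points)
        ys = np.clip(np.round(y + r * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip(np.round(x + r * np.cos(theta)).astype(int), 0, w - 1)
        spray_max = max(channel[ys, xs].max(), channel[y, x])
        ratios.append(channel[y, x] / spray_max if spray_max > 0 else 1.0)
    return float(np.mean(ratios))
```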

Digital Library: JIST
Published Online: November  2019
Pages 060404-1 - 060404-7,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

In addition to colors and shapes, factors of material appearance such as glossiness, translucency, and roughness are important for reproducing the realistic feeling of an image. These perceptual qualities are often degraded when reproduced as a digital color image. The authors have aimed to edit the material appearance of an image captured by a general camera and reproduce it on a general display device. In a previous study, the authors found that the pupil diameter decreases slightly when observing the surface properties of an object and proposed an algorithm called "PuRet" for enhancing the material appearance based on physiological models of the pupil and retina. However, to obtain an accurate reproduction, it was necessary to manually adjust two adaptation parameters in PuRet, related to the retinal response, for each scene and for the particular characteristics of the display device. This study achieves material appearance management on display devices by automatically deriving the optimum PuRet parameters from captured RAW image data. The results indicate that the authors succeeded in estimating one adaptation parameter from the median scene luminance estimated from a RAW image, and another adaptation parameter from the average scene luminance together with the luminance contrast of the output display device. An experiment using a display device that was not used to derive the estimation model confirmed that the proposed model works properly.
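A heavily simplified sketch of the kind of mapping described (scene statistics from a linear RAW image plus a display contrast value feeding the two adaptation parameters) is shown below. The linear form and all coefficients are placeholders; the article's actual regression models and parameter definitions are not reproduced here.

```python
import numpy as np

def estimate_puret_parameters(raw_luminance, display_contrast,
                              coeff_a=(1.0, 0.0), coeff_b=(1.0, 0.0, 0.0)):
    """Hypothetical parameter estimation in the spirit of the described approach.

    coeff_a and coeff_b are placeholder regression coefficients, not the paper's.
    """
    median_lum = np.median(raw_luminance)          # scene statistic for the first parameter
    mean_lum = np.mean(raw_luminance)              # scene statistic for the second parameter

    param_scene = coeff_a[0] * median_lum + coeff_a[1]
    param_display = coeff_b[0] * mean_lum + coeff_b[1] * display_contrast + coeff_b[2]
    return param_scene, param_display
```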

Digital Library: JIST
Published Online: November  2019
Pages 060405-1 - 060405-9,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

Autonomous vehicles rely on the detection and recognition of objects within images to successfully navigate. Design of camera systems is non-trivial and involves trading system specifications across many parameters, such as f-number, focal length, color filter array (CFA) choice, and pixel and sensor size, to optimize performance. As such, tools are needed to evaluate and predict the performance of such cameras for object detection. Contrast Detection Probability (CDP) is a relatively new objective image quality metric proposed to rank the performance of camera systems intended for use in autonomous vehicles. The detectability index is derived from signal detection theory as applied to imaging systems and is used to estimate the ability of a system to statistically distinguish objects, most notably in the medical imaging and defense fields. A brief overview of CDP and the detectability index is given, after which an imaging model is developed to compare and explore the behavior of each with respect to camera parameters. Behavior is compared to matched-filter detection performance. It is shown that, while CDP can yield a first-order ranking of camera systems under certain constraints, it fails to track detector performance for negative contrast targets and is relatively insensitive.
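As a point of reference, the sketch below computes the simplest two-class form of the signal-detection-theoretic detectability index from target and background pixel samples. The article develops a fuller imaging-model formulation; this textbook form is shown only for orientation.

```python
import numpy as np

def detectability_index(target_pixels, background_pixels):
    """Two-class detectability index d' from signal detection theory (textbook form).

    d' = |mu_target - mu_background| / sqrt(0.5 * (var_target + var_background))
    """
    mu_t, mu_b = np.mean(target_pixels), np.mean(background_pixels)
    var_t, var_b = np.var(target_pixels), np.var(background_pixels)
    return abs(mu_t - mu_b) / np.sqrt(0.5 * (var_t + var_b))
```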

Digital Library: JIST
Published Online: November  2019
Pages 060406-1 - 060406-11,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

The Modulation Transfer Function (MTF) and the Noise Power Spectrum (NPS) characterize imaging system sharpness/resolution and noise, respectively. Both measures are based on linear system theory. However, they are applied routinely to scene-dependent systems applying non-linear, content-aware image signal processing. For such systems, MTFs/NPSs are derived inaccurately from traditional test charts containing edges, sinusoids, noise, or uniform luminance signals, which are unrepresentative of natural scene signals. The dead leaves test chart delivers improved measurements from scene-dependent systems but still has limitations. In this article, the authors validate novel scene-and-process-dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures that characterize (i) system performance for one scene, (ii) average real-world performance over many scenes, or (iii) the level of system scene dependency. The authors also derive novel SPD-NPS and SPD-MTF measures using the dead leaves chart. They demonstrate that the proposed measures are robust and, for scene-dependent systems, preferable to current measures.
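To fix ideas about scene-dependent noise measurement, the sketch below estimates a noise power spectrum by averaging periodograms of residuals between repeated processed captures of the same scene and their mean. This illustrates the general idea of measuring noise on real scene content rather than on a uniform patch; it is not the authors' SPD-NPS procedure.

```python
import numpy as np

def scene_dependent_nps(captures):
    """Illustrative scene-dependent NPS from repeated processed captures of one scene.

    captures : array of shape (n, h, w); the residual against the mean image
    removes scene content and leaves (scene-dependent) noise.
    """
    captures = np.asarray(captures, dtype=np.float64)
    mean_image = captures.mean(axis=0)
    n, h, w = captures.shape

    nps = np.zeros((h, w))
    for frame in captures:
        residual = frame - mean_image                    # scene content cancels, noise remains
        nps += np.abs(np.fft.fft2(residual)) ** 2
    return nps / (n * h * w)                             # average periodogram, normalized by pixel count
```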

Digital Library: JIST
Published Online: November  2019
Pages 060407-1 - 060407-13,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

Spatial image quality metrics designed for camera systems generally employ the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), and a visual contrast detection model. Prior art indicates that scene-dependent characteristics of non-linear, content-aware image processing are unaccounted for by MTFs and NPSs measured by traditional methods. The authors present two novel metrics: the log Noise Equivalent Quanta (log NEQ) and Visual log NEQ. Both employ Scene-and-Process-Dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures, which account for signal transfer and noise scene dependency, respectively. The authors also investigate implementing contrast detection and discrimination models that account for scene-dependent visual masking. In addition, three leading camera metrics are revised to use the above scene-dependent measures. All metrics are validated by examining correlations with the perceived quality of images produced by simulated camera pipelines. Metric accuracy improved consistently when the SPD-MTFs and SPD-NPSs were implemented. The novel metrics outperformed existing metrics of the same genre.
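For context, noise equivalent quanta is conventionally defined from the MTF, the NPS, and the mean signal level; a minimal sketch of that standard definition is below. The log NEQ metrics described above substitute the scene-and-process-dependent measures into this kind of expression, a step not shown here.

```python
import numpy as np

def noise_equivalent_quanta(mtf, nps, mean_signal):
    """Standard NEQ definition: NEQ(u, v) = mean_signal**2 * MTF(u, v)**2 / NPS(u, v).

    mtf and nps are arrays sampled on the same spatial-frequency grid.
    """
    mtf = np.asarray(mtf, dtype=np.float64)
    nps = np.asarray(nps, dtype=np.float64)
    return (mean_signal ** 2) * mtf ** 2 / np.maximum(nps, 1e-12)   # guard against division by zero
```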

Digital Library: JIST
Published Online: November  2019
Volume 63
Issue 6
Abstract

The principal objective of this research is to create a system that is quickly deployable, scalable, adaptable, and intelligent and that provides cost-effective surveillance, both locally and globally. The intelligent surveillance system should be capable of rapid implementation to track (monitor) sensitive materials, e.g., radioactive materials or weapons stockpiles, and persons within rooms, buildings, and/or areas in order to predict potential incidents proactively (rather than reactively) through intelligence, locally and globally. The system incorporates commercial and modifiable off-the-shelf microcomputers in a cluster that acts as a mini supercomputer and leverages a real-time data feed when a potential threat is present. Through programming, software, and intelligence (artificial intelligence, machine learning, and neural networks), the system should be capable of monitoring, tracking, and warning (communicating with) the system observer and operations (command and control) within a few minutes when sensitive materials are at potential risk of loss. The potential customer is government agencies looking to control sensitive materials and/or items in developing-world markets intelligently, economically, and quickly.

Digital Library: JIST
Published Online: November  2019
Pages 060409-1 - 060409-11,  © Society for Imaging Science and Technology 2019
Volume 63
Issue 6
Abstract

Modern virtual reality (VR) headsets use lenses that distort the visual field, typically with distortion increasing with eccentricity. While content is pre-warped to counter this radial distortion, residual image distortions remain. Here we examine the extent to which such residual distortion impacts the perception of surface slant. In Experiment 1, we presented slanted surfaces in a head-mounted display, and observers estimated the local surface slant at different locations. In Experiments 2 (slant estimation) and 3 (slant discrimination), we presented stimuli on a mirror stereoscope, which allowed us to control viewing and distortion parameters more precisely. Taken together, our results show that radial distortion has a significant impact on perceived surface attitude, even following correction. Of the distortion levels we tested, 5% distortion results in significantly underestimated and less precise slant estimates relative to distortion-free surfaces. In contrast, Experiment 3 reveals that a level of 1% distortion is insufficient to produce significant changes in slant perception. Our results highlight the importance of adequately modeling and correcting lens distortion to improve the VR user experience.
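As a point of reference for what a given distortion level means geometrically, the sketch below applies a simple one-coefficient radial distortion model to normalized image coordinates. Interpreting a "5% distortion" level as a 5% radial displacement at the field edge (k = 0.05) is an assumption made here for illustration, not necessarily the paper's definition.

```python
import numpy as np

def apply_radial_distortion(points_xy, k):
    """Apply a one-coefficient radial distortion r' = r * (1 + k * r**2).

    points_xy : normalized coordinates (image center at the origin, field edge
    at radius 1); k > 0 gives pincushion distortion, k < 0 gives barrel distortion.
    """
    points_xy = np.asarray(points_xy, dtype=np.float64)
    r2 = np.sum(points_xy ** 2, axis=-1, keepdims=True)   # squared radius per point
    return points_xy * (1.0 + k * r2)
```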

Digital Library: JIST
Published Online: November  2019