Pages 1–6, © Society for Imaging Science and Technology 2017
Digital Library: EI
Published Online: January 2017
Pages 7–13, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

We present an algorithm that produces high-speed video of good perceptual quality from a camera array, in realistic scenes that may contain clutter and complex backgrounds. We synchronize the cameras such that each captures an image at a different time offset. The algorithm processes the jittery interleaved frames and produces a stabilized video. Our method consists of two stages: synthesis of views from a virtual camera, to correct for differences in camera perspectives, and video compositing, to remove remaining artifacts, especially around disocclusions. More specifically, we process the optical flow of the raw video to estimate, for each raw frame, the disparity to the target virtual frame. These disparities drive a content-aware warp that synthesizes the virtual views, significantly alleviating the jitter. While the warping fills the disocclusion holes, the filling may not be temporally coherent, leaving small jitter visible in static or slow-moving regions around large disocclusions. Such regions, however, do not benefit from a high frame rate in high-speed video. We therefore extract the low-motion regions from a single camera and composite them with the high-motion regions taken from all cameras. The final video is smooth and effectively delivers a high frame rate in high-motion regions.
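As a minimal illustration of the interleaving idea, the sketch below merges frames from N time-staggered cameras into one stream at N times the per-camera rate. The `warp_to_virtual_view` hook is a placeholder; the paper's flow-based view synthesis and compositing are not reproduced here.

```python
# Minimal sketch of frame interleaving from a time-staggered camera array.
# Assumes camera k fires with a phase offset of k/N of the frame period.
def interleave(camera_streams, warp_to_virtual_view=None):
    n = len(camera_streams)            # number of cameras
    out = []
    for t in range(len(camera_streams[0])):
        for k in range(n):
            frame = camera_streams[k][t]
            if warp_to_virtual_view is not None:
                # Correct the per-camera perspective difference; without
                # this step the interleaved output jitters between views.
                frame = warp_to_virtual_view(frame, camera_index=k)
            out.append(frame)          # output rate = n x per-camera rate
    return out
```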

Digital Library: EI
Published Online: January 2017
Pages 14–19, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

We propose a novel hybrid framework for estimating a clean panoramic background from consumer RGB-D cameras. The method explicitly handles moving objects, eliminates the distortions observed in traditional 2D stitching methods, and adaptively handles errors in the input depth maps to avoid the artifacts common in 3D-based schemes. It produces a panoramic output that integrates parts of the scene as captured from the different poses of the moving camera, and removes moving objects by replacing them with the correct background information in both color and depth. A fused and cleaned RGB-D panorama has multiple applications, such as virtual reality, video compositing, and creative video editing. Existing image stitching methods rely on either color or depth information alone and thus suffer from perspective distortions or low RGB fidelity. A detailed comparison between traditional methods, state-of-the-art methods, and the proposed framework demonstrates the advantages of fusing 2D and 3D information for panoramic background estimation.
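The paper's hybrid method is not reproduced here, but a common baseline it improves upon is a per-pixel temporal median over registered frames, which already suppresses moving objects; a minimal sketch, assuming the RGB-D frames have been warped to a common panorama coordinate frame (array names are illustrative):

```python
import numpy as np

def median_background(aligned_rgb, aligned_depth):
    # aligned_rgb:   (T, H, W, 3) frames registered to a shared panorama frame
    # aligned_depth: (T, H, W) depth maps, registered the same way.
    # A transient object covers a given pixel in only a minority of frames,
    # so the temporal median recovers the static background in color and depth.
    bg_rgb = np.median(aligned_rgb, axis=0).astype(aligned_rgb.dtype)
    bg_depth = np.median(aligned_depth, axis=0)
    return bg_rgb, bg_depth
```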

Digital Library: EI
Published Online: January 2017
Pages 20–25, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

3D cameras that capture range information in addition to color are increasingly prevalent in the consumer marketplace and are available in many consumer mobile imaging platforms. An interesting and important application enabled by 3D cameras is photogrammetry, where the physical distance between points is computed from captured imagery. For consumer photogrammetry to succeed in the marketplace, however, it must meet users' real-world expectations for accuracy and consistency, and it must perform well under challenging lighting conditions, varying object distances, and so on. These requirements are exceedingly difficult to meet because of the noisy nature of range data, especially when passive stereo or multi-camera systems are used for range estimation. In this paper, we present a novel and robust algorithm for point-to-point 3D measurement using range camera systems. Our algorithm exploits the intuition that users typically specify the end points of an object of interest for measurement, so the line connecting the two points also belongs to the same object. We analyze the 3D structure of the points along this line using robust PCA and improve measurement accuracy by fitting the endpoints to this model before computing the measurement. We also handle situations that violate this assumption, where users measure a gap such as the span between the arms of a sofa or the width of a doorway. Finally, we test the performance of the proposed algorithm on a dataset of over 1800 measurements collected by humans on the Dell Venue 8 tablet with Intel RealSense Snapshot technology. Our results show significant improvements in both the accuracy and the consistency of measurement, which is critical to making consumer photogrammetry a reality in the marketplace.
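A sketch of the core geometric step, using an iteratively trimmed PCA as a stand-in for the robust PCA the paper employs (function and parameter names are illustrative):

```python
import numpy as np

def measure_between_points(line_points, p0, p1, trim=0.2, iters=3):
    # line_points: (N, 3) range samples along the user-drawn segment;
    # p0, p1: the tapped 3D endpoints. Fit a dominant 3D direction,
    # discarding the worst residuals each pass to resist depth noise.
    pts = np.asarray(line_points, dtype=float)
    for _ in range(iters):
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)      # vt[0]: principal direction
        d = vt[0]
        offsets = (pts - c) - np.outer((pts - c) @ d, d)
        resid = np.linalg.norm(offsets, axis=1)
        pts = pts[resid <= np.quantile(resid, 1.0 - trim)]
    # Snap both endpoints onto the fitted line before measuring.
    snap = lambda p: c + ((np.asarray(p, float) - c) @ d) * d
    return np.linalg.norm(snap(p1) - snap(p0))
```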

Digital Library: EI
Published Online: January 2017
Pages 26–31, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

Many computer vision tasks, such as segmentation and stereo matching, can be posed as pixel labeling problems and solved by optimizing a Markov Random Field (MRF) that models them. Most methods using this formulation treat every pixel as a node connected to its neighbors, so the compute requirements are directly proportional to the image size. For example, a 720p image with 4-connectivity leads to about 1 million nodes and 2 million edges, further scaled by the number of labels. With increasing camera resolutions, the traditional scheme does not scale well, owing to its high compute and memory requirements, especially on mobile devices. Methods have been proposed to overcome these problems, but they still fall short of high efficiency. In this paper, we propose a framework for MRF optimization that significantly reduces the number of nodes through adaptive, intelligent grouping of pixels, shrinking the problem size while adapting to the image content. In addition, we propose a hierarchical grouping of labels that allows for parallelization and is thus suitable for modern processing units. We demonstrate this novel framework on RGB-D scene segmentation and show up to a 12X speed-up over traditional optimization algorithms.
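The node-count arithmetic in the abstract is easy to verify. The toy grouping below uses uniform blocks where the paper groups adaptively by image content:

```python
import numpy as np

def block_node_ids(height, width, block=8):
    # Map each pixel to a block-level MRF node; label costs are then
    # aggregated per block instead of per pixel.
    ys, xs = np.mgrid[0:height, 0:width]
    blocks_per_row = (width + block - 1) // block
    return (ys // block) * blocks_per_row + (xs // block)

ids = block_node_ids(720, 1280, block=8)
print(720 * 1280, ids.max() + 1)   # 921600 pixel nodes -> 14400 group nodes
```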

Digital Library: EI
Published Online: January 2017
Pages 32–36, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

Accurate characterization (profiling) of a capture system is essential for the system to reproduce scene colors accurately. ISO 17321 [1][2] describes two methods to achieve this calibration: one based on standard reflective targets (the chart-based method) and the other based on accurate measurement of the camera's responsivity functions and the spectral power distribution of the deployed illuminant (spectral characterization). The more prominent of the two is the chart-based method, because it involves a simple capture of an inexpensive, standard color pattern (e.g., the Macbeth/X-Rite ColorChecker). However, the results obtained from this method are illuminant-specific and are very sensitive to the technique used in the capture process: lighting non-uniformity on the chart, incorrect framing, and flare can all skew the results. ISO also recommends the more robust technique of measuring the camera's responsivity and the spectral power distribution of the capture illuminant, but these measurements can require expensive and sophisticated instruments such as monochromators and spectroradiometers. Both methods involve tradeoffs in cost, ease of use, and, most importantly, the accuracy of the final capture system characterization, and their results are very sensitive to the capture technique and to the precision with which the various parameters are measured. The end user is often left asking questions such as: What accuracy is needed in the individual measurements? What are the tradeoffs, particularly in color accuracy, between the chart-based method and the spectral characterization method? How sensitive is the system to the various parameters? In this study, both ISO-recommended techniques are applied to camera calibration across a broad range of professional cameras and illuminants. The characterization was conducted by approximately ten different users so as to capture the variability of the capture technique. The collected data were used to calculate and quantify characterization accuracy, using the color inconstancy index over a set of evaluation colors as the metric. Sensitivity analysis techniques were used to answer the question: how much of an advantage, if any, does spectral characterization of the camera offer over the chart-based approach? In answering it, the parameters that most influence the results, and their sensitivities, were identified.
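At its core, the chart-based method reduces to solving a small least-squares problem for a 3x3 color correction matrix; a minimal sketch (patch count and variable names are illustrative):

```python
import numpy as np

def fit_ccm(camera_rgb, reference_xyz):
    # camera_rgb:    (24, 3) mean raw RGB over each chart patch
    # reference_xyz: (24, 3) published patch values under the capture
    #                illuminant. Solve camera_rgb @ M ~= reference_xyz.
    M, _, _, _ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    return M  # 3x3 color correction matrix
```

Flare, lighting non-uniformity, and framing errors perturb `camera_rgb` directly, which is why the chart-based results are so sensitive to capture technique.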

Digital Library: EI
Published Online: January 2017
Pages 37–45, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

At least three publicly known ranking systems for cell phone cameras (CPIQ/IEEE P1858, in preparation; DxOMark; VCX) try to tell us which camera phone provides the best image quality. Now that IEEE is about to publish the P1858 standard, which currently covers only six image quality parameters, the question arises of how many parameters are needed to characterize the camera in a current cell phone, and how important each factor is for perceived quality. To test the importance of each factor, the IEEE Cellphone Image Quality (CPIQ) group has created psychophysical studies for all six image quality factors described in the first version of IEEE P1858. In this way, a connection can be made between the physical measurement of an image quality aspect and the perceived quality.

Digital Library: EI
Published Online: January 2017
Pages 46–51, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

Defective pixels degrade the quality of the images produced by digital imagers. If those pixels are not corrected early in the image processing pipeline, demosaicing and filtering operations spread them into colored clusters that are detrimental to image quality. This paper presents a robust defective-pixel detection and correction solution for Bayer imaging systems. The detection mechanism is designed to robustly identify singlets and couplets of hot pixels, cold pixels, or a mixture of both types, and achieves high defect detection rates. The correction mechanism is designed to be detail-preserving and robust to false positives, and yields high image quality. Both mechanisms are computationally cheap and easy to tune. Experimental results demonstrate these merits and show that the solution outperforms conventional correction methods.
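A minimal sketch of singlet detection on a Bayer mosaic, where same-channel neighbors sit two pixels apart; the margin parameter and the median-based correction are illustrative stand-ins, not the paper's exact mechanism:

```python
import numpy as np

def correct_singlets(bayer, margin=0):
    # Same-color Bayer neighbors are 2 pixels away in each direction.
    p = np.pad(bayer.astype(float), 2, mode='reflect')
    h, w = bayer.shape
    neigh = np.stack([p[2 + dy:2 + dy + h, 2 + dx:2 + dx + w]
                      for dy in (-2, 0, 2) for dx in (-2, 0, 2)
                      if (dy, dx) != (0, 0)])
    hot = bayer > neigh.max(axis=0) + margin    # brighter than all peers
    cold = bayer < neigh.min(axis=0) - margin   # darker than all peers
    # Replace flagged pixels with the median of their same-color peers.
    return np.where(hot | cold, np.median(neigh, axis=0), bayer)
```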

Digital Library: EI
Published Online: January 2017
Pages 52–57, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

We have developed a laser interferometer for precise measurement of the pixel MTF and pixel crosstalk of camera sensors. One advantage of our interferometric method for measuring sensor MTF is that the sinusoidal illumination pattern is formed directly on the sensor rather than projected through a lens, allowing a precise measurement of sensor MTF and crosstalk unaltered by lens effects. Another advantage is that we measure MTF over a wide range of spatial frequencies, reaching well above the Nyquist frequency. We discuss the theory behind the expected and observed sensor performance and present our experimental results. A comparison with the slanted-edge method shows that our approach achieves better precision and covers a wider range of frequencies.
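The measurement principle reduces to reading the contrast of the fringe pattern off the sensor. A sketch of extracting one MTF sample from a row of pixel values under sinusoidal illumination, assuming the projected fringes have unit contrast:

```python
import numpy as np

def mtf_sample(row):
    # row: 1D sensor response to sinusoidal fringes of one spatial
    # frequency, formed directly on the sensor (no lens in the path).
    spec = np.fft.rfft(row - row.mean())
    k = np.argmax(np.abs(spec))                  # dominant fringe bin
    amplitude = 2.0 * np.abs(spec[k]) / len(row) # sinusoid amplitude
    return amplitude / row.mean()                # Michelson contrast
```

Sweeping the fringe period, including past the Nyquist frequency, traces out the full MTF curve.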

Digital Library: EI
Published Online: January 2017
Pages 58–65, © Society for Imaging Science and Technology 2017
Volume 29
Issue 15

Time domain continuous imaging (TDCI) centers on capturing and representing time-varying image data not as a series of frames, but as a compressed continuous waveform per pixel. A high-dynamic-range (HDR) image can be computationally synthesized from TDCI data to represent any virtual exposure interval covered by the waveforms, allowing both the exposure start time and the shutter speed to be selected arbitrarily after capture. This also enables extraction of video with arbitrary frame rate and shutter angle. This paper presents the design, and discusses the performance, of the first complete, fully open-source infrastructure supporting experimental use of TDCI: TIK (Temporal Imaging from Kentucky, or Temporal Image Kontainer). The system not only processes TDCI .tik files, but also converts conventional video files and still image sequences into TDCI .tik files.
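A minimal sketch of the virtual-exposure idea, assuming one pixel's waveform is stored as piecewise-constant segments (the .tik container format itself is not modeled):

```python
import numpy as np

def virtual_exposure(times, values, t0, t1):
    # times:  sorted change points of the pixel's radiance waveform,
    #         with times[0] <= t0; values[i] holds from times[i] until
    #         times[i+1], and the last value holds through t1.
    # Integrate the waveform over [t0, t1] and normalize by duration,
    # yielding the pixel value for an exposure chosen after capture.
    edges = np.clip(np.asarray(times, dtype=float), t0, t1)
    seg = np.diff(np.append(edges, t1))   # per-segment overlap with window
    return float(np.dot(values, seg)) / (t1 - t0)
```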

Digital Library: EI
Published Online: January 2017

Keywords

ADAPTIVE FRAME RATE; BACKGROUND ESTIMATION; BACKGROUND CLEANING; CANON POWERSHOT; CAMERA CALIBRATION; CELL PHONE; COLOR ACCURACY; COLOR MANAGEMENT; CHDK (CANON HACK DEVELOPMENT KIT); COLOR CORRECTION MATRIX; CROSSTALK; CHARACTERIZATION; CAMERA ARRAYS; COLOR; COMPUTATIONAL PHOTOGRAPHY; CAMERA PIPELINE; DISOCCLUSION ARTIFACTS; DISPARITY ESTIMATION; DISPARITY; DEFECT PIXEL CORRECTION; HIGH FRAME RATE; HIGH-SPEED VIDEO; IMAGE QUALITY; IMAGE SIGNAL PROCESSING PIPELINE; IMAGE RESTORATION; INTERFEROMETER; MEASUREMENT; MTF; MRF; NOISE; PROFILING; PHOTOGRAMMETRY; RANGE CAMERA; RGBD FUSION; ROBUST PCA; RGBD PROCESSING; RESOLUTION; SENSITIVITY ANALYSIS; SCENE SEGMENTATION; STEREO; SPECTRAL SENSITIVITY; TIME DOMAIN CONTINUOUS IMAGING; TDCI (TIME DOMAIN CONTINUOUS IMAGING); TEXTURE; TIK; VIDEO COMPRESSION; VIEW SYNTHESIS; VIDEO COMPOSITING; 3D CAMERAS