Image
Pages 1 - 4,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

The paper presents a new low-complexity edge-directed image interpolation algorithm. The algorithm uses structure tensor analysis to distinguish edges from textured areas and to find local structure direction vectors. The vectors are quantized into six directions, and an individual adaptive interpolation kernel is used for each direction. Experimental results show that the proposed method achieves high performance without introducing artifacts.
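The first stage of the abstract's pipeline (structure tensor analysis, then quantization of the local orientation into six directions) can be sketched as follows. This is a generic illustration, not the paper's implementation; the window size, threshold, and function names are illustrative assumptions.

```python
import numpy as np

def structure_directions(img, n_dirs=6, edge_thresh=1e-3):
    """Estimate local orientation via the 2x2 structure tensor and
    quantize it into n_dirs bins (illustrative parameters)."""
    gy, gx = np.gradient(img.astype(float))
    # Structure tensor components, smoothed over a 3x3 window.
    k = np.ones((3, 3)) / 9.0
    def smooth(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]] * k[i, j]
                   for i in range(3) for j in range(3))
    jxx, jxy, jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    # Dominant orientation (angle of the major eigenvector) and a
    # coherence measure that separates edges from flat/textured areas.
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)   # in [-pi/2, pi/2]
    coherence = np.hypot(jxx - jyy, 2 * jxy)
    # Quantize orientation into n_dirs bins over [0, pi).
    bins = np.floor(((theta + np.pi / 2) / np.pi) * n_dirs).astype(int) % n_dirs
    is_edge = coherence > edge_thresh
    return bins, is_edge
```

In a full interpolator, `bins` would select one of the six adaptive kernels and `is_edge` would gate between the edge-directed and the plain (e.g. bicubic) branch.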

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

A single-camera one-shot multispectral imaging system with a multispectral filter array (MSFA) has the potential to enable fast and low-cost multispectral imaging. Because the restoration accuracy of images acquired with such a system depends on the MSFA, optimization of the MSFA is required. Conventional methods design a versatile MSFA that does not depend on the imaging object. In practice, however, the imaging object is often known before the MSFA is designed; an MSFA designed for that object can achieve higher restoration accuracy than the versatile MSFAs of conventional methods. In this paper, we propose an MSFA optimization method that uses training data of the imaging object. The proposed method optimizes the observation wavelengths and the filter arrangement using simulated annealing. Experimental results demonstrate that the proposed method quantitatively outperforms conventional methods.
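Simulated annealing over filter arrangements, as named in the abstract, might look like the sketch below. The paper scores arrangements by restoration accuracy on training data; here a simple neighbor-clustering penalty stands in as the cost, and all parameters are illustrative assumptions.

```python
import numpy as np

def anneal_msfa(n_bands=4, size=4, iters=2000, t0=1.0, cool=0.995, seed=0):
    """Simulated-annealing sketch for an MSFA filter arrangement.
    Proxy cost: count of identical neighbouring filters (the real
    method would evaluate restoration accuracy instead)."""
    rng = np.random.default_rng(seed)
    arr = np.arange(size * size) % n_bands   # equal count of each filter
    rng.shuffle(arr)
    arr = arr.reshape(size, size)

    def cost(a):
        return np.sum(a[:, 1:] == a[:, :-1]) + np.sum(a[1:, :] == a[:-1, :])

    t, c = t0, cost(arr)
    best, best_c = arr.copy(), c
    for _ in range(iters):
        # Propose swapping two random cells (preserves filter counts).
        (r1, c1), (r2, c2) = rng.integers(0, size, (2, 2))
        arr[r1, c1], arr[r2, c2] = arr[r2, c2], arr[r1, c1]
        c_new = cost(arr)
        if c_new <= c or rng.random() < np.exp((c - c_new) / t):
            c = c_new
            if c < best_c:
                best, best_c = arr.copy(), c
        else:  # reject: undo the swap
            arr[r1, c1], arr[r2, c2] = arr[r2, c2], arr[r1, c1]
        t *= cool   # geometric cooling schedule
    return best, best_c
```

Swapping cells rather than resampling them keeps the number of filters per band fixed, which is the usual constraint in mosaic design.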

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

Knowledge of the user's Quality of Experience (QoE) for the accessed services is of crucial importance for the robust design and adaptation of multimedia streaming networks. In this article, a QoE estimation metric for video streaming services is presented. The proposed metric requires no information about the original video or about the impairments affecting the communication channels. Results show that the proposed method estimates video QoE efficiently.

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

This paper proposes an image fusion method for a single-sensor RGB+NIR (near-infrared) MFA (multispectral filter array) system. Unlike conventional color filter arrays such as the Bayer pattern, an MFA sensor captures both visible-band and NIR-band information with a single sensor, so no image registration is needed. The main reasons for using MFA sensors are to increase object discrimination in the target images and to improve the brightness of visible-band images in extremely low-light conditions by exploiting the NIR band. The fusion method described here consists of resolution fusion and color fusion, both based on texture decomposition. In the experiments, we compared the visible-band input image with the output image fused with the NIR band. The fusion results showed higher object discrimination and less noise than the input visible images.
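The texture-decomposition idea (keep the visible base layer, inject NIR detail) can be illustrated with a minimal sketch. The paper's separate resolution-fusion and color-fusion stages are not reproduced; the box-filter base/detail split and all parameter names here are assumptions.

```python
import numpy as np

def box_blur(img, r=2):
    """Mean filter over a (2r+1)^2 window with edge padding."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_visible_nir(y_vis, nir, detail_gain=1.0):
    """Texture-decomposition fusion sketch for a luminance channel:
    visible base layer + NIR detail layer."""
    base_vis = box_blur(y_vis)               # low-frequency structure
    detail_nir = nir - box_blur(nir)         # NIR texture/detail
    return base_vis + detail_gain * detail_nir
```

In low light the NIR channel is typically less noisy, so replacing the visible detail layer with NIR detail raises object discrimination without amplifying visible-band noise.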

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

In this article we present a method for inpainting background parts of a scene in videos. The inpainted background can be used to render high-quality stereoscopic views: completing the background occluded by foreground objects is necessary to resolve parallax occlusions. Known inpainting algorithms, e.g. Criminisi, 2004 [1] or Telea, 2004 [2], were primarily designed for still images, and frame-by-frame spatial inpainting of video inevitably leads to flickering. Our hybrid inpainting algorithm exploits the fact that some parts of the background are visible in temporally close frames, while others can only be filled in with spatial inpainting. The algorithm takes foreground masks and video frames as input, performs motion analysis to determine where spatial and temporal inpainting should be used, and then applies both approaches to find a plausible reconstruction of the background. The output is a video sequence with the marked foreground object(s) removed.
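A minimal sketch of the hybrid idea, for grayscale frames: borrow occluded background from temporally close frames where it is visible, and fall back to a simple diffusion fill (a crude stand-in for Criminisi/Telea-style spatial inpainting) for pixels that are never observed. Names and parameters are illustrative, not the paper's.

```python
import numpy as np

def hybrid_inpaint(frames, masks, n_iter=200):
    """frames: (T, H, W) grayscale video; masks: (T, H, W) bool,
    True where the foreground object hides the background."""
    frames = frames.astype(float)
    out = []
    for t in range(len(frames)):
        filled = frames[t].copy()
        hole = masks[t].astype(bool).copy()
        # Temporal pass: copy background from the nearest frame
        # where the same pixel is not masked.
        for dt in range(1, len(frames)):
            for s in (t - dt, t + dt):
                if 0 <= s < len(frames):
                    take = hole & ~masks[s].astype(bool)
                    filled[take] = frames[s][take]
                    hole &= ~take
            if not hole.any():
                break
        # Spatial pass: diffuse known values into never-seen pixels.
        for _ in range(n_iter if hole.any() else 0):
            p = np.pad(filled, 1, mode='edge')
            avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
                   p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
            filled[hole] = avg[hole]
        out.append(filled)
    return np.stack(out)
```

Preferring the temporal source keeps the fill consistent across frames, which is exactly what suppresses the flickering of frame-by-frame spatial inpainting.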

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

After skin cancer, prostate cancer is the most common cancer in American men. This paper introduces a new database consisting of a large sample of patients imaged with multispectral photoacoustic imaging. As an alternative to the standard two-class labeling (malignant, normal), our voxel-based ground-truth diagnosis uses three classes (malignant, benign, normal). We explore deep neural networks, experiment with three popular activation functions, and perform sub-feature group analysis. Our initial results serve as a benchmark on this database. Greedy feature selection recognizes and eliminates noisy features, and ablation-based feature ranking at the feature and group level can reduce clinician effort; the results are contrasted with the medical literature. Our database is made freely available to the scientific community.
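Greedy forward feature selection of the kind the abstract mentions can be sketched as follows. The scoring function (e.g. validation accuracy of a classifier on the selected columns) is supplied by the caller; nothing here is specific to the paper's database.

```python
import numpy as np

def greedy_forward_select(X, y, score_fn, k):
    """Repeatedly add the single feature that most improves
    score_fn(X[:, subset], y), until k features are chosen.
    score_fn is any user-supplied quality measure (higher = better)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = score_fn(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

Running the same loop in reverse (drop the feature whose removal hurts least) gives the ablation-style ranking the abstract refers to; noisy features are the ones whose removal improves the score.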

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

Image denoising is commonly regarded as a problem of fundamental importance in the imaging sciences. The last few decades have witnessed the advent of a wide spectrum of denoising algorithms capable of dealing with images and noise of various types and statistical natures. It is usually the case, however, that the effectiveness of a given denoising procedure and the complexity of its numerical implementation increase pro rata, which is often the reason why more advanced solutions are avoided when images are relatively large and/or acquired at high frame rates. As a result, substantial effort has recently been expended on efficient means of image denoising whose computational complexity is comparable to that of standard linear filtering. One such solution is Guided Image Filtering (GIF), a recently proposed denoising technique that combines outstanding performance with real-time implementability. Unfortunately, the standard implementation of GIF is known to perform poorly when the noise statistics deviate from those of additive Gaussian noise. To overcome this deficiency, in this note we propose a number of modifications to the filter that allow it to achieve stable and accurate results under impulse and Poisson noise.
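For reference, the baseline grayscale guided image filter of He et al. (the starting point the note modifies) can be written in a few lines using only box filters, which is what makes it real-time implementable; the note's modifications for impulse and Poisson noise are not reproduced here.

```python
import numpy as np

def _box(a, r):
    """Mean filter over a (2r+1)^2 window with edge padding."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    out = np.zeros_like(a, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-2):
    """Gray-scale guided image filter: output is a locally linear
    function a*I + b of the guide I, fitted to src per window."""
    I, p = guide.astype(float), src.astype(float)
    mean_I, mean_p = _box(I, r), _box(p, r)
    cov_Ip = _box(I * p, r) - mean_I * mean_p
    var_I = _box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)       # eps regularizes flat regions
    b = mean_p - a * mean_I
    return _box(a, r) * I + _box(b, r)
```

With `guide = src` (self-guided filtering) this behaves as an edge-preserving smoother, which is the usual denoising configuration; production implementations replace the loop in `_box` with an O(1) integral-image box filter.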

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

The task of alternative (faster and with a higher compression ratio) coding of discrete cosine transform (DCT) coefficients within JPEG-based image compression is considered. In the processing chain, we propose to apply recursive group coding (RGC) as an alternative to arithmetic or Huffman coding. In contrast to those techniques, RGC can efficiently code symbols from very large alphabets (each 8x8 block of quantized DCT coefficients can be represented as one 64-byte or 128-byte symbol). A comparative analysis of the standard JPEG and the proposed modification (for three images from three different digital cameras) is carried out using six different quantization tables. RGC is shown to combine low computational complexity and high compression speed with a higher compression ratio (CR) than standard JPEG. The benefit in CR is larger for smaller quantization steps (QSs), which mainly correspond to the super-high-quality (SHQ) mode, and can reach up to 10%. The benefit also exists for uniform quantization tables. The proposed coding method can be used for additional compression of JPEG images when coding traffic over communication lines: the JPEG data are partly decoded (down to the level of the quantized DCT coefficients) and then recoded with RGC.
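The "large alphabet" symbols RGC operates on are 8x8 blocks of quantized DCT coefficients. Producing them can be sketched as below (the RGC coder itself is not reproduced); the quantization table is the standard JPEG luminance example table from the specification's Annex K.

```python
import numpy as np

# JPEG luminance quantization table (example table, Annex K of ITU-T T.81).
Q_LUM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def quantized_block(block, q=Q_LUM):
    """8x8 pixel block -> 64 quantized DCT coefficients: one
    large-alphabet symbol for an entropy coder such as RGC."""
    C = dct_matrix(8)
    coeff = C @ (block.astype(float) - 128.0) @ C.T   # level shift + 2D DCT
    return np.round(coeff / q).astype(int)
```

The "partly decode, then recode" scheme in the abstract amounts to entropy-decoding a JPEG stream back to exactly these integer blocks and re-encoding them with RGC, so the pixel data are never touched and the recompression is lossless with respect to the original JPEG.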

Digital Library: EI
Published Online: February  2016
Image
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 15

Toward an autonomous guitar teaching system, this paper proposes two video-analysis-based methods: (1) pressed-chord recognition and (2) fingertip tracking. For (1), we propose an algorithm that extracts finger contours and chord changes so that the chords pressed by the guitar player can be recognized. For (2), we propose an algorithm that tracks the fingertips by continuously monitoring the appearance and disappearance of fingertip candidate regions. Experimental results demonstrate that the two proposed modules are robust in complex contexts such as cluttered backgrounds and varying illumination, with promising accuracy for both fingertip tracking and pressed-chord recognition.
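Tracking by monitoring the appearance and disappearance of candidate regions could be sketched as a nearest-neighbor association loop like the one below. The data layout, thresholds, and survival policy are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def track_fingertips(candidates_per_frame, max_dist=20.0, max_missing=3):
    """candidates_per_frame: list (one entry per frame) of (x, y)
    fingertip-candidate positions. Each candidate is matched to the
    nearest active track; unmatched tracks survive a few frames
    (temporary disappearance), unmatched candidates start new tracks
    (appearance)."""
    tracks = []  # each: {'pos': (x, y), 'path': [...], 'missing': int}
    for cands in candidates_per_frame:
        unmatched = list(cands)
        for tr in tracks:
            if not unmatched:
                tr['missing'] += 1
                continue
            d = [np.hypot(c[0] - tr['pos'][0], c[1] - tr['pos'][1])
                 for c in unmatched]
            j = int(np.argmin(d))
            if d[j] <= max_dist:               # re-detection of this tip
                tr['pos'] = unmatched.pop(j)
                tr['path'].append(tr['pos'])
                tr['missing'] = 0
            else:
                tr['missing'] += 1             # temporarily occluded
        for c in unmatched:                    # appearance: new track
            tracks.append({'pos': c, 'path': [c], 'missing': 0})
        tracks = [t for t in tracks if t['missing'] <= max_missing]
    return tracks
```

Letting a track survive `max_missing` frames is what bridges the brief disappearances that occur when a fingertip is occluded by the fretboard or another finger.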

Digital Library: EI
Published Online: February  2016

Keywords

activation functions, additional JPEG compression, backward, CNN-based video features, combined metrics, deep learning, DNN, entropy coding, feature selection, forward, full-reference image visual quality assessment, greedy, heuristic, JPEG compression, multilabel graph-cut, over-segmentation, recursive group coding, supervised, supervoxel, video segmentation