Page 010101-1,  © Society for Imaging Science and Technology 2021
Digital Library: JIST
Published Online: January  2021
Pages 010401-1 - 010401-10,  © Society for Imaging Science and Technology 2021
Volume 65
Issue 1
Abstract

This article proposes an implementation of an action recognition system that allows the user to perform operations in real time. The Microsoft Kinect (RGB-D) sensor plays a central role in this system, as it directly provides human skeletal joint information. Computationally efficient skeletal joint position features are used to describe each action. The dynamic time warping (DTW) algorithm is widely used in applications such as sequence similarity search, classification, and speech recognition, and it provides the highest accuracy among comparable algorithms. However, the computational time of the DTW algorithm is a major drawback in real-world applications. To speed up the basic DTW algorithm, a novel three-dimensional dynamic time warping (3D-DTW) classification algorithm is proposed in this work. The proposed 3D-DTW algorithm is implemented using both software modeling and field programmable gate array (FPGA) hardware modeling techniques. The performance of the 3D-DTW algorithm is evaluated on 12 actions, each described by a feature vector of size 576 over 32 frames. Our software modeling results show that the proposed algorithm performs action classification accurately. However, the computation time of the 3D-DTW algorithm increases linearly with either the number of actions or the feature vector size of each action. For further speedup, an efficient custom 3D-DTW intellectual property (IP) core is developed using the Xilinx Vivado high-level synthesis (HLS) tool to accelerate the 3D-DTW algorithm in FPGA hardware. The CPU-centric software model of the 3D-DTW algorithm is compared with its hardware-accelerated custom IP core; the developed 3D-DTW custom IP core is 40 times faster than its software counterpart. As the hardware results are promising, a parallel hardware-software co-design architecture is proposed on the Xilinx Zynq-7020 System on Chip (SoC) FPGA for action recognition. The HLS simulation and synthesis results are provided to support the practical implementation of the proposed architecture. Our proposed approach outperforms many existing state-of-the-art DTW-based action recognition techniques, achieving the highest accuracy of 97.77%.
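For readers unfamiliar with DTW-based classification, the sketch below illustrates the core idea on per-frame skeletal feature vectors: a query sequence is assigned the label of the template with the smallest warping cost. The per-frame Euclidean distance, the template dictionary, and the dimensions (12 actions, 32 frames, 576 features) follow the abstract, but this is a generic DTW illustration, not the paper's 3D-DTW formulation or its FPGA implementation.

```python
# Minimal sketch of DTW-based action classification on per-frame skeletal
# feature vectors (names and distance metric are illustrative assumptions).
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic DTW cost between two sequences of frame-wise feature vectors."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Assign the action label of the nearest template under DTW."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Example: 12 actions, 32 frames each, 576-dimensional joint-position features.
rng = np.random.default_rng(0)
templates = {f"action_{k}": rng.normal(size=(32, 576)) for k in range(12)}
query = templates["action_3"] + 0.05 * rng.normal(size=(32, 576))
print(classify(query, templates))  # -> "action_3"
```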

Digital Library: JIST
Published Online: January  2021
Pages 010501-1 - 010501-9,  © Society for Imaging Science and Technology 2021
Volume 65
Issue 1
Abstract

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited payload capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which reduces the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein the problem of registering point cloud data with image data is converted into one of matching feature points between two images. First, a point cloud is selected and used to produce an intensity image. Subsequently, corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its effectiveness.
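As a rough illustration of the registration step described above, the following sketch matches feature points between a LiDAR-derived intensity image and an optical frame with OpenCV, then recovers exterior orientation with a RANSAC PnP solve. The helper `pixel_to_xyz` and the pinhole model with intrinsics `K` are assumptions standing in for the paper's collinearity-equation solution based on image position and orientation.

```python
# Hedged sketch of image-to-point-cloud registration via feature matching.
import cv2
import numpy as np

def register(intensity_img, optical_img, pixel_to_xyz, K):
    """Estimate rotation and translation of the optical camera w.r.t. the cloud.

    pixel_to_xyz: assumed callable mapping an intensity-image pixel (u, v) to the
    3D point it was rasterized from (kept during the point-cloud projection).
    K: 3x3 camera intrinsic matrix of the optical image.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    obj_pts = np.float32([pixel_to_xyz(*kp1[m.queryIdx].pt) for m in matches])
    img_pts = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC-robust exterior orientation (same role as the collinearity solve).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return cv2.Rodrigues(rvec)[0], tvec
```

With the exterior orientation known, each laser point can be projected into the optical frame selected by its GNSS time index and assigned that frame's RGB value to build the true-color point cloud.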

Digital Library: JIST
Published Online: January  2021
Pages 010502-1 - 010502-11,  © Society for Imaging Science and Technology 2021
Volume 65
Issue 1
Abstract

This article presents a super-resolution (SR) method dedicated to tomographic imaging, where an image is reconstructed from projections obtained with low-resolution detectors. In this work, upscaling the image resolution is performed by backprojecting the projection measurements into a high-resolution image space modeled on a finer grid. Since this upscaling process often creates irregular pixels, it is important to employ regularizers that can suppress these irregularities while preserving fine details. Here we consider two types of regularizers, non-local and local, each of which has been used independently for image reconstruction and has its own advantages and disadvantages depending on the edge structures in the underlying image. To achieve a good compromise between the two, we selectively combine them using a space-variant weighting factor, which is systematically determined by our own criterion for classifying edges. The experimental results show that our proposed SR method improves reconstruction accuracy across various image quality assessments and has the potential to be useful in a wide range of imaging applications.
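A minimal sketch of the space-variant combination of local and non-local regularizers is given below. The total-variation-style local penalty, the toy shift-based non-local penalty, and the per-pixel `edge_weight` map are illustrative assumptions; in the paper, the weighting is determined by the authors' own edge-classification criterion.

```python
# Illustrative space-variant blend of a local and a non-local regularizer.
import numpy as np

def local_tv(x):
    """Isotropic total-variation-like local penalty, evaluated per pixel."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return np.sqrt(gx**2 + gy**2)

def nonlocal_penalty(x, shifts=((0, 5), (5, 0), (5, 5))):
    """Toy non-local penalty: squared differences to a few distant shifts."""
    p = np.zeros_like(x)
    for dy, dx in shifts:
        p += (x - np.roll(x, (dy, dx), axis=(0, 1)))**2
    return p / len(shifts)

def combined_regularizer(x, edge_weight):
    """edge_weight in [0, 1]: closer to 1 where the edge classifier favors the
    local term, closer to 0 where non-local similarity is more reliable."""
    return np.sum(edge_weight * local_tv(x) + (1.0 - edge_weight) * nonlocal_penalty(x))

# Example with a crude edge map standing in for the paper's edge criterion.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
w = local_tv(img) / (local_tv(img).max() + 1e-12)
print(combined_regularizer(img, w))
```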

Digital Library: JIST
Published Online: January  2021
Pages 010503-1 - 010503-13,  © Society for Imaging Science and Technology 2021
Volume 65
Issue 1
Abstract

The efficiency of a texture classification procedure depends on the color space in which it is performed. Classification in a perceptually meaningful space requires chromatic coordinates obtained from a calibrated acquisition setup. The authors assess the impact of camera calibration, within a generic color picture acquisition workflow, on the performance of a number of texture classification techniques. An image calibration pipeline is established and applied to a texture database, and the accuracy of the classification algorithms is evaluated at each step. The results show that the most significant step of the workflow is color rendering, although the effect is relatively small. Hence, precise scene-referred characterization of the raw data from an acquisition camera is not essential for most texture classification tasks. In addition, working with output-referred RGB data is likely to be adequate for the majority of classification tasks.
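The evaluation described above can be pictured as a loop over pipeline stages, scoring a classifier on features extracted from the images as rendered at each stage. The sketch below uses a placeholder histogram descriptor and an SVM; the stage names, the `color_histogram_features` descriptor, and the classifier are assumptions, not the authors' exact setup.

```python
# Hedged sketch of per-stage accuracy evaluation along a calibration pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def color_histogram_features(images, bins=8):
    """Simple per-channel histograms (images assumed scaled to [0, 1]);
    stands in for a proper texture descriptor."""
    feats = []
    for img in images:  # img: HxWx3 array in the current stage's color space
        h = [np.histogram(img[..., c], bins=bins, range=(0, 1))[0] for c in range(3)]
        feats.append(np.concatenate(h) / img[..., 0].size)
    return np.array(feats)

def evaluate_pipeline(stages, labels):
    """stages: dict mapping stage name -> list of images rendered at that stage."""
    for name, images in stages.items():
        X = color_histogram_features(images)
        acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
        print(f"{name}: {acc:.3f}")

# Synthetic example: two pipeline stages, three texture classes of ten samples each.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 10)
stages = {
    "raw camera RGB": [rng.random((32, 32, 3)) for _ in labels],
    "color rendered": [rng.random((32, 32, 3)) for _ in labels],
}
evaluate_pipeline(stages, labels)
```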

Digital Library: JIST
Published Online: January  2021
Pages 010504-1 - 010504-10,  © Society for Imaging Science and Technology 2021
Volume 65
Issue 1
Abstract

Radiation information is essential to land cover classification, but general deep convolutional neural networks (DCNNs) rarely use it to advantage. Additionally, the limited amount of available remote sensing data restricts the efficiency of DCNN models, though this can be overcome by data augmentation. However, standard data augmentation methods, which involve only operations such as rotation and translation, have little effect on radiation information and ignore the rich information contained in the image data. In this article, the authors propose a feasible feature-based data augmentation method, which, prior to augmentation, extracts spectral features that reflect radiation information as well as geometric and texture features that reflect image information. Through feature extraction, this method indirectly enhances radiation information and increases the utilization of image information. Classification accuracy improves from 80.20% to 89.20%, which further verifies the effectiveness of the method.
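The sketch below illustrates the feature-based augmentation idea: derive spectral and texture feature channels first, stack them with the raw bands, and only then apply geometric augmentation. The NDVI-style spectral index, the 3x3 local-variance texture measure, and the band indices are assumptions used for illustration, not the authors' exact feature set.

```python
# Minimal sketch of feature-based augmentation for multispectral patches.
import numpy as np
from scipy.ndimage import uniform_filter

def add_feature_channels(img, red=2, nir=3):
    """img: HxWxC multispectral patch; returns the patch with extra feature channels."""
    r = img[..., red].astype(float)
    n = img[..., nir].astype(float)
    ndvi = (n - r) / (n + r + 1e-6)  # spectral feature reflecting radiation information
    gray = img.mean(axis=-1).astype(float)
    # 3x3 local variance as a crude texture feature reflecting image information.
    local_var = uniform_filter(gray**2, size=3) - uniform_filter(gray, size=3)**2
    return np.dstack([img, ndvi, local_var])

def augment(patch, rng):
    """Standard geometric augmentation applied after the feature channels are stacked."""
    patch = np.rot90(patch, k=int(rng.integers(4)))
    if rng.random() < 0.5:
        patch = np.flip(patch, axis=1)
    return patch

# Example: a 64x64 patch with 4 bands gains NDVI and texture channels before augmentation.
rng = np.random.default_rng(0)
patch = rng.random((64, 64, 4))
print(augment(add_feature_channels(patch), rng).shape)  # (64, 64, 6)
```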

Digital Library: JIST
Published Online: January  2021
Pages 010505-1 - 010505-15,  © Society for Imaging Science and Technology 2021
Volume 65
Issue 1
Abstract

To enhance the video transmission security of a physical layer over wireless optical communication, an optical switch configured in front of an arrayed waveguide grating (AWG) router is proposed for implementing a reconfigurable wavelength hopping scheme over a wavelength division multiplexing network to protect against eavesdropping. In the practical experiment of the AWG/optical-switch-based random wavelength hopping scheme, a pseudorandom noise generator algorithm was designed in simulation and embedded into master–slave microprocessors (e.g., Arduino chips) to generate a time series of electrical signals for triggering the optical switch. The path of the optical switch was varied randomly to change the spatial transmission of the AWG router, so that varying wavelengths were assigned as carriers for authorized users, realizing the AWG/optical-switch-based wavelength hopping scheme. An optical spectrum analyzer and an oscilloscope were used for monitoring and measurement. The experimental results indicated that eavesdroppers could not accurately interpret analog, digital, audio, and uncompressed high-definition multimedia interface signals of 10 MHz, 1 MHz, 3.125 MHz (encoded into 6.25 MHz), and 1.485 GHz, respectively. According to the experimental results, authorized and unauthorized users are characterized by a large difference in the retrieved signal energy, which ensures the safety and privacy of the proposed AWG/optical-switch-based reconfigurable wavelength hopping scheme over a wireless optical communication network.
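Conceptually, the hopping control can be pictured as a pseudorandom generator whose output selects the optical-switch path and, through the AWG routing, the carrier wavelength assigned to the authorized user. The sketch below uses a 4-bit linear-feedback shift register and a hypothetical wavelength map; it is not the embedded Arduino firmware described in the experiment.

```python
# Conceptual sketch of pseudorandom wavelength hopping control.
# Tap positions and the path-to-wavelength map are illustrative assumptions.

def lfsr_hops(seed=0b1011, taps=(3, 2), n_paths=4, steps=16):
    """Yield switch-path indices from a maximal-length 4-bit Fibonacci LFSR."""
    state = seed & 0xF
    for _ in range(steps):
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1          # XOR the tapped register bits
        state = ((state << 1) | bit) & 0xF   # shift in the feedback bit
        yield state % n_paths                # map LFSR state to a switch path

# Each path routes the signal into a different AWG port, so the carrier
# wavelength changes per hop (hypothetical ITU-grid wavelengths below).
wavelengths_nm = [1550.12, 1550.92, 1551.72, 1552.52]
for path in lfsr_hops():
    print(f"switch path {path} -> carrier {wavelengths_nm[path]} nm")
```

An eavesdropper without the generator seed and tap configuration cannot predict which wavelength carries the signal at any given time, which is the intuition behind the large energy difference observed between authorized and unauthorized receivers.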

Digital Library: JIST
Published Online: January  2021