Pages 2-1 - 2-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 17

To improve workout efficiency and provide body-movement suggestions to users in a “smart gym” environment, we propose using a depth camera to capture a user’s body parts and multiple inertial sensors mounted on those body parts, from which a recurrent neural network structure generates deadlift behavior models. The contribution of this paper is threefold: 1) the multimodal sensing signals obtained from multiple devices are fused to generate the deadlift behavior classifiers, 2) the recurrent neural network structure analyzes information from the synchronized skeletal and inertial sensing data, and 3) a Vaplab dataset is generated for evaluating the deadlift behavior recognition capability of the proposed method.
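The fusion-then-recurrence idea in this abstract can be sketched in a few lines: synchronized skeletal and inertial features are concatenated per timestep and fed through a recurrent layer whose final hidden state drives a classifier. This is an illustrative sketch only; the layer sizes, feature dimensions, and the simple Elman-style recurrence below are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(skeleton_seq, imu_seq):
    """Concatenate per-timestep skeletal and inertial features.
    Assumes both streams are already resampled to a common frame rate."""
    return np.concatenate([skeleton_seq, imu_seq], axis=1)

class ElmanRNNClassifier:
    """Minimal recurrent classifier: h_t = tanh(W x_t + U h_{t-1} + b)."""
    def __init__(self, n_in, n_hidden, n_classes):
        s = 1.0 / np.sqrt(n_hidden)
        self.W = rng.uniform(-s, s, (n_hidden, n_in))
        self.U = rng.uniform(-s, s, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        self.V = rng.uniform(-s, s, (n_classes, n_hidden))

    def forward(self, x_seq):
        h = np.zeros(self.U.shape[0])
        for x in x_seq:                      # roll the recurrence over time
            h = np.tanh(self.W @ x + self.U @ h + self.b)
        logits = self.V @ h                  # classify from the last state
        e = np.exp(logits - logits.max())
        return e / e.sum()                   # softmax over behavior classes

# 30 synchronized frames: 10 skeletal + 6 inertial features per frame (assumed sizes)
skel = rng.normal(size=(30, 10))
imu = rng.normal(size=(30, 6))
probs = ElmanRNNClassifier(n_in=16, n_hidden=32, n_classes=3).forward(fuse(skel, imu))
print(probs.shape)
```

A production version would replace the hand-rolled recurrence with an LSTM/GRU and learn the weights, but the data flow (fuse, recur, classify) is the same.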

Digital Library: EI
Published Online: January  2020
Pages 3-1 - 3-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 17

Multiple Sclerosis (MS) is a chronic, often disabling, autoimmune disease affecting the central nervous system, characterized by demyelination and neuropathic alterations. Magnetic Resonance (MR) images play a pivotal role in the diagnosis and screening of MS. MR images identify and localize demyelinating lesions (or plaques) and possible associated atrophic lesions, whose MR appearance correlates with the evolution of the disease. We propose a novel MS lesion segmentation method for MR images, based on Convolutional Neural Networks (CNNs) and partial self-supervision, and study the pros and cons of using self-supervision for the current segmentation task. Investigating transferability by freezing the first convolutional layers, we discovered that improvements are obtained when the CNN is retrained from the first layers. We believe these results suggest that MRI segmentation is a singular task requiring high-level analysis from the very first stages of the vision process, as opposed to vision tasks aimed at day-to-day life such as face recognition or traffic sign classification. Segmentation quality was evaluated on full-size binary maps assembled from predictions on image patches from an unseen database.
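The transfer-learning comparison described here, freezing the first convolutional layers versus retraining from the first layers, comes down to masking parameter updates. A minimal sketch, with hypothetical layer names and a plain SGD step standing in for the paper's actual training procedure:

```python
import numpy as np

def sgd_step(params, grads, lr, frozen):
    """One gradient step that skips any layer named in `frozen`.
    Retraining 'from the first layers' corresponds to frozen == set()."""
    return {name: (p if name in frozen else p - lr * grads[name])
            for name, p in params.items()}

# Hypothetical three-layer network state, all weights 1.0, all grads 0.5
params = {"conv1": np.ones(3), "conv2": np.ones(3), "head": np.ones(3)}
grads = {name: np.full(3, 0.5) for name in params}

# Classic transfer learning: early features frozen, only the head adapts
transfer = sgd_step(params, grads, lr=0.1, frozen={"conv1", "conv2"})
# Full retraining from the first layers, which the paper found works better here
full = sgd_step(params, grads, lr=0.1, frozen=set())
print(transfer["conv1"][0], full["conv1"][0])
```

The frozen copy of `conv1` stays at 1.0 while the fully retrained copy moves to 0.95, which is exactly the distinction the abstract's experiment probes.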

Digital Library: EI
Published Online: January  2020
Pages 4-1 - 4-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 17

Activity recognition and pose estimation are, in general, closely related in practical applications, even though they are considered independent tasks. In this paper, we propose an artificial 3D coordinate representation and a CNN for combining activity recognition and pose estimation with 2D and 3D static/dynamic images (dynamic images are composed of a set of video frames). In other words, we show that the proposed algorithm can be used to solve both problems, activity recognition and pose estimation. An end-to-end optimization process shows that the proposed approach is superior to one that performs activity recognition and pose estimation separately. Performance is evaluated by calculating the recognition rate. The proposed approach enables us to perform learning procedures using different datasets.
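The joint formulation in this abstract amounts to a shared feature extractor feeding two task heads, one for activity classes and one for joint coordinates. A minimal sketch under assumed shapes; the frame-averaging "dynamic image", the linear heads, and the joint count are illustrative stand-ins for the paper's CNN:

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_backbone(frames):
    """Stand-in for a CNN backbone: collapse a (T, H, W) clip into one
    feature vector. Averaging frames is a crude dynamic image built
    from a set of video frames."""
    dynamic_image = frames.mean(axis=0)   # (H, W)
    return dynamic_image.reshape(-1)      # flatten to a feature vector

# Two heads share the same 64-dim features (assumed sizes)
W_act = rng.normal(size=(5, 64))    # 5 hypothetical activity classes
W_pose = rng.normal(size=(26, 64))  # 13 hypothetical joints * (x, y)

feat = shared_backbone(rng.normal(size=(8, 8, 8)))  # 8 frames of 8x8
activity_logits = W_act @ feat
pose_xy = (W_pose @ feat).reshape(13, 2)
print(activity_logits.shape, pose_xy.shape)
```

Training both heads against one backbone is what makes the end-to-end joint optimization possible, as opposed to running two separate networks.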

Digital Library: EI
Published Online: January  2020
Pages A17-1 - A17-4,  © Society for Imaging Science and Technology 2020
Digital Library: EI
Published Online: January  2020
Pages 34-1 - 34-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 17

This paper presents a novel method for accurately encoding 3D range geometry within the color channels of a 2D RGB image that allows the encoding frequency, and therefore the encoding precision, to be uniquely determined for each coordinate. The proposed method can thus be used to balance encoding precision against file size by encoding geometry along a normal distribution: encoding more precisely where the density of data is high and less precisely where it is low. Alternative distributions may be followed to produce encodings optimized for specific applications. In general, the precision of each point can be freely controlled or derived from an arbitrary distribution, enabling the method to be used within a wide range of applications.
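The core idea, a per-coordinate encoding frequency drawn from a distribution, can be sketched with a sinusoidal depth-to-color encoding. This is a simplification for illustration: a real phase-shifting encoding also needs additional information (e.g. a third channel) to unwrap the phase on decode, and the specific functions and period range below are assumptions, not the paper's formulation.

```python
import numpy as np

def encode_depth(z, z_min, z_max, periods):
    """Sinusoidally encode normalized depth into two 8-bit color channels.
    `periods` is the per-point encoding frequency: more periods means finer
    depth resolution at that point."""
    zn = (z - z_min) / (z_max - z_min)  # normalize depth to [0, 1]
    r = np.round(127.5 * (1 + np.cos(2 * np.pi * periods * zn)))
    g = np.round(127.5 * (1 + np.sin(2 * np.pi * periods * zn)))
    return r.astype(np.uint8), g.astype(np.uint8)

def gaussian_periods(z, p_lo=2.0, p_hi=32.0):
    """Assign more periods (higher precision) where depth values are densest,
    following a normal distribution centred on the mean depth."""
    mu, sigma = z.mean(), z.std() + 1e-9
    density = np.exp(-0.5 * ((z - mu) / sigma) ** 2)
    return p_lo + (p_hi - p_lo) * density

z = np.linspace(0.0, 1.0, 100)          # a synthetic depth map, flattened
r, g = encode_depth(z, 0.0, 1.0, gaussian_periods(z))
print(r.dtype, r.shape)
```

Swapping `gaussian_periods` for any other density function gives the "alternative distributions" the abstract mentions, without changing the encoder itself.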

Digital Library: EI
Published Online: January  2020
Pages 35-1 - 35-9,  © Society for Imaging Science and Technology 2020
Digital Library: EI
Published Online: January  2020
Pages 36-1 - 36-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 17

Applications ranging from simple visualization to complex design require 3D models of indoor environments. This has given rise to advancements in the field of automated reconstruction of such models. In this paper, we review several state-of-the-art metrics proposed for the geometric comparison of 3D models of building interiors. We evaluate their performance on a real-world dataset and propose a tailored metric that can be used to assess the quality of a reconstructed model. In addition, the proposed metric can easily be visualized to highlight regions or structures where reconstruction failed. To demonstrate the versatility of the proposed metric, we conducted experiments on various interior models by comparing them with ground-truth data created by expert Blender artists. The results of the experiments were then used to improve the reconstruction pipeline.
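One common family of geometric comparison metrics of the kind reviewed here is point-set distance, e.g. the symmetric Chamfer distance, whose per-point terms can be color-mapped onto the model to localize reconstruction failures, much like the visualizable metric the abstract proposes. The sketch below is a generic example of such a metric, not the paper's tailored one:

```python
import numpy as np

def chamfer(recon, truth):
    """Symmetric Chamfer distance between point sets (N, 3) and (M, 3).
    Also returns the per-point nearest-neighbour error on the
    reconstruction, which can be visualized as a heat map."""
    d = np.linalg.norm(recon[:, None, :] - truth[None, :, :], axis=-1)
    recon_to_truth = d.min(axis=1)   # accuracy of each reconstructed point
    truth_to_recon = d.min(axis=0)   # coverage of the ground truth
    return recon_to_truth.mean() + truth_to_recon.mean(), recon_to_truth

# Tiny example: one reconstructed vertex is off by 0.1 along z
recon = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0.0]])
truth = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0.1]])
score, per_point = chamfer(recon, truth)
print(round(score, 3))
```

The brute-force pairwise distance matrix is fine for small models; real pipelines would use a k-d tree for the nearest-neighbour queries.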

Digital Library: EI
Published Online: January  2020
