Spectral information obtained by hyperspectral sensors enables better characterization, identification, and classification of the objects in a scene of interest. Unfortunately, several factors have to be addressed in the classification of hyperspectral data, including the acquisition process, the high dimensionality of spectral samples, and the limited availability of labeled data. Consequently, it is of great importance to design hyperspectral image classification schemes that can deal with the curse of dimensionality while producing accurate results, even from a limited number of training samples. To that end, we propose a novel machine learning technique that addresses the hyperspectral image classification problem by employing the state-of-the-art scheme of Convolutional Neural Networks (CNNs). The formal approach introduced in this work exploits the fact that the spatio-spectral information of an input scene can be encoded via CNNs and combined with multi-class classifiers. We apply the proposed method on a novel dataset acquired by a snapshot mosaic spectral camera and demonstrate the potential of the proposed approach for accurate classification.
The characterization and abstraction of large multivariate time series data often pose challenges with respect to effectiveness or efficiency. In the case of human motion capture data, the challenge lies in creating compact solutions that still reflect semantics and kinematics in a meaningful way. We present a visual-interactive approach for the semi-supervised labeling of human motion capture data. Users are enabled to assign labels to the data, which can subsequently be used to represent the multivariate time series as sequences of motion classes. The approach combines multiple views supporting the user in the visual-interactive labeling process. Visual guidance concepts further ease the labeling process by propagating the results of supportive algorithmic models. The abstraction of motion capture data to sequences of event intervals allows overview and detail-on-demand visualizations even for large and heterogeneous data collections. The guided selection of candidate data for the extension and improvement of the labeling closes the feedback loop of the semi-supervised workflow. We demonstrate the effectiveness and the efficiency of the approach in two usage scenarios, taking visual-interactive learning and human motion synthesis as examples.