Volume: 31 | Article ID: art00018
Deep dimension reduction for spatial-spectral road scene classification
DOI: 10.2352/ISSN.2470-1173.2019.15.AVM-049 | Published Online: January 2019

Semantic segmentation is an essential aspect of modern autonomous driving systems, since a precise understanding of the environment is crucial for navigation. We investigate the eligibility of novel snapshot hyperspectral cameras, which capture a whole spectrum in one shot, for road scene classification. Hyperspectral data brings an advantage, as it allows a better analysis of the material properties of objects in the scene. Unfortunately, most classifiers suffer from the Hughes effect when dealing with high-dimensional hyperspectral data. Therefore, we propose a new framework of hyperspectral-based feature extraction and classification. The framework utilizes a deep autoencoder network with additional regularization terms which focus on the modeling of the latent space rather than the reconstruction error to learn a new dimension-reduced representation. This new dimension-reduced spectral feature space allows the use of deep learning architectures already established on RGB datasets.
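The core idea of the abstract, compressing a high-dimensional per-pixel spectrum into a low-dimensional code with an autoencoder whose loss penalizes the latent space in addition to the reconstruction error, can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture: it assumes a linear autoencoder, a simple L2 penalty on the latent codes, synthetic spectra, and arbitrary band/latent dimensions (25 bands, 3 latent dimensions).

```python
import numpy as np

# Sketch (assumed details): compress 25 hyperspectral bands per pixel to a
# 3-D latent code with a linear autoencoder. The loss combines the
# reconstruction error with an L2 penalty on the latent codes, i.e. a
# regularizer on the latent space (the paper's exact terms differ).
rng = np.random.default_rng(0)
n_pixels, n_bands, n_latent = 512, 25, 3

X = rng.normal(size=(n_pixels, n_bands))            # synthetic per-pixel spectra
W_enc = rng.normal(scale=0.1, size=(n_bands, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_bands))
lr, lam = 1e-3, 1e-2                                # step size, latent penalty weight

for _ in range(200):
    Z = X @ W_enc                                   # encode: latent codes
    X_hat = Z @ W_dec                               # decode: reconstruction
    err = X_hat - X
    # gradients of  ||X_hat - X||^2 + lam * ||Z||^2  (constant factors folded into lr)
    g_dec = Z.T @ err
    g_enc = X.T @ (err @ W_dec.T) + lam * (X.T @ Z)
    W_dec -= lr * g_dec / n_pixels
    W_enc -= lr * g_enc / n_pixels

Z = X @ W_enc                                       # dimension-reduced spectral features
print(Z.shape)                                      # (512, 3)
```

The resulting 3-D feature maps have the same channel count as RGB images, which is what lets existing RGB segmentation networks be reused downstream, as the abstract suggests.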

  Cite this article 

Christian Winkens, Florian Sattler, Dietrich Paulus, "Deep dimension reduction for spatial-spectral road scene classification," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines Conference, 2019, pp. 49-1 - 49-9.

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2019
Electronic Imaging
Society for Imaging Science and Technology
7003 Kilworth Lane, Springfield, VA 22151 USA