Volume: 30 | Article ID: art00002
CrossEncoders: A complex neural network compression framework
DOI: 10.2352/ISSN.2470-1173.2018.2.VIPC-153 | Published Online: January 2018

We propose a novel architecture based on the structure of AutoEncoders. The paper introduces CrossEncoders, an AutoEncoder architecture that uses cross-connections to link layers, both adjacent and non-adjacent, on the encoder and decoder sides of the network. These connections allow the low-dimensional code to incorporate both global and local information. We aim for an image compression algorithm with reduced training time and better generalization; the use of cross-connections makes training of our network significantly faster. The performance of the proposed framework has been evaluated on real-world data from standard benchmark datasets such as MNIST and CIFAR-10. Furthermore, we show that the proposed architecture achieves a high compression ratio and is more robust than previously proposed architectures and PCA. The results were validated using metrics such as PSNR-HVS and PSNR-HVS-M.
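The abstract describes cross-connections that skip between non-adjacent layers so the code mixes local (layer-to-layer) and global (input-level) information. The sketch below is a minimal, hypothetical forward pass of such a network in NumPy; the layer sizes, weight scales, and exact cross-connection topology are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical sketch of a cross-connected autoencoder forward pass.
# Sizes and topology are illustrative; the paper's exact architecture
# and training procedure are not reproduced here.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

d_in, d_h, d_code = 64, 32, 8

# Encoder: two stacked layers plus a cross-connection that skips
# from the raw input directly to the code layer (a non-adjacent link).
W1 = rng.standard_normal((d_h, d_in)) * 0.1
W2 = rng.standard_normal((d_code, d_h)) * 0.1
C_enc = rng.standard_normal((d_code, d_in)) * 0.1  # cross-connection

# Decoder mirrors the encoder, with a cross-connection from the code
# straight to the reconstruction layer.
W3 = rng.standard_normal((d_h, d_code)) * 0.1
W4 = rng.standard_normal((d_in, d_h)) * 0.1
C_dec = rng.standard_normal((d_in, d_code)) * 0.1  # cross-connection

def encode(x):
    h = relu(W1 @ x)
    # The code sums a local path (through h) and a global path (raw input),
    # which is how cross-connections inject both kinds of information.
    return W2 @ h + C_enc @ x

def decode(z):
    h = relu(W3 @ z)
    return W4 @ h + C_dec @ z

x = rng.standard_normal(d_in)
z = encode(x)        # low-dimensional code, shape (8,)
x_hat = decode(z)    # reconstruction, shape (64,)
```

Because the cross-connections give gradients a shorter path from the reconstruction loss back to early layers, networks of this shape typically train faster than a plain stacked autoencoder, which is consistent with the speed-up the abstract claims.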

  Cite this article 

Chirag Agarwal, Mehdi Sharifzadeh, Dan Schonfeld, "CrossEncoders: A complex neural network compression framework," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Visual Information Processing and Communication IX, 2018, pp. 153-1 - 153-5.

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2018