Volume: 29 | Article ID: art00014
Prune the Convolutional Neural Networks with Sparse Shrink
DOI: 10.2352/ISSN.2470-1173.2017.6.MOBMU-306 | Published Online: January 2017

Nowadays, it remains difficult to adapt Convolutional Neural Network (CNN) based models for deployment on embedded devices. The heavy computation and large memory footprint of CNN models are the main burdens in real applications. In this paper, we propose a "Sparse Shrink" algorithm to prune an existing CNN model. By analyzing the importance of each channel via sparse reconstruction, the algorithm is able to prune redundant feature maps accordingly. The resulting pruned model thus directly saves computational resources. We have evaluated our algorithm on CIFAR-100. As shown in our experiments, we can reduce the number of parameters by 56.77% and the number of multiplications by 73.84% in total, with only a minor decrease in accuracy. These results demonstrate the effectiveness of our "Sparse Shrink" algorithm.
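The abstract's core idea, scoring each channel by how much it contributes to a sparse reconstruction of the layer's output and pruning the rest, can be sketched roughly as follows. This is an illustrative simplification, not the authors' exact formulation: it reconstructs the summed activation from an L1-regularized combination of channels via ISTA (iterative soft-thresholding), and the function names, the choice of reconstruction target, and all parameter values are assumptions.

```python
import numpy as np

def sparse_channel_scores(acts, lam=0.1, n_iter=500):
    """Score channels by sparse reconstruction (illustrative sketch).

    acts: array of shape (C, N) -- C channel activations, each flattened
    over the batch and spatial dimensions.
    Solves min_w 0.5*||A w - t||^2 + lam*||w||_1 with ISTA, where the
    target t is the summed activation (an assumed stand-in for the
    layer output). |w_c| serves as the importance score of channel c.
    """
    target = acts.sum(axis=0)            # assumed reconstruction target
    A = acts.T                           # (N, C) design matrix
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant
    w = np.zeros(acts.shape[0])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - target)              # gradient step
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # shrink
    return np.abs(w)

def prune_channels(acts, keep_ratio=0.5, lam=0.1):
    """Return the sorted indices of the channels to keep."""
    scores = sparse_channel_scores(acts, lam=lam)
    k = max(1, int(round(keep_ratio * acts.shape[0])))
    keep = np.argsort(scores)[::-1][:k]            # top-k by importance
    return np.sort(keep)

# Toy example: two strong channels, two near-zero ones.
t = np.linspace(0.0, 2.0 * np.pi, 100)
rng = np.random.default_rng(0)
acts = np.stack([
    np.sin(t),                     # informative channel 0
    np.cos(t),                     # informative channel 1
    1e-3 * rng.standard_normal(100),  # redundant channel 2
    1e-3 * rng.standard_normal(100),  # redundant channel 3
])
kept = prune_channels(acts, keep_ratio=0.5)
```

In this toy setting the sparse weights concentrate on channels 0 and 1, so `kept` selects exactly the informative pair; in the paper, the pruned convolution layer is then rebuilt with only the retained channels, which is where the parameter and multiplication savings come from.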

  Cite this article 

Xin Li, Changsong Liu, "Prune the Convolutional Neural Networks with Sparse Shrink," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications, 2017, pp. 97-101.

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2017
Electronic Imaging
Society for Imaging Science and Technology