Detailed Information

Magnitude and Similarity Based Variable Rate Filter Pruning for Efficient Convolution Neural Networks (open access)

Authors
Ghimire, Deepak; Kim, Seong-Heum
Issue Date
Jan-2023
Publisher
MDPI
Keywords
convolution neural network; filter pruning; weight pruning; loss-aware pruning; pruning criterion
Citation
APPLIED SCIENCES-BASEL, v.13, no.1
Journal Title
APPLIED SCIENCES-BASEL
Volume
13
Number
1
URI
http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/43338
DOI
10.3390/app13010316
ISSN
2076-3417
Abstract
The superior performance of recent deep learning models comes at the cost of a significant increase in computational complexity, memory use, and power consumption. Filter pruning is an effective neural network compression technique suitable for deploying models on modern low-power edge devices. In this paper, we propose a loss-aware Magnitude and Similarity based Variable rate Filter Pruning (MSVFP) technique. We studied several filter selection criteria based on filter magnitude and on the similarity among filters within a convolution layer. Based on the assumption that each layer's sensitivity to pruning differs, and unlike conventional fixed-rate pruning methods, our algorithm uses loss-aware filter selection criteria to automatically find a suitable pruning rate for each layer of the network. In addition, the proposed algorithm adopts two different filter selection criteria, removing weak filters based on filter magnitude and redundant filters based on a filter similarity score. Finally, an iterative filter pruning and retraining approach is used to maintain the accuracy of the network while pruning it to a target floating-point operations (FLOPs) reduction rate; a small number of retraining steps during iterative pruning is sufficient to prevent an abrupt drop in accuracy. Experiments with commonly used VGGNet and ResNet models on the CIFAR-10 and ImageNet benchmarks show the superiority of the proposed method over existing methods in the literature. Notably, the VGG-16, ResNet-56, and ResNet-110 models on CIFAR-10 even improve upon their original accuracy with more than a 50% reduction in network FLOPs. Additionally, the ResNet-50 model on ImageNet reduces model FLOPs by more than 42% with a negligible drop in original accuracy.
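
As a concrete illustration of the two filter selection criteria described in the abstract, the following is a minimal PyTorch sketch (not the authors' released implementation) that scores the filters of a single convolution layer: a magnitude score for identifying weak filters and a pairwise similarity score for identifying redundant filters. The use of the L2 norm and cosine similarity here is an assumption for illustration; the exact measures, the loss-aware per-layer pruning rate selection, and the iterative retraining schedule are defined in the full paper.

```python
# Minimal sketch of magnitude- and similarity-based filter scoring
# for one Conv2d layer (illustrative; measures are assumed).
import torch
import torch.nn as nn


def magnitude_scores(conv: nn.Conv2d) -> torch.Tensor:
    """L2 norm of each output filter; smaller values indicate weaker filters."""
    w = conv.weight.detach().flatten(1)        # (out_channels, in_channels * k * k)
    return w.norm(p=2, dim=1)


def similarity_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Maximum cosine similarity of each filter to any other filter in the
    same layer; larger values indicate more redundant filters."""
    w = conv.weight.detach().flatten(1)
    w = nn.functional.normalize(w, dim=1)
    sim = w @ w.t()                            # pairwise cosine similarities
    sim.fill_diagonal_(-1.0)                   # ignore self-similarity
    return sim.max(dim=1).values


if __name__ == "__main__":
    conv = nn.Conv2d(16, 32, kernel_size=3)
    mag = magnitude_scores(conv)               # prune filters with the smallest scores
    sim = similarity_scores(conv)              # prune filters with the largest scores
    print(mag.shape, sim.shape)                # torch.Size([32]) torch.Size([32])
```

In the full MSVFP procedure, how many filters each layer loses is not fixed in advance; it is chosen layer by layer from the loss-aware criteria, with a few retraining steps between pruning iterations, which this sketch does not cover.
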
Files in This Item
Appears in
Collections
ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Seong-Heum
College of Information Technology (Department of Smart Systems Software)
