
Non-Zero Grid for Accurate 2-Bit Additive Power-of-Two CNN Quantization (Open Access)

Authors
Kim, Young Min; Han, Kyunghyun; Lee, Wai-Kong; Chang, Hyung Jin; Hwang, Seong Oun
Issue Date
Mar-2023
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Quantization (signal); Deep learning; Convolutional neural networks; Gaussian distribution; Mathematical models; Internet of Things; Computational modeling
Citation
IEEE ACCESS, v.11, pp.32051 - 32060
Journal Title
IEEE ACCESS
Volume
11
Start Page
32051
End Page
32060
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/87771
DOI
10.1109/ACCESS.2023.3259959
ISSN
2169-3536
Abstract
Quantization is an effective technique to reduce the memory and computational complexity of CNNs. Recent advances utilize additive powers-of-two to perform non-uniform quantization, which resembles a normal distribution and shows better performance than uniform quantization. With powers-of-two quantization, the computational complexity is also largely reduced because the slow multiplication operations are replaced with lightweight shift operations. However, there are serious problems in the previously proposed grid formulation for 2-bit quantization. In particular, these powers-of-two schemes produce zero values, generating significant training error and causing low accuracy. In addition, due to improper grid formulation, they also fall back to uniform quantization when the quantization level reaches 2 bits. For these reasons, on large CNNs like ResNet-110, these powers-of-two schemes may not even train properly. To resolve these issues, we propose a new non-zero grid formulation that enables 2-bit non-uniform quantization and allows the CNN to be trained successfully in every attempt, even for a large network. The proposed technique quantizes weights as power-of-two values and projects them close to the mean area through a simple constant product on the exponential part. This allows our quantization scheme to closely resemble non-uniform quantization at 2 bits, enabling successful training at 2-bit quantization, which is not found in previous work. The proposed technique achieves 70.57% accuracy on the CIFAR-100 dataset trained with ResNet-110. This result is 6.24% higher than the additive powers-of-two scheme, which achieves only 64.33% accuracy. Besides achieving higher accuracy, our work also maintains the same memory and computational efficiency as the original additive powers-of-two scheme.
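The abstract's core idea — replacing a 2-bit power-of-two grid that contains an exact zero level with a grid of only non-zero signed powers of two, scaled by a constant on the exponent — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the grid levels, the constant `c`, and the function `quantize_to_grid` are illustrative assumptions.

```python
import numpy as np

def quantize_to_grid(w, grid):
    """Map each weight to the nearest level in a quantization grid."""
    grid = np.asarray(grid)
    idx = np.argmin(np.abs(np.asarray(w)[..., None] - grid), axis=-1)
    return grid[idx]

# Conventional 2-bit power-of-two grid: one level is exactly zero,
# which the paper identifies as a source of training error.
zero_grid = np.array([-0.5, 0.0, 0.5, 1.0])  # illustrative levels

# Hypothetical non-zero grid: every level is a signed power of two,
# with a constant factor c on the exponent so levels are projected
# toward the mean area of the weight distribution (an assumption
# modeling the "constant product on the exponential part").
c = 0.5
nonzero_grid = np.array([-(2.0 ** (-2 * c)), -(2.0 ** (-1 * c)),
                         2.0 ** (-1 * c), 2.0 ** (-2 * c)])

w = np.array([0.05, -0.4, 0.8, -0.9])
print(quantize_to_grid(w, nonzero_grid))  # every weight maps to a non-zero level
```

Because every grid level is a power of two, multiplication by a quantized weight can still be implemented as a bit shift, preserving the efficiency of the original additive powers-of-two scheme.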
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of IT Convergence > Department of Computer Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Hwang, Seong Oun
College of IT Convergence (School of Computing, Computer Engineering Major)
