CENNA: Cost-Effective Neural Network Accelerator (open access)
- Authors
- Park, Sang-Soo; Chung, Ki Seok
- Issue Date
- Jan-2020
- Publisher
- MDPI
- Keywords
- convolutional neural network (CNN); neural network accelerator; neural processing unit (NPU); CNN inference
- Citation
- ELECTRONICS, v.9, no.1, pp.1 - 19
- Indexed
- SCIE, SCOPUS
- Journal Title
- ELECTRONICS
- Volume
- 9
- Number
- 1
- Start Page
- 1
- End Page
- 19
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/2092
- DOI
- 10.3390/electronics9010134
- ISSN
- 2079-9292
- Abstract
- Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these layers, convolution layers, which carry out many multiplications and additions, account for a major portion of both computation and memory access. Therefore, reducing the amount of computation and memory access is key to high-performance CNN processing. In this study, we propose a cost-effective neural network accelerator, named CENNA, whose hardware cost is reduced by employing a cost-centric matrix multiplication that combines Strassen's multiplication with naive multiplication. Furthermore, the convolution method based on the proposed matrix multiplication minimizes data movement by reusing both the feature map and the convolution kernel without any additional control logic. In terms of throughput, power consumption, and silicon area, the efficiency of CENNA is up to 88 times higher than that of conventional designs for CNN inference.
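The abstract's "cost-centric" multiplication builds on Strassen's algorithm, which trades extra additions for fewer multiplications: a 2×2 block product needs only 7 multiplications instead of the naive 8. The sketch below is not the paper's hardware design, only an illustrative software rendering of the standard one-level Strassen scheme on 2×2 matrices; the function name and list-of-lists representation are assumptions for illustration.

```python
def strassen_2x2(A, B):
    """One level of Strassen's algorithm for 2x2 matrices:
    7 multiplications (m1..m7) replace the naive 8.
    A and B are 2x2 matrices given as lists of lists."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    e, f, g, h = B[0][0], B[0][1], B[1][0], B[1][1]

    # The seven Strassen products (each is one multiplication).
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    # Recombine into the 2x2 result C = A @ B using only additions.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]


# Example: matches the naive product [[19, 22], [43, 50]].
C = strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```

In hardware, each saved multiplication removes a multiplier from the datapath at the cost of a few adders, which is the trade-off CENNA's hybrid of Strassen and naive multiplication exploits.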
- Appears in Collections
- College of Engineering (Seoul) > Department of Electronic Engineering (Seoul) > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.