A neural network accelerator for mobile application processors
- Authors
- Kim D.Y.[Kim D.Y.]; Kim J.M.[Kim J.M.]; Jang H.[Jang H.]; Jeong J.[Jeong J.]; Lee J.W.[Lee J.W.]
- Issue Date
- 2015
- Keywords
- hardware accelerator; low-power; neural network; scheduling
- Citation
- IEEE Transactions on Consumer Electronics, v.61, no.4, pp.555 - 563
- Journal Title
- IEEE Transactions on Consumer Electronics
- Volume
- 61
- Number
- 4
- Start Page
- 555
- End Page
- 563
- URI
- https://scholarworks.bwise.kr/skku/handle/2021.sw.skku/49310
- DOI
- 10.1109/TCE.2015.7389812
- Abstract
- Today's mobile consumer electronics devices, such as smartphones and tablets, are required to execute a wide variety of applications efficiently. To this end, modern application processors integrate both general-purpose CPU cores and specialized accelerators. Energy efficiency is the primary design goal for those processors, which has recently rekindled interest in neural network accelerators. Neural network accelerators trade the accuracy of computation for performance and energy efficiency and are suitable for error-tolerant media applications such as video and audio processing. However, most existing accelerators exploit only inter-neuron parallelism and leave processing elements underutilized when the number of neurons in a layer is small. Thus, this paper proposes a novel neural network accelerator that can efficiently exploit both inter- and intra-neuron parallelism. For five applications the proposed accelerator achieves average speedups of 126% and 23% over a general-purpose CPU and a state-of-the-art accelerator exploiting inter-neuron parallelism only, respectively. In addition, the proposed accelerator reduces energy consumption by 22% over the state-of-the-art accelerator. © 2015 IEEE.
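The underutilization the abstract describes can be illustrated with a minimal sketch (this is not the paper's design; the PE count, layer sizes, and cycle model below are assumptions chosen for illustration). With inter-neuron parallelism only, each processing element (PE) computes one whole neuron, so a layer with fewer neurons than PEs leaves PEs idle; adding intra-neuron parallelism lets idle PEs share a neuron's multiply-accumulate work:

```python
import math

NUM_PES = 16  # assumed PE count for illustration

def cycles_inter_only(neurons, inputs_per_neuron):
    """Inter-neuron parallelism only: one PE per neuron.
    When neurons < NUM_PES, the spare PEs sit idle for the whole layer."""
    waves = math.ceil(neurons / NUM_PES)  # batches of neurons mapped to PEs
    return waves * inputs_per_neuron      # one MAC per input per neuron

def cycles_inter_intra(neurons, inputs_per_neuron):
    """Inter- and intra-neuron parallelism: all MAC operations in the layer
    are spread across every PE, so none stay idle."""
    total_macs = neurons * inputs_per_neuron
    return math.ceil(total_macs / NUM_PES)

# Small layer (4 neurons, 64 inputs each): inter-only keeps 12 of 16 PEs idle.
print(cycles_inter_only(4, 64))   # 64 cycles
print(cycles_inter_intra(4, 64))  # 16 cycles
```

Under this toy cycle model, the small layer finishes 4x faster when the idle PEs are put to work on intra-neuron parallelism, which is the utilization gap the proposed accelerator targets.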
- Appears in Collections
- Information and Communication Engineering > Department of Semiconductor Systems Engineering > 1. Journal Articles