LaCERA: Layer-centric event-routing architecture
- Authors
- Ye, ChangMin; Kornijcuk, Vladimir; Yoo, DongHyung; Kim, Jeeson; Jeong, Doo Seok
- Issue Date
- Feb-2023
- Publisher
- Elsevier B.V.
- Keywords
- Convolutional spiking neural network; Digital neuromorphic processor; Layer-centric event-routing architecture; Memory-efficient event-routing; Weight-reuse
- Citation
- Neurocomputing, v.520, pp. 46-59
- Indexed
- SCOPUS
- Journal Title
- Neurocomputing
- Volume
- 520
- Start Page
- 46
- End Page
- 59
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/172728
- DOI
- 10.1016/j.neucom.2022.11.046
- ISSN
- 0925-2312
- Abstract
- Neuromorphic processors are hardware dedicated to spiking neural networks (SNNs), accelerating SNN operations with low power consumption. Earlier digital neuromorphic processors define SNN topology in a neuron-centric manner, fully supporting topology reconfiguration. However, this high reconfigurability comes at the cost of large memory usage, and state-of-the-art SNN topologies such as convolutional SNNs (Conv-SNNs) barely need it. Further, neuron-centric routing methods hardly allow weight reuse for Conv-SNNs. To address these concerns, we propose the layer-centric event-routing architecture (LaCERA), which, unlike neuron-centric routing methods, uses layers (or sub-layers) as the granularity of topology. LaCERA supports high reconfigurability of Conv-SNN topology and high memory efficiency, owing to lightweight lookup tables for event routing and a high weight-reuse rate. To evaluate LaCERA, we implemented a 32-core neuromorphic processor, each core employing LaCERA, on a field-programmable gate array. The processor-level evaluation highlights (i) a near-ideal weight-reuse rate for Conv-SNNs, (ii) high efficiency in event-routing memory usage, ca. 100× that of Loihi, and (iii) high flexibility in partitioning a layer into sub-layers over multiple cores. Further, our neuromorphic processor achieved approximately a 10× improvement in inference speed over graphics processing units (TITAN RTX and RTX A6000).
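To make the routing idea concrete, here is a minimal Python sketch of layer-centric event routing as the abstract describes it: one lookup-table entry per layer-to-layer connection holds a single shared convolution kernel, so every spike from the source layer reuses the same weights. All names (`LayerRoute`, `route_event`) are illustrative assumptions, not the LaCERA implementation, and the sketch assumes stride 1 and no padding.

```python
import numpy as np

# Illustrative sketch of layer-centric event routing (hypothetical names,
# not the LaCERA implementation). One lookup-table entry per
# layer-to-layer connection stores a single shared convolution kernel,
# so every presynaptic neuron in the source layer reuses the same weights.

class LayerRoute:
    """One routing-LUT entry: source layer -> destination layer."""
    def __init__(self, dst_shape, kernel):
        self.dst_shape = dst_shape  # destination layer shape (C_out, H, W)
        self.kernel = kernel        # shared weights (C_out, C_in, kH, kW)

def route_event(route, event):
    """Expand one spike event (c_in, y, x) into weighted synaptic updates.

    Weight memory per layer pair is O(kernel size), independent of the
    neuron count, unlike per-neuron fan-out tables. Assumes stride 1,
    no padding.
    """
    c_in, y, x = event
    n_out, _, k_h, k_w = route.kernel.shape
    updates = []
    for c_out in range(n_out):
        for dy in range(k_h):
            for dx in range(k_w):
                ty, tx = y - dy, x - dx  # output position this tap feeds
                if 0 <= ty < route.dst_shape[1] and 0 <= tx < route.dst_shape[2]:
                    updates.append(((c_out, ty, tx),
                                    route.kernel[c_out, c_in, dy, dx]))
    return updates

# Usage: route one spike at (channel 0, row 5, col 5) through a 3x3 kernel.
rng = np.random.default_rng(0)
route = LayerRoute(dst_shape=(4, 26, 26),
                   kernel=rng.standard_normal((4, 1, 3, 3)).astype(np.float32))
print(route_event(route, (0, 5, 5))[:3])
```

Compared with a neuron-centric scheme, which stores a fan-out list and per-synapse weights for every neuron, routing memory here scales with the number of connected layer pairs rather than the number of neurons, which is the kind of saving the abstract quantifies against Loihi.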
- Appears in Collections
- Seoul College of Engineering > Seoul Division of Materials Science and Engineering > 1. Journal Articles