LaCERA: Layer-centric event-routing architecture
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ye, ChangMin | - |
dc.contributor.author | Kornijcuk, Vladimir | - |
dc.contributor.author | Yoo, DongHyung | - |
dc.contributor.author | Kim, Jeeson | - |
dc.contributor.author | Jeong, Doo Seok | - |
dc.date.accessioned | 2022-12-20T04:52:31Z | - |
dc.date.available | 2022-12-20T04:52:31Z | - |
dc.date.created | 2022-12-07 | - |
dc.date.issued | 2023-02 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/172728 | - |
dc.description.abstract | Neuromorphic processors are hardware dedicated to spiking neural networks (SNNs), accelerating SNN operations with low power consumption. Early digital neuromorphic processors define SNN topology in a neuron-centric manner in full support of topology reconfiguration. However, this high reconfigurability comes at the cost of large memory usage, and state-of-the-art SNN topologies, such as convolutional SNNs (Conv-SNNs), rarely need such high reconfigurability. Further, neuron-centric routing methods hardly allow weight-reuse for Conv-SNNs. To address these concerns, we propose the layer-centric event-routing architecture (LaCERA), which, unlike neuron-centric routing methods, uses layers (or sub-layers) as the granularity of topology. LaCERA supports high reconfigurability of Conv-SNN topology together with high memory efficiency, owing to its lightweight event-routing lookup tables and high weight-reuse rate. To evaluate LaCERA, we implemented a neuromorphic processor with 32 cores, each of which employs LaCERA, in a field-programmable gate array. The evaluation on the processor level highlights (i) an almost ideal weight-reuse rate for Conv-SNNs, (ii) high efficiency in event-routing memory usage, ca. 100× that of Loihi, and (iii) high flexibility in partitioning a layer into sub-layers over multiple cores. Further, our neuromorphic processor achieved approximately a 10× improvement in inference speed compared with graphics processing units (TITAN RTX and RTX A6000). | - |
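The abstract contrasts neuron-centric routing (one table entry per neuron) with LaCERA's layer-centric routing (one entry per layer or sub-layer, with a shared convolution kernel reused for every event). The sketch below is an illustrative Python model of that idea, not the paper's implementation: the `LayerRoute` record and `route_event` function are hypothetical names, and valid (zero-padding-free) convolution with a square kernel is assumed.

```python
from dataclasses import dataclass

@dataclass
class LayerRoute:
    """Hypothetical per-layer routing entry: the lookup table holds one
    row per layer, so its size scales with the layer count rather than
    the neuron count (the key memory saving over neuron-centric tables)."""
    dst_layer: int  # destination layer index
    kernel: int     # square kernel size
    stride: int
    out_h: int      # destination layer height
    out_w: int      # destination layer width

def route_event(route: LayerRoute, x: int, y: int):
    """Map a spike at (x, y) in the source layer to the destination
    neurons it drives through a kernel x kernel valid convolution."""
    targets = []
    for oy in range(route.out_h):
        for ox in range(route.out_w):
            ky = y - oy * route.stride
            kx = x - ox * route.stride
            if 0 <= ky < route.kernel and 0 <= kx < route.kernel:
                # (ky, kx) indexes one shared kernel, so the same weight
                # memory is reused for every event (weight-reuse)
                targets.append((route.dst_layer, oy, ox, ky, kx))
    return targets
```

For example, with a 3×3 kernel and stride 1, an event at (2, 2) fans out to the 3×3 block of destination neurons whose receptive fields cover it; the destination coordinates are computed on the fly from the single table row instead of being stored per source neuron.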
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Elsevier B.V. | - |
dc.title | LaCERA: Layer-centric event-routing architecture | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Jeong, Doo Seok | - |
dc.identifier.doi | 10.1016/j.neucom.2022.11.046 | - |
dc.identifier.scopusid | 2-s2.0-85142690813 | - |
dc.identifier.wosid | 000904786300005 | - |
dc.identifier.bibliographicCitation | Neurocomputing, v.520, pp.46 - 59 | - |
dc.relation.isPartOf | Neurocomputing | - |
dc.citation.title | Neurocomputing | - |
dc.citation.volume | 520 | - |
dc.citation.startPage | 46 | - |
dc.citation.endPage | 59 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | Computer graphics | - |
dc.subject.keywordPlus | Convolution | - |
dc.subject.keywordPlus | Convolutional neural networks | - |
dc.subject.keywordPlus | Efficiency | - |
dc.subject.keywordPlus | Field programmable gate arrays (FPGA) | - |
dc.subject.keywordPlus | Graphics processing unit | - |
dc.subject.keywordPlus | Logic gates | - |
dc.subject.keywordPlus | Memory architecture | - |
dc.subject.keywordPlus | Multilayer neural networks | - |
dc.subject.keywordPlus | Network architecture | - |
dc.subject.keywordPlus | Network routing | - |
dc.subject.keywordPlus | Neurons | - |
dc.subject.keywordPlus | Program processors | - |
dc.subject.keywordPlus | Table lookup | - |
dc.subject.keywordPlus | Convolutional spiking neural network | - |
dc.subject.keywordPlus | Digital neuromorphic processor | - |
dc.subject.keywordPlus | Event routing | - |
dc.subject.keywordPlus | Layer-centric event-routing architecture | - |
dc.subject.keywordPlus | Memory efficient | - |
dc.subject.keywordPlus | Memory-efficient event-routing | - |
dc.subject.keywordPlus | Neural-networks | - |
dc.subject.keywordPlus | Neuromorphic | - |
dc.subject.keywordPlus | Reuse | - |
dc.subject.keywordPlus | Routing architecture | - |
dc.subject.keywordPlus | Weight-reuse | - |
dc.subject.keywordPlus | article | - |
dc.subject.keywordPlus | memory | - |
dc.subject.keywordPlus | nerve cell | - |
dc.subject.keywordPlus | spiking neural network | - |
dc.subject.keywordPlus | velocity | - |
dc.subject.keywordPlus | Topology | - |
dc.subject.keywordAuthor | Convolutional spiking neural network | - |
dc.subject.keywordAuthor | Digital neuromorphic processor | - |
dc.subject.keywordAuthor | Layer-centric event-routing architecture | - |
dc.subject.keywordAuthor | Memory-efficient event-routing | - |
dc.subject.keywordAuthor | Weight-reuse | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0925231222014345?via%3Dihub | - |