SPSNN: *n*th Order Sequence-Predicting Spiking Neural Network
- Authors
- Kim, Dohun; Kornijcuk, Vladimir; Hwang, Cheol Seong; Jeong, Doo Seok
- Issue Date
- May-2020
- Publisher
- IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Keywords
- Neurons; Prediction algorithms; Biological neural networks; Heuristic algorithms; Training; Neuromorphics; Physiology; Sequence-predicting spiking neural network; event-driven learning algorithm of locality; sequence learning; single-step prediction; associative recall
- Citation
- IEEE ACCESS, v.8, pp.110523 - 110534
- Indexed
- SCIE
SCOPUS
- Journal Title
- IEEE ACCESS
- Volume
- 8
- Start Page
- 110523
- End Page
- 110534
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/1945
- DOI
- 10.1109/ACCESS.2020.3001296
- Abstract
- We introduce a means of harnessing spiking neural networks (SNNs) with rich dynamics as a dynamic hypothesis to learn complex sequences. The proposed SNN is referred to as the nth order sequence-predicting SNN (n-SPSNN), which is capable of single-step prediction and sequence-to-sequence prediction, i.e., associative recall. As a key to these capabilities, we propose a new learning algorithm, named the learning by backpropagating action potential (LbAP) algorithm, which features (i) postsynaptic event-driven learning, (ii) access to topologically and temporally local data only, (iii) a competition-induced weight normalization effect, and (iv) fast learning. Most importantly, the LbAP algorithm offers a unified learning framework over the entire SPSNN based on local data only. The learning capacity of the SPSNN is mainly dictated by the number of hidden neurons h; its prediction accuracy reaches its maximum value (~1) when the hidden neuron number h is larger than twice the training sequence length l, i.e., h >= 2l. Another advantage is its high tolerance to errors in input encoding compared to the state-of-the-art sequence learning networks, namely long short-term memory (LSTM) and gated recurrent unit (GRU). Additionally, its efficiency in learning is approximately 100 times that of LSTM and GRU when measured in terms of the number of synaptic operations until successful training, which corresponds to multiply-accumulate operations for LSTM and GRU. This high efficiency arises from the higher learning rate of the SPSNN, which is attributed to the LbAP algorithm. The code is available online (https://github.com/galactico7/SPSNN).
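- The LbAP algorithm itself is defined in the full paper; as a loose illustration only of two of the features the abstract lists, namely postsynaptic event-driven learning on local data and a competition-induced weight normalization effect, consider the following toy sketch. All names, dynamics, and parameters here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def postsyn_event_update(w, pre_trace, lr=0.1, w_total=1.0):
    """Toy postsynaptic event-driven update: when the postsynaptic neuron
    fires, each weight is potentiated in proportion to its presynaptic
    activity trace (topologically and temporally local data only), and the
    weight vector is then rescaled to a fixed total, so potentiating one
    synapse implicitly depresses the others (a normalization effect)."""
    w = w + lr * pre_trace        # local, event-driven potentiation
    return w_total * w / w.sum()  # competition via renormalization

# Demo: only synapse 0 is active around each postsynaptic spike.
w = np.full(4, 0.25)                    # four synapses, equal weights
trace = np.array([1.0, 0.0, 0.0, 0.0])  # recent presynaptic activity
for _ in range(5):                      # five postsynaptic spike events
    w = postsyn_event_update(w, trace)
print(w)  # synapse 0 grows; the others shrink; the total is conserved
```

  The renormalization step is one simple way to realize the competition the abstract describes: because the total weight is fixed, a synapse can only strengthen at the expense of the others, without any nonlocal error signal.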
- Appears in
Collections - Seoul College of Engineering > Seoul Division of Materials Science and Engineering > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.