Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

Generalization at Retrieval Using Associative Networks with Transient Weight Changes

Authors
Shabahang, Kevin D.; Yim, Hyungwook; Dennis, Simon J.
Issue Date
Mar-2022
Publisher
Springer
Keywords
Auto-associative; Content addressable memory; Generalization; Pattern-completion; Recurrent neural network; Short-term-plasticity
Citation
Computational Brain and Behavior, v.5, no.1, pp.124 - 155
Indexed
SCOPUS
Journal Title
Computational Brain and Behavior
Volume
5
Number
1
Start Page
124
End Page
155
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/139279
DOI
10.1007/s42113-022-00127-4
ISSN
2522-087X
Abstract
Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Over all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.
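The retrieval mechanism the abstract describes can be illustrated with a toy sketch. The code below is not the paper's actual model or equations — all names (`store`, `retrieve`, `gamma`) and the specific update rule are invented for illustration. It stores patterns in a Hebbian auto-associative net and, at retrieval, adds a temporary outer product of the probe to the weights, biasing the dominant eigenvector (and hence the fixed point of recurrent feedback) toward the external input:

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian auto-associative weights: W = sum_i p_i p_i^T."""
    dim = patterns.shape[1]
    W = np.zeros((dim, dim))
    for p in patterns:
        W += np.outer(p, p)
    return W

def retrieve(W, probe, gamma=0.5, steps=20):
    """Recurrent retrieval with a transient weight change.

    W_t = W + gamma * probe probe^T exists only for this retrieval;
    the long-term weights W are untouched, so the bias toward the
    external input is temporary.
    """
    Wt = W + gamma * np.outer(probe, probe)
    x = probe.copy()
    for _ in range(steps):
        x = Wt @ x
        x /= np.linalg.norm(x)  # keep activity bounded without hard saturation
    return x

# Two random unit-norm stored patterns
patterns = rng.standard_normal((2, 50))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)
W = store(patterns)

# A noisy probe aligned with the first stored pattern
noise = rng.standard_normal(50)
noise /= np.linalg.norm(noise)
probe = patterns[0] + 0.3 * noise
probe /= np.linalg.norm(probe)

out = retrieve(W, probe)
print(np.dot(out, patterns[0]))  # close to 1: retrieval settles near pattern 0
```

Without the transient term (`gamma = 0`), the iteration is plain power iteration on `W` and converges to the same dominant eigenvector regardless of the probe — the Linear-Associative-Net failure mode the abstract notes. The temporary probe-aligned weight change raises the eigenvalue of the probe direction, so the recurrent dynamics settle near the structurally aligned stored pattern instead.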
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Yim, Hyung wook
COLLEGE OF ENGINEERING (Seoul, Psychology and Brain Science Major)
