Detailed Information


Generalization at Retrieval Using Associative Networks with Transient Weight Changes

Full metadata record

dc.contributor.author: Shabahang, Kevin D.
dc.contributor.author: Yim, Hyungwook
dc.contributor.author: Dennis, Simon J.
dc.date.accessioned: 2022-07-06T08:36:45Z
dc.date.available: 2022-07-06T08:36:45Z
dc.date.created: 2022-04-06
dc.date.issued: 2022-03
dc.identifier.issn: 2522-087X
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/139279
dc.description.abstract: Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned, patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Across all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.
dc.language: English
dc.language.iso: en
dc.publisher: Springer
dc.title: Generalization at Retrieval Using Associative Networks with Transient Weight Changes
dc.type: Article
dc.contributor.affiliatedAuthor: Yim, Hyungwook
dc.identifier.doi: 10.1007/s42113-022-00127-4
dc.identifier.scopusid: 2-s2.0-85125639593
dc.identifier.bibliographicCitation: Computational Brain and Behavior, v.5, no.1, pp.124-155
dc.relation.isPartOf: Computational Brain and Behavior
dc.citation.title: Computational Brain and Behavior
dc.citation.volume: 5
dc.citation.number: 1
dc.citation.startPage: 124
dc.citation.endPage: 155
dc.type.rims: ART
dc.type.docType: Article in Press
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: Auto-associative
dc.subject.keywordAuthor: Content addressable memory
dc.subject.keywordAuthor: Generalization
dc.subject.keywordAuthor: Pattern-completion
dc.subject.keywordAuthor: Recurrent neural network
dc.subject.keywordAuthor: Short-term-plasticity
dc.identifier.url: https://link.springer.com/article/10.1007/s42113-022-00127-4
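The abstract describes retrieval in which a recurrent associative net is temporarily biased toward the current probe by a transient weight change. The following is a minimal sketch of that general idea, not the authors' actual model: a Hebbian linear auto-associator whose weights receive a temporary outer-product boost from the probe before recurrent feedback is iterated. All function names, parameter values, and the `transient` scaling factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian (outer-product) weights for a set of stored patterns."""
    dim = patterns.shape[1]
    W = np.zeros((dim, dim))
    for p in patterns:
        W += np.outer(p, p)
    return W / len(patterns)

def retrieve(W, probe, transient=0.5, steps=10):
    """Iterate recurrent feedback after a transient weight change.

    The outer-product term biases the eigenspectrum toward the probe;
    it exists only for this retrieval and is discarded afterward.
    """
    W_t = W + transient * np.outer(probe, probe)  # temporary change
    x = probe.copy()
    for _ in range(steps):
        x = W_t @ x
        x /= np.linalg.norm(x)  # keep the state bounded
    return x

# Two stored patterns, then a noisy probe correlated with the first.
stored = rng.standard_normal((2, 8))
stored /= np.linalg.norm(stored, axis=1, keepdims=True)
W = train(stored)

probe = stored[0] + 0.2 * rng.standard_normal(8)
probe /= np.linalg.norm(probe)

out = retrieve(W, probe)
print(round(float(np.dot(out, stored[0])), 2))  # alignment with memory
```

Without the transient term, iterating a purely linear associator converges to its dominant eigenvector regardless of the probe, which is the failure mode the abstract attributes to Linear-Associative-Nets; the temporary bias keeps the response aligned with the input while still drawing on stored structure.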
Appears in Collections: ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Yim, Hyungwook
COLLEGE OF ENGINEERING (Seoul, Major in Psychology and Brain Science)
