Constructing Word Meaning without Latent Representations using Spreading Activation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shabahang, Kevin D. | - |
dc.contributor.author | Yim, Hyung wook | - |
dc.contributor.author | Dennis, Simon J. | - |
dc.date.accessioned | 2023-02-21T06:04:01Z | - |
dc.date.available | 2023-02-21T06:04:01Z | - |
dc.date.created | 2023-02-08 | - |
dc.date.issued | 2022-07 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182399 | - |
dc.description.abstract | Models of word meaning, like the Topics model (Griffiths et al., 2007) and word2vec (Mikolov et al., 2013), condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy etc.). However, their reliance on latent representations leaves them vulnerable to interference and makes them slow learners. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. We implement our spreading activation account of word meaning in an associative net, a one-layer highly recurrent network of associations, called a Dynamic-Eigen-Net, that we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains such as natural language text. After fixing the corpus across models, we show that spreading activation using a Dynamic-Eigen-Net outperforms the Topics model and word2vec in several cases when predicting human free associations and word similarity ratings. We argue in favour of the Dynamic-Eigen-Net as a fast learner that is not subject to catastrophic interference, and present it as an example of delegating the induction of latent relationships to process assumptions instead of assumptions about representation. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | The Cognitive Science Society | - |
dc.title | Constructing Word Meaning without Latent Representations using Spreading Activation | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Yim, Hyung wook | - |
dc.identifier.scopusid | 2-s2.0-85146416209 | - |
dc.identifier.bibliographicCitation | Proceedings of the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022, pp.3384 - 3390 | - |
dc.relation.isPartOf | Proceedings of the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 | - |
dc.citation.title | Proceedings of the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 | - |
dc.citation.startPage | 3384 | - |
dc.citation.endPage | 3390 | - |
dc.type.rims | ART | - |
dc.type.docType | Conference Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordPlus | Chemical activation | - |
dc.subject.keywordPlus | Natural language processing systems | - |
dc.subject.keywordPlus | Network layers | - |
dc.subject.keywordPlus | Recurrent neural networks | - |
dc.subject.keywordPlus | Semantics | - |
dc.subject.keywordPlus | Associative | - |
dc.subject.keywordPlus | Co-occurrence statistics | - |
dc.subject.keywordPlus | Creative Commons | - |
dc.subject.keywordPlus | Hyponymy | - |
dc.subject.keywordPlus | Process-models | - |
dc.subject.keywordPlus | Retrieval | - |
dc.subject.keywordPlus | Spreading activations | - |
dc.subject.keywordPlus | Topic Modeling | - |
dc.subject.keywordPlus | Word | - |
dc.subject.keywordPlus | Word meaning | - |
dc.subject.keywordAuthor | Associative | - |
dc.subject.keywordAuthor | Dynamic | - |
dc.subject.keywordAuthor | Process model | - |
dc.subject.keywordAuthor | Retrieval | - |
dc.subject.keywordAuthor | Semantic | - |
dc.subject.keywordAuthor | Words | - |
dc.identifier.url | https://escholarship.org/uc/item/0s6590mt | - |
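The abstract describes constructing word meaning at retrieval time by spreading activation through a one-layer, highly recurrent network of word–word associations rather than inducing latent representations offline. A minimal toy sketch of that general idea is below. This is NOT the authors' Dynamic-Eigen-Net: the vocabulary, the Hebbian co-occurrence counts, the iteration count, and the cue-clamping step are all simplifying assumptions made purely for illustration.

```python
import numpy as np

# Toy illustration of spreading activation in a one-layer associative net.
# All details (vocabulary, contexts, clamping rule) are assumptions for this
# sketch and do not reproduce the paper's Dynamic-Eigen-Net.

vocab = ["dog", "cat", "bone", "fish"]
idx = {w: i for i, w in enumerate(vocab)}
n = len(vocab)

# Accumulate symmetric co-occurrence associations from toy "contexts".
contexts = [["dog", "bone"], ["cat", "fish"], ["dog", "cat"]]
W = np.zeros((n, n))
for ctx in contexts:
    for a in ctx:
        for b in ctx:
            if a != b:
                W[idx[a], idx[b]] += 1.0  # simple Hebbian count

def spread(cue, steps=20):
    """Spread activation from a cue word: iterate W @ x with renormalization,
    re-clamping the cue each step so retrieval stays cue-dependent
    (the clamping scheme is an assumption of this sketch)."""
    x = np.zeros(n)
    x[idx[cue]] = 1.0
    for _ in range(steps):
        x = W @ x                    # one step of spreading activation
        x /= np.linalg.norm(x)       # keep activations bounded
        x[idx[cue]] = 1.0            # re-clamp the cue
    return {w: float(x[i]) for w, i in idx.items()}

# The direct associate "bone" ends up more active for the cue "dog"
# than the only indirectly connected "fish".
print(spread("dog"))
```

Without the clamping step this iteration is plain power iteration, which converges to the matrix's dominant eigenvector regardless of the cue; keeping the cue active is what makes the settled pattern cue-specific, which is the behaviour a retrieval model needs.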