Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Constructing Word Meaning without Latent Representations using Spreading Activation

Full metadata record
dc.contributor.author: Shabahang, Kevin D.
dc.contributor.author: Yim, Hyung wook
dc.contributor.author: Dennis, Simon J.
dc.date.accessioned: 2023-02-21T06:04:01Z
dc.date.available: 2023-02-21T06:04:01Z
dc.date.created: 2023-02-08
dc.date.issued: 2022-07
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182399
dc.description.abstract: Models of word meaning, like the Topics model (Griffiths et al., 2007) and word2vec (Mikolov et al., 2013), condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their reliance on latent representations leaves them vulnerable to interference and makes them slow learners. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. We implement our spreading activation account of word meaning in an associative net, a one-layer highly recurrent network of associations, called a Dynamic-Eigen-Net, that we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains such as natural language text. After fixing the corpus across models, we show that spreading activation using a Dynamic-Eigen-Net outperforms the Topics model and word2vec in several cases when predicting human free associations and word similarity ratings. We argue in favour of the Dynamic-Eigen-Net as a fast learner that is not subject to catastrophic interference, and present it as an example of delegating the induction of latent relationships to process assumptions instead of assumptions about representation.
dc.language: English
dc.language.iso: en
dc.publisher: The Cognitive Science Society
dc.title: Constructing Word Meaning without Latent Representations using Spreading Activation
dc.type: Article
dc.contributor.affiliatedAuthor: Yim, Hyung wook
dc.identifier.scopusid: 2-s2.0-85146416209
dc.identifier.bibliographicCitation: Proceedings of the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022, pp. 3384-3390
dc.relation.isPartOf: Proceedings of the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022
dc.citation.title: Proceedings of the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022
dc.citation.startPage: 3384
dc.citation.endPage: 3390
dc.type.rims: ART
dc.type.docType: Conference Paper
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.subject.keywordPlus: Chemical activation
dc.subject.keywordPlus: Natural language processing systems
dc.subject.keywordPlus: Network layers
dc.subject.keywordPlus: Recurrent neural networks
dc.subject.keywordPlus: Semantics
dc.subject.keywordPlus: Associative
dc.subject.keywordPlus: Co-occurrence statistics
dc.subject.keywordPlus: Creative Commons
dc.subject.keywordPlus: Hyponymy
dc.subject.keywordPlus: Process-models
dc.subject.keywordPlus: Retrieval
dc.subject.keywordPlus: Spreading activations
dc.subject.keywordPlus: Topic Modeling
dc.subject.keywordPlus: Word
dc.subject.keywordPlus: Word meaning
dc.subject.keywordAuthor: Associative
dc.subject.keywordAuthor: Dynamic
dc.subject.keywordAuthor: Process model
dc.subject.keywordAuthor: Retrieval
dc.subject.keywordAuthor: Semantic
dc.subject.keywordAuthor: Words
dc.identifier.url: https://escholarship.org/uc/item/0s6590mt
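
Note: the abstract above describes constructing a word's meaning online at retrieval by spreading activation through a one-layer, highly recurrent associative net. The sketch below illustrates that general idea only; the toy corpus, the Hebbian co-occurrence weights, and the cue-clamping iteration are assumptions made for illustration, not the authors' Dynamic-Eigen-Net implementation.

```python
import numpy as np

# Toy corpus: each "context" is a bag of co-occurring words.
contexts = [
    ["dog", "barks", "loud"],
    ["dog", "chases", "cat"],
    ["cat", "meows", "loud"],
    ["cat", "chases", "mouse"],
]

vocab = sorted({w for ctx in contexts for w in ctx})
idx = {w: i for i, w in enumerate(vocab)}
n = len(vocab)

# One-layer associative net: raw Hebbian co-occurrence weights,
# stored in a single additive pass with no latent dimensions induced.
W = np.zeros((n, n))
for ctx in contexts:
    for a in ctx:
        for b in ctx:
            if a != b:
                W[idx[a], idx[b]] += 1.0

def spread(cue, steps=20):
    """Construct the cue word's meaning at retrieval time by
    spreading activation through the net, re-clamping the cue
    on every iteration (a hypothetical choice made here to keep
    the cue dominant; it is not the paper's mechanism)."""
    x = np.zeros(n)
    x[idx[cue]] = 1.0
    for _ in range(steps):
        x = W @ x                  # one step of spreading
        x /= np.linalg.norm(x)     # keep total activation bounded
        x[idx[cue]] = 1.0          # clamp the cue unit
    return x

# The settled activation pattern serves as the constructed meaning:
for word, act in sorted(zip(vocab, spread("dog")), key=lambda p: -p[1]):
    print(f"{word:8s} {act:.3f}")
```

Because storage here is a single additive pass over the corpus, new co-occurrences can be added without retraining, which is the sense in which the abstract contrasts fast-learning process models with slow-learning latent-representation models.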
Appears in Collections: ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Yim, Hyung wook
COLLEGE OF ENGINEERING (Seoul, Psychology and Brain Science Major)