Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Gating Mechanism in Deep Neural Networks for Resource-Efficient Continual Learning

Full metadata record
dc.contributor.author: Jin, H.
dc.contributor.author: Yun, K.
dc.contributor.author: Kim, Eun Woo
dc.date.accessioned: 2023-03-08T09:31:53Z
dc.date.available: 2023-03-08T09:31:53Z
dc.date.issued: 2022
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/61910
dc.description.abstract: Catastrophic forgetting is a well-known tendency of a deep neural network in continual learning to forget previously learned knowledge when optimizing for sequentially incoming tasks. Several methods have been proposed in continual learning research to address this issue. However, these methods cannot preserve the previously learned knowledge when training for a new task. Moreover, these methods are susceptible to negative interference between tasks, which may lead to catastrophic forgetting; the interference becomes increasingly severe when there is a notable gap between the domains of the tasks. This paper proposes a novel method of controlling gates to select a subset of the parameters learned for old tasks, which is then used to efficiently optimize a new task while avoiding negative interference. The proposed approach executes the subset of old parameters that provides positive responses by evaluating the effect of using the old and new parameters together. The execution or skipping of old parameters through the gates is based on several responses across the network. We evaluate the proposed method in different continual learning scenarios involving image classification datasets. By applying the proposed gating mechanism, which selectively involves the set of old parameters that provides positive prior knowledge to newer tasks, the proposed method outperforms other competitive methods and requires fewer parameters than the state-of-the-art methods during inference. Additionally, we further demonstrate the effectiveness of the proposed method through various analyses.
dc.format.extent: 11
dc.language: English
dc.language.iso: ENG
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: Gating Mechanism in Deep Neural Networks for Resource-Efficient Continual Learning
dc.type: Article
dc.identifier.doi: 10.1109/ACCESS.2022.3147237
dc.identifier.bibliographicCitation: IEEE Access, v.10, pp. 18776-18786
dc.description.isOpenAccess: Y
dc.identifier.wosid: 000760720400001
dc.identifier.scopusid: 2-s2.0-85124093534
dc.citation.endPage: 18786
dc.citation.startPage: 18776
dc.citation.title: IEEE Access
dc.citation.volume: 10
dc.type.docType: Article
dc.publisher.location: United States
dc.subject.keywordAuthor: Continual learning
dc.subject.keywordAuthor: Feature extraction
dc.subject.keywordAuthor: gating mechanism
dc.subject.keywordAuthor: Interference
dc.subject.keywordAuthor: Logic gates
dc.subject.keywordAuthor: Neural networks
dc.subject.keywordAuthor: Resource management
dc.subject.keywordAuthor: resource-efficient learning
dc.subject.keywordAuthor: Task analysis
dc.subject.keywordAuthor: task interference
dc.subject.keywordAuthor: Training
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
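
As a reading aid for the gating mechanism described in the abstract above, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' implementation: the class, the linear gate, and the soft sigmoid response are all illustrative assumptions about how a gate might decide whether frozen old-task parameters are executed alongside trainable new-task parameters.

import torch
import torch.nn as nn

class GatedOldNewLayer(nn.Module):
    """Illustrative sketch (not the paper's code): a gate evaluates the
    input and decides whether to execute frozen old-task parameters
    together with the trainable new-task parameters, or to skip them."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.old = nn.Linear(in_dim, out_dim)   # parameters learned on old tasks
        for p in self.old.parameters():
            p.requires_grad = False             # old knowledge stays frozen
        self.new = nn.Linear(in_dim, out_dim)   # parameters for the new task
        self.gate = nn.Linear(in_dim, 1)        # produces an execute/skip response

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(x))          # soft gate in (0, 1), per sample
        # Execute the old parameters only where the gate responds positively,
        # so negative interference from unrelated old knowledge is suppressed.
        return self.new(x) + g * self.old(x)

# Usage: a batch of 4 inputs with 16 features each.
layer = GatedOldNewLayer(16, 8)
out = layer(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 8])

At inference, thresholding the gate (e.g. skipping the old branch when g < 0.5) would avoid executing that branch's parameters entirely, which is one plausible way a gating mechanism can reduce the parameters used per task; whether the paper uses a soft or hard gate is not stated in this record.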
Appears in Collections
College of Software > School of Computer Science and Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Eun Woo
College of Software (School of Computer Science and Engineering)
