Clustering-Guided Incremental Learning of Tasks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Y. | - |
dc.contributor.author | Kim, E. | - |
dc.date.accessioned | 2021-06-18T07:14:21Z | - |
dc.date.available | 2021-06-18T07:14:21Z | - |
dc.date.issued | 2021-01 | - |
dc.identifier.issn | 1976-7684 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/44128 | - |
dc.description.abstract | Incremental deep learning aims to learn a sequence of tasks while avoiding forgetting their knowledge. One naïve approach with a deep architecture is to increase its capacity as the number of tasks grows; however, this leads to heavy memory consumption and makes the approach impractical. If we instead keep the capacity fixed, we face another challenging problem, catastrophic forgetting, which causes a notable degradation of performance on previously learned tasks. To overcome these problems, we propose a clustering-guided incremental learning approach that mitigates catastrophic forgetting without increasing the capacity of the architecture. The proposed approach adopts a parameter-splitting strategy that assigns a subset of the architecture's parameters to each task to prevent forgetting, and it uses clustering to discover relationships between tasks by storing a few samples per task. When learning a new task, we exploit the knowledge of the relevant tasks together with the current task to improve performance, maximizing the efficiency achievable within a single fixed architecture. Experimental results on a number of fine-grained datasets show that our method outperforms existing competitors. © 2021 IEEE. | - |
dc.format.extent | 5 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Clustering-Guided Incremental Learning of Tasks | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/ICOIN50884.2021.9334003 | - |
dc.identifier.bibliographicCitation | International Conference on Information Networking, v.2021-January, pp. 417-421 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.wosid | 000657974100073 | - |
dc.identifier.scopusid | 2-s2.0-85100763213 | - |
dc.citation.endPage | 421 | - |
dc.citation.startPage | 417 | - |
dc.citation.title | International Conference on Information Networking | - |
dc.citation.volume | 2021-January | - |
dc.type.docType | Conference Paper | - |
dc.subject.keywordAuthor | catastrophic forgetting | - |
dc.subject.keywordAuthor | clustering | - |
dc.subject.keywordAuthor | deep neural networks | - |
dc.subject.keywordAuthor | Incremental learning | - |
dc.subject.keywordPlus | Catastrophic forgetting | - |
dc.subject.keywordPlus | Clustering approach | - |
dc.subject.keywordPlus | Deep architectures | - |
dc.subject.keywordPlus | Fine grained | - |
dc.subject.keywordPlus | Improve performance | - |
dc.subject.keywordPlus | Incremental learning | - |
dc.subject.keywordPlus | Memory consumption | - |
dc.subject.keywordPlus | Splitting strategies | - |
dc.subject.keywordPlus | Deep learning | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
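The abstract describes two mechanisms: assigning each task a disjoint subset of a fixed parameter budget, and clustering a few stored exemplars per task to identify relevant earlier tasks when a new one arrives. The paper itself provides no code, so the following is a minimal, hypothetical Python sketch of those two ideas; all names (`ParameterSplitter`, `nearest_tasks`) and the nearest-centroid similarity are illustrative assumptions, not the authors' implementation.

```python
import math


class ParameterSplitter:
    """Toy fixed-capacity model whose parameters are split across tasks.

    Each task receives a disjoint block of parameter slots, so learning a
    new task never overwrites parameters owned by earlier tasks (a stand-in
    for the paper's parameter-splitting strategy; assumed interface).
    """

    def __init__(self, total_params, per_task):
        self.per_task = per_task
        self.free = list(range(total_params))  # unassigned parameter slots
        self.owned = {}                        # task id -> assigned slots

    def assign(self, task_id):
        """Reserve a disjoint block of parameter slots for a new task."""
        if len(self.free) < self.per_task:
            raise RuntimeError("fixed capacity exhausted")
        block = self.free[: self.per_task]
        self.free = self.free[self.per_task:]
        self.owned[task_id] = block
        return block


def centroid(samples):
    """Mean vector of a task's few stored exemplars."""
    dim = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]


def nearest_tasks(exemplar_store, new_samples):
    """Rank previously seen tasks by centroid distance to the new task.

    The closest tasks stand in for the 'relevant tasks' whose knowledge is
    reused together with the current task when learning it.
    """
    c_new = centroid(new_samples)
    return sorted(
        exemplar_store,
        key=lambda t: math.dist(centroid(exemplar_store[t]), c_new),
    )
```

In this sketch a real model would train only the slots returned by `assign` for each task, and would transfer from the tasks ranked first by `nearest_tasks`; both the block-allocation policy and the distance metric are simplifications chosen for clarity.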