Two-stage architectural fine-tuning for neural architecture search in efficient transfer learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Soohyun | - |
dc.contributor.author | Son, Seok Bin | - |
dc.contributor.author | Lee, Youn Kyu | - |
dc.contributor.author | Jung, Soyi | - |
dc.contributor.author | Kim, Joongheon | - |
dc.date.accessioned | 2024-01-29T05:00:28Z | - |
dc.date.available | 2024-01-29T05:00:28Z | - |
dc.date.issued | 2023-12 | - |
dc.identifier.issn | 0013-5194 | - |
dc.identifier.issn | 1350-911X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/32603 | - |
dc.description.abstract | In many deep neural network (DNN) applications, the difficulty of gathering high-quality data in industrial settings hinders the practical use of DNNs. Thus, the concept of transfer learning (TL) has emerged, which leverages the pretrained knowledge of a DNN built on large-scale datasets. Toward this TL objective, this paper proposes two-stage architectural fine-tuning, inspired by neural architecture search (NAS), to reduce the cost and time of exploring the most efficient DNN model. The first stage is mutation, which reduces search costs by exploiting a priori architectural information. The second stage is early-stopping, which further reduces NAS costs by terminating the search process partway through computation. Data-intensive experimental results verify that the proposed method outperforms benchmarks. (A sketch of how the two stages could compose follows the table below.) | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | WILEY | - |
dc.title | Two-stage architectural fine-tuning for neural architecture search in efficient transfer learning | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1049/ell2.13066 | - |
dc.identifier.scopusid | 2-s2.0-85180133284 | - |
dc.identifier.wosid | 001128010300001 | - |
dc.identifier.bibliographicCitation | ELECTRONICS LETTERS, v.59, no.24 | - |
dc.citation.title | ELECTRONICS LETTERS | - |
dc.citation.volume | 59 | - |
dc.citation.number | 24 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordAuthor | image processing | - |
dc.subject.keywordAuthor | neural nets | - |
dc.subject.keywordAuthor | neural net architecture | - |
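The abstract describes the two stages only at a high level. Below is a minimal, hypothetical sketch of how mutation restricted by a priori architectural information and patience-based early-stopping could compose in a NAS-style search loop. The search space `CHOICES`, the `evaluate` callback (e.g. a proxy for fine-tuned validation accuracy), and all other names are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical per-layer search space; the paper's actual space is not
# given in this record, so these keys and options are assumptions.
CHOICES = {
    "block3_width": [128, 192, 256],
    "block4_width": [256, 384, 512],
    "head_depth": [1, 2, 3],
}

def mutate(parent, mutable_keys, rng):
    """Stage 1 (mutation): resample one option, restricted to the layers
    that a priori architectural information marks as worth searching."""
    child = dict(parent)
    key = rng.choice(mutable_keys)
    child[key] = rng.choice(CHOICES[key])
    return child

def two_stage_search(parent, evaluate, mutable_keys,
                     budget=50, patience=5, seed=0):
    """Stage 2 (early-stopping): abandon the search once `patience`
    consecutive mutations fail to improve the best score."""
    rng = random.Random(seed)
    best, best_score = parent, evaluate(parent)
    stale = 0
    for _ in range(budget):
        child = mutate(best, mutable_keys, rng)
        score = evaluate(child)
        if score > best_score:
            best, best_score, stale = child, score, 0
        else:
            stale += 1
            if stale >= patience:  # terminate mid-search, saving NAS cost
                break
    return best, best_score

# Example usage with a dummy score that prefers smaller architectures;
# a real run would plug in a fine-tuning/validation proxy instead.
if __name__ == "__main__":
    parent = {k: v[-1] for k, v in CHOICES.items()}
    best, score = two_stage_search(
        parent, evaluate=lambda arch: -sum(arch.values()),
        mutable_keys=list(CHOICES),
    )
    print(best, score)
```

In this sketch, restricting `mutable_keys` stands in for the paper's a priori architectural information: the fewer layers the search may touch, the smaller the candidate space and the cheaper each search step becomes.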