Detailed Information

Adaptive Weight Decay for Deep Neural Networks

Full metadata record
DC Field | Value
dc.contributor.author | Nakamura, Kensuke
dc.contributor.author | Hong, Byung-Woo
dc.date.available | 2020-04-23T09:20:39Z
dc.date.issued | 2019-08
dc.identifier.issn | 2169-3536
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/39048
dc.description.abstract | Regularization in the optimization of deep neural networks is often critical to avoid undesirable over-fitting and to improve the generalization of the model. One of the most popular regularization techniques is to impose an L2 penalty on the model parameters, causing the parameters to decay, which is called weight decay; the decay rate is usually held constant across all model parameters throughout optimization. In contrast to this constant-rate approach, we propose to determine the weight decay of each parameter adaptively, based on the residual that measures the dissimilarity between the current state of the model and the observations. In the proposed adaptive weight decay (AdaDecay), the gradient norms are normalized within each layer, and the degree of regularization for each parameter is set in proportion to the magnitude of its gradient using the sigmoid function. We empirically compare AdaDecay with state-of-the-art optimization algorithms on popular benchmark datasets (MNIST, Fashion-MNIST, and CIFAR-10) with conventional neural network models ranging from shallow to deep. The quantitative evaluation indicates that AdaDecay improves generalization, yielding better accuracy across all datasets and models. [A minimal sketch of this update rule appears after the metadata record.]
dc.format.extent | 9
dc.language | English
dc.language.iso | ENG
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title | Adaptive Weight Decay for Deep Neural Networks
dc.type | Article
dc.identifier.doi | 10.1109/ACCESS.2019.2937139
dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp. 118857-118865
dc.description.isOpenAccess | Y
dc.identifier.wosid | 000484355600004
dc.identifier.scopusid | 2-s2.0-85095375118
dc.citation.endPage | 118865
dc.citation.startPage | 118857
dc.citation.title | IEEE ACCESS
dc.citation.volume | 7
dc.type.docType | Article
dc.publisher.location | United States
dc.subject.keywordAuthor | Adaptive regularization
dc.subject.keywordAuthor | deep learning
dc.subject.keywordAuthor | neural networks
dc.subject.keywordAuthor | stochastic gradient descent
dc.subject.keywordAuthor | weight-decay
dc.relation.journalResearchArea | Computer Science
dc.relation.journalResearchArea | Engineering
dc.relation.journalResearchArea | Telecommunications
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory | Telecommunications
dc.description.journalRegisteredClass | scie
dc.description.journalRegisteredClass | scopus
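
The abstract above describes the AdaDecay update: gradient magnitudes are normalized within each layer, and a sigmoid of the normalized magnitude scales the decay rate of each parameter. The following is a minimal PyTorch-style sketch of one such update step, assuming a standard SGD loop. The function name adadecay_step, the constants lr, base_decay, and alpha, and the zero-mean, unit-variance normalization are illustrative assumptions, not the authors' reference implementation.

import torch

def adadecay_step(model, lr=0.1, base_decay=1e-4, alpha=1.0):
    """One SGD step with adaptive weight decay (illustrative sketch).

    Per the abstract: gradient magnitudes are normalized within each
    layer, and the decay rate of each parameter is scaled in proportion
    to its gradient magnitude through a sigmoid. The constants and the
    normalization scheme are assumptions, not the paper's exact rule.
    """
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is None:
                continue
            g = param.grad
            # Normalize gradient magnitudes within the layer (zero mean,
            # unit variance; one plausible reading of "the gradient norms
            # are normalized within each layer").
            g_abs = g.abs()
            g_hat = (g_abs - g_abs.mean()) / (g_abs.std(unbiased=False) + 1e-8)
            # Per-parameter decay rate: a sigmoid of the normalized
            # magnitude scales the base decay coefficient.
            decay = base_decay * torch.sigmoid(alpha * g_hat)
            # Gradient step plus the adaptive weight-decay term.
            param.add_(g + decay * param, alpha=-lr)

In use, this would replace optimizer.step() in a standard training loop: zero the gradients, compute the loss, call loss.backward(), then adadecay_step(model).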
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Software > Department of Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Hong, Byung-Woo
College of Software (Department of Artificial Intelligence)
