Detailed Information

Cited 0 times in Web of Science; cited 2 times in Scopus

ALADDIN: Asymmetric Centralized Training for Distributed Deep Learning

Full metadata record
dc.contributor.author: Ko, Yunyong
dc.contributor.author: Choi, Kibong
dc.contributor.author: Jei, Hyunseung
dc.contributor.author: Lee, Dongwon
dc.contributor.author: Kim, Sang-Wook
dc.date.accessioned: 2022-07-06T11:57:16Z
dc.date.available: 2022-07-06T11:57:16Z
dc.date.created: 2021-12-08
dc.date.issued: 2021-10
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/140699
dc.description.abstract: To speed up the training of massive deep neural network (DNN) models, distributed training has been widely studied. In general, centralized training, a type of distributed training, suffers from the communication bottleneck between a parameter server (PS) and workers. On the other hand, decentralized training suffers from increased parameter variance among workers, which causes slower model convergence. Addressing this dilemma, in this work, we propose a novel centralized training algorithm, ALADDIN, employing asymmetric communication between PS and workers for the PS bottleneck problem and novel updating strategies for both local and global parameters to mitigate the increased-variance problem. Through a convergence analysis, we show that the convergence rate of ALADDIN is O(1/√(nk)) on the non-convex problem, where n is the number of workers and k is the number of training iterations. The empirical evaluation using ResNet-50 and VGG-16 models demonstrates that (1) ALADDIN shows significantly better training throughput with up to 191% and 34% improvement compared to a synchronous algorithm and the state-of-the-art decentralized algorithm, respectively, (2) models trained by ALADDIN converge to accuracies comparable to those of the synchronous algorithm within the shortest time, and (3) the convergence of ALADDIN is robust under various heterogeneous environments.
dc.language: English
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.title: ALADDIN: Asymmetric Centralized Training for Distributed Deep Learning
dc.type: Article
dc.contributor.affiliatedAuthor: Kim, Sang-Wook
dc.identifier.doi: 10.1145/3459637.3482412
dc.identifier.scopusid: 2-s2.0-85119205605
dc.identifier.bibliographicCitation: International Conference on Information and Knowledge Management, Proceedings, pp.863 - 872
dc.relation.isPartOf: International Conference on Information and Knowledge Management, Proceedings
dc.citation.title: International Conference on Information and Knowledge Management, Proceedings
dc.citation.startPage: 863
dc.citation.endPage: 872
dc.type.rims: ART
dc.type.docType: Conference Paper
dc.description.journalClass: 1
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: centralized training
dc.subject.keywordAuthor: distributed deep learning
dc.subject.keywordAuthor: heterogeneous systems
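
For readers unfamiliar with the centralized (parameter-server) training setting named in the abstract, the sketch below illustrates the basic push/pull pattern between a parameter server and several workers. It is a minimal illustration only, not the paper's ALADDIN algorithm: the ParameterServer and worker names, the toy quadratic loss, and the symmetric push/pull calls are assumptions made for the example; ALADDIN's asymmetric communication and its local/global update rules are defined in the paper itself.

```python
# Minimal sketch of centralized (parameter-server) training with multiple workers.
# NOT the ALADDIN algorithm; all names and the toy loss are illustrative assumptions.
import threading
import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.params = np.zeros(dim)   # global model parameters
        self.lr = lr
        self.lock = threading.Lock()

    def push(self, grad):
        # Workers push local gradients; the PS applies them to the global model.
        with self.lock:
            self.params -= self.lr * grad

    def pull(self):
        # Workers pull a copy of the current global parameters.
        with self.lock:
            return self.params.copy()

def worker(ps, local_data, steps):
    # Each worker minimizes a toy local loss ||w - local_data||^2 / 2:
    # pull global params, compute the local gradient, push it back to the PS.
    for _ in range(steps):
        w = ps.pull()
        grad = w - local_data
        ps.push(grad)

if __name__ == "__main__":
    dim, n_workers = 4, 3
    ps = ParameterServer(dim)
    shards = [np.random.randn(dim) for _ in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(ps, s, 50)) for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # For this toy loss, the global parameters approach the mean of the workers' shards.
    print("global params:", ps.params)
    print("shard mean   :", np.mean(shards, axis=0))
```

In this symmetric pattern every worker both pushes gradients to and pulls parameters from the PS each step, which is exactly the communication pattern the abstract identifies as the PS bottleneck; the paper's contribution is to make that communication asymmetric and to change how local and global parameters are updated.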
Files in This Item
Appears in Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Sang-Wook
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
