Detailed Information


An In-Depth Analysis of Distributed Training of Deep Neural Networks

Authors
Ko, Yunyong; Choi, Kibong; Seo, Jiwon; Kim, Sangwook
Issue Date
May-2021
Publisher
IEEE
Keywords
deep learning; distributed training algorithm
Citation
Proceedings - 2021 IEEE 35th International Parallel and Distributed Processing Symposium, IPDPS 2021, pp. 994-1003
Indexed
SCOPUS
Journal Title
Proceedings - 2021 IEEE 35th International Parallel and Distributed Processing Symposium, IPDPS 2021
Start Page
994
End Page
1003
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/141886
DOI
10.1109/IPDPS49936.2021.00108
Abstract
As the popularity of deep learning in industry grows rapidly, efficient training of deep neural networks (DNNs) becomes increasingly important. To train a DNN on a large amount of data, distributed training with data parallelism has been widely adopted. However, communication overhead limits the scalability of distributed training, and a number of distributed training algorithms have been proposed to reduce it. The model accuracy and training performance of these algorithms can differ depending on various factors such as cluster settings, training models/datasets, and the optimization techniques applied. To adopt a distributed training algorithm appropriate for a given situation, one therefore needs a thorough understanding of how these algorithms behave in terms of model accuracy and training performance across different settings. Toward this end, this paper reviews and evaluates seven popular distributed training algorithms (BSP, ASP, SSP, EASGD, AR-SGD, GoSGD, and AD-PSGD) with respect to model accuracy and training performance in a variety of settings. Specifically, we evaluate these algorithms on two CNN models, in different cluster settings, and with three well-known optimization techniques. Through extensive evaluation and analysis, we make several interesting discoveries. For example, we find that some distributed training algorithms (SSP, EASGD, and GoSGD) have a strongly negative impact on model accuracy because they adopt intermittent and asymmetric communication to improve training performance, and that the communication overhead of some centralized algorithms (ASP and SSP) is much higher than expected in a cluster setting with limited network bandwidth because of the parameter server (PS) bottleneck. These findings, and many more in the paper, can guide the adoption of appropriate distributed training algorithms in industry; they can also be useful in academia for designing new distributed training algorithms.
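
For context, the following is a minimal sketch, not taken from the paper, of one synchronous data-parallel training step with gradient all-reduce, in the spirit of the AR-SGD/BSP-style baselines named in the abstract. It assumes PyTorch with torch.distributed already initialized (e.g., via torchrun) on an NCCL or Gloo backend; the function name, learning rate, and model/dataset are illustrative placeholders.

# Minimal sketch (not from the paper): synchronous data-parallel SGD with
# gradient all-reduce. Assumes torch.distributed is already initialized.
import torch
import torch.distributed as dist

def allreduce_sgd_step(model, loss_fn, batch, lr=0.1):
    """One synchronous step: local backward pass, then average gradients
    across all workers before applying the same update everywhere."""
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients from every worker, then divide to get the mean.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

    # Plain SGD update; every worker applies the identical averaged gradient,
    # so model replicas stay in sync after each step.
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is not None:
                param -= lr * param.grad
    return loss.item()

Because every replica applies the same averaged gradient, the models stay identical after each step; the intermittent and asymmetric communication schemes discussed in the abstract trade away exactly this tight synchronization to lower communication cost.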
Appears in
Collections
Seoul Campus, College of Engineering > School of Computer Software > 1. Journal Articles



Related Researcher

Seo, Ji won
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
