Detailed Information

Cited 2 times in Web of Science; cited 3 times in Scopus

Soft Memory Box: A Virtual Shared Memory Framework for Fast Deep Neural Network Training in Distributed High Performance Computing

Full metadata record
dc.contributor.author: Ahn, Shinyoung
dc.contributor.author: Kim, Joongheon
dc.contributor.author: Lim, Eunji
dc.contributor.author: Kang, Sungwon
dc.date.available: 2019-01-22T14:20:13Z
dc.date.issued: 2018-05-08
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/1495
dc.description.abstract: Deep learning is one of the most promising machine learning methodologies. It is widely used in various application domains, e.g., image recognition, voice recognition, and natural language processing. In order to improve learning accuracy, deep neural networks have evolved by: 1) increasing the number of layers and 2) increasing the number of parameters in massive models. This implies that distributed deep learning platforms need to evolve to: 1) deal with huge/complex deep neural networks and 2) process massive training data with high-performance computing resources. This paper proposes a new virtual shared memory framework, called Soft Memory Box (SMB), which enables sharing the memory of a remote node among distributed processes in the nodes so as to improve communication performance via parameter sharing. According to data-intensive performance evaluation results, the communication time of deep learning using the proposed SMB is 2.1 times faster than that using the message passing interface (MPI). In addition, the communication time of the SMB-based asynchronous parameter update becomes 2-7 times faster than that using MPI, depending on the deep learning model and the number of deep learning workers.
dc.format.extent: 12
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Soft Memory Box: A Virtual Shared Memory Framework for Fast Deep Neural Network Training in Distributed High Performance Computing
dc.type: Article
dc.identifier.doi: 10.1109/ACCESS.2018.2834146
dc.identifier.bibliographicCitation: IEEE ACCESS, v.6, pp 26493 - 26504
dc.description.isOpenAccess: N
dc.identifier.wosid: 000434945000001
dc.identifier.scopusid: 2-s2.0-85046765445
dc.citation.endPage: 26504
dc.citation.startPage: 26493
dc.citation.title: IEEE ACCESS
dc.citation.volume: 6
dc.type.docType: Article
dc.publisher.location: United States
dc.subject.keywordAuthor: High performance computing
dc.subject.keywordAuthor: distributed computing
dc.subject.keywordAuthor: soft memory box
dc.subject.keywordAuthor: shared memory
dc.subject.keywordAuthor: deep neural network
dc.subject.keywordAuthor: distributed deep learning
dc.subject.keywordPlus: RECOGNITION
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
Files in This Item
There are no files associated with this item.
Appears in Collections:
College of Software > School of Computer Science and Engineering > 1. Journal Articles


