Detailed Information

Convergence Results of a Nested Decentralized Gradient Method for Non-strongly Convex Problems

Full metadata record
dc.contributor.author: Choi, Woocheol
dc.contributor.author: Kim, Doheon
dc.contributor.author: Yun, Seok-Bae
dc.date.accessioned: 2023-05-03T09:47:49Z
dc.date.available: 2023-05-03T09:47:49Z
dc.date.issued: 2022-10
dc.identifier.issn: 0022-3239
dc.identifier.issn: 1573-2878
dc.identifier.uri: https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/112777
dc.description.abstract: We are concerned with the convergence of the NEAR-DGD(+) (Nested Exact Alternating Recursion Distributed Gradient Descent) method, introduced to solve distributed optimization problems. Under the assumption of strong convexity of the local objective functions and Lipschitz continuity of their gradients, linear convergence was established in Berahas et al. (IEEE Trans Autom Control 64:3141-3155, 2019). In this paper, we investigate the convergence properties of NEAR-DGD(+) in the absence of strong convexity. More precisely, we establish convergence results in the following two cases: (1) when only convexity is assumed on the objective function; (2) when the objective function is the composition of a strongly convex function with a rank-deficient matrix, which falls into the class of convex, quasi-strongly convex functions. Numerical results are provided to support the convergence results.
dc.format.extent: 33
dc.language: English
dc.language.iso: ENG
dc.publisher: Kluwer Academic/Plenum Publishers
dc.title: Convergence Results of a Nested Decentralized Gradient Method for Non-strongly Convex Problems
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1007/s10957-022-02069-0
dc.identifier.scopusid: 2-s2.0-85136602945
dc.identifier.wosid: 000843263700001
dc.identifier.bibliographicCitation: Journal of Optimization Theory and Applications, v.195, no.1, pp. 172-204
dc.citation.title: Journal of Optimization Theory and Applications
dc.citation.volume: 195
dc.citation.number: 1
dc.citation.startPage: 172
dc.citation.endPage: 204
dc.type.docType: Article
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Operations Research & Management Science
dc.relation.journalResearchArea: Mathematics
dc.relation.journalWebOfScienceCategory: Operations Research & Management Science
dc.relation.journalWebOfScienceCategory: Mathematics, Applied
dc.subject.keywordPlus: DISTRIBUTED OPTIMIZATION
dc.subject.keywordPlus: CONSENSUS
dc.subject.keywordPlus: ALGORITHMS
dc.subject.keywordPlus: NETWORKS
dc.subject.keywordAuthor: Distributed gradient methods
dc.subject.keywordAuthor: NEAR-DGD(+)
dc.subject.keywordAuthor: Quasi-strong convexity
dc.identifier.url: https://link.springer.com/article/10.1007/s10957-022-02069-0
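
The abstract above refers to the NEAR-DGD(+) iteration only at a high level. The following is a minimal Python sketch of a nested decentralized gradient step in the spirit of NEAR-DGD(+), not the authors' implementation: each agent takes a local gradient step and then performs t(k) rounds of consensus averaging with a doubly stochastic mixing matrix W, where t(k) grows with the iteration counter in the "+" variant. The quadratic local objectives, the ring-graph mixing matrix, and the function name near_dgd are illustrative assumptions.

import numpy as np

# Illustrative sketch of a NEAR-DGD(+)-style iteration (not the authors' code).
# Each of n agents holds a local objective f_i(x) = 0.5 * ||A_i x - b_i||^2;
# the network's goal is to minimize the sum of the f_i.

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Hypothetical local data defining strongly convex quadratics (assumption).
A = [rng.standard_normal((4, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(4) for _ in range(n_agents)]

def local_grad(i, x):
    """Gradient of agent i's local objective at its local iterate x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (each agent averages
# with its two neighbors); rows and columns sum to 1.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def near_dgd(step_size=0.05, iters=200, plus=False):
    """Run the nested decentralized gradient sketch.

    Each iteration: (1) every agent takes a local gradient step,
    (2) the agents perform t(k) rounds of consensus averaging with W.
    With plus=True the number of consensus rounds grows with k
    (NEAR-DGD+ style); otherwise a single round is used.
    """
    x = np.zeros((n_agents, dim))          # row i = agent i's iterate
    for k in range(1, iters + 1):
        grads = np.array([local_grad(i, x[i]) for i in range(n_agents)])
        y = x - step_size * grads          # local computation step
        t_k = k if plus else 1             # number of nested consensus rounds
        for _ in range(t_k):
            y = W @ y                      # one round of neighbor averaging
        x = y
    return x

if __name__ == "__main__":
    x_final = near_dgd(plus=True, iters=50)
    # With growing consensus rounds the agents' iterates should nearly agree.
    print("max disagreement:", np.abs(x_final - x_final.mean(axis=0)).max())

In this strongly convex quadratic toy setting, running with plus=True drives the agents' iterates toward a common point, while plus=False behaves like a plain decentralized gradient step with a single consensus round per iteration.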
Appears in Collections
COLLEGE OF SCIENCE AND CONVERGENCE TECHNOLOGY > ERICA Department of Mathematical Data Science > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Doheon
ERICA College of Software Convergence (ERICA Department of Mathematical Data Science)
