Convergence Results of a Nested Decentralized Gradient Method for Non-strongly Convex Problems
- Authors
- Choi, Woocheol; Kim, Doheon; Yun, Seok-Bae
- Issue Date
- Oct-2022
- Publisher
- Kluwer Academic/Plenum Publishers
- Keywords
- Distributed gradient methods; NEAR-DGD(+); Quasi-strong convexity
- Citation
- Journal of Optimization Theory and Applications, v.195, no.1, pp. 172-204
- Pages
- 33
- Indexed
- SCIE; SCOPUS
- Journal Title
- Journal of Optimization Theory and Applications
- Volume
- 195
- Number
- 1
- Start Page
- 172
- End Page
- 204
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/112777
- DOI
- 10.1007/s10957-022-02069-0
- ISSN
- 0022-3239; 1573-2878
- Abstract
- We are concerned with the convergence of the NEAR-DGD(+) (Nested Exact Alternating Recursion Distributed Gradient Descent) method, introduced to solve distributed optimization problems. Under the assumptions that the local objective functions are strongly convex and their gradients are Lipschitz continuous, linear convergence was established in Berahas et al. (IEEE Trans Autom Control 64:3141-3155, 2019). In this paper, we investigate the convergence of NEAR-DGD(+) in the absence of strong convexity. More precisely, we establish convergence results in the following two cases: (1) when only convexity of the objective function is assumed; (2) when the objective function is the composition of a strongly convex function with a rank-deficient matrix, which falls into the class of convex, quasi-strongly convex functions. Numerical results are provided to support the convergence results.
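The NEAR-DGD(+) iteration described in the abstract alternates an exact local gradient step with a growing number of nested consensus (averaging) rounds across agents. A minimal sketch on a toy three-agent quadratic problem follows; the local objectives, mixing matrix, step size, and consensus schedule are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Hypothetical toy instance: agent i minimizes f_i(x) = 0.5 * a[i] * (x - b[i])^2.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, -1.0])
x_star = np.sum(a * b) / np.sum(a)  # minimizer of the global sum of f_i

# Doubly stochastic mixing matrix for a fully connected 3-agent network.
W = 0.25 * np.ones((3, 3)) + 0.25 * np.eye(3)

x = b.copy()   # each agent starts at its own local minimizer
alpha = 0.1    # step size, assumed small enough for stability

for k in range(200):
    y = x - alpha * a * (x - b)         # exact local gradient step
    t = min(k + 1, 30)                  # NEAR-DGD(+): growing number of consensus rounds
    x = np.linalg.matrix_power(W, t) @ y  # nested consensus: multiply by W^t

# With the growing consensus schedule, all agents end near x_star = -1/3.
```

The growing exponent `t` is what distinguishes NEAR-DGD+ from plain NEAR-DGD (fixed number of rounds); with increasing consensus accuracy the iterates track centralized gradient descent on the average objective.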
- Appears in
Collections - COLLEGE OF SCIENCE AND CONVERGENCE TECHNOLOGY > ERICA Department of Mathematical Data Science > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.