Distributed proximal-gradient methods for convex optimization with inequality constraints
We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem into an unconstrained one using the exact penalty function method. The transformed problem has fewer variables and a simpler structure than those arising in existing distributed primal–dual subgradient methods for constrained distributed optimization. Exploiting this special structure, we then propose a distributed proximal-gradient algorithm over a time-varying connectivity network, and establish a convergence rate that depends on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth, our method still achieves a convergence rate of O(1/k) after k iterations, which is faster than the O(1/√k) rate of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of the proposed method.
| Main Authors: | Li, J.; Wu, Changzhi; Wu, Z.; Long, Q.; Wang, Xiangyu |
|---|---|
| Format: | Journal Article |
| Published: | Australian Mathematical Society, 2014 |
| DOI: | 10.1017/S1446181114000273 |
| Subjects: | proximal-gradient method; convex optimization; exact penalty function method; distributed algorithm |
| Online Access: | http://hdl.handle.net/20.500.11937/41558 |
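For context, the exact penalty function method mentioned in the abstract is commonly stated as the standard ℓ1-type reformulation shown below. This is background only; the paper's precise formulation, assumptions and choice of penalty parameter are not reproduced here.

```latex
% Standard l1 exact penalty reformulation (background sketch, not the paper's exact statement).
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{N} f_i(x)
\quad \text{subject to} \quad g_j(x) \le 0, \; j = 1, \dots, m
\qquad \Longrightarrow \qquad
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{N} f_i(x)
  + \rho \sum_{j=1}^{m} \max\{0,\, g_j(x)\}
```

For convex $f_i$ and $g_j$, a constraint qualification such as Slater's condition, and a sufficiently large penalty parameter $\rho > 0$, the penalized problem has the same minimizers as the constrained one, and the nonsmooth penalty term is exactly the kind of structure a proximal-gradient method is designed to handle.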
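As a rough illustration of the kind of iteration the abstract describes, here is a minimal sketch of a distributed proximal-gradient scheme with an exact penalty term over a time-varying network. All problem data, mixing weights, step sizes and the specific constraint (a componentwise upper bound, chosen because its penalty has a closed-form proximal operator) are invented for illustration; this is not the authors' algorithm as published.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): a distributed proximal-gradient
# iteration with an exact penalty term over a time-varying network.
# All data, weights and parameters below are invented for illustration.

rng = np.random.default_rng(0)

N, d = 5, 3                                            # agents, decision-variable dimension
A = [rng.standard_normal((4, d)) for _ in range(N)]    # local least-squares data
b = [rng.standard_normal(4) for _ in range(N)]
u = np.ones(d)                                         # global constraint: x <= u componentwise
rho = 10.0                                             # penalty parameter (assumed large enough)
step = 0.01                                            # gradient step size
K = 500                                                # iterations

def prox_penalty(v, lam):
    """Prox of lam * rho * sum_j max(0, v_j - u_j): pull violating coordinates back."""
    return v - np.clip(v - u, 0.0, lam * rho)

# Two doubly stochastic mixing matrices; alternating between them mimics a
# time-varying (but jointly connected) directed-ring communication graph.
P = np.roll(np.eye(N), 1, axis=0)                      # cyclic permutation matrix
W = [(np.eye(N) + P) / 2.0, (np.eye(N) + P.T) / 2.0]

x = np.zeros((N, d))                                   # one local iterate per agent
for k in range(K):
    mixed = W[k % 2] @ x                               # consensus step over the current graph
    for i in range(N):
        grad = A[i].T @ (A[i] @ mixed[i] - b[i])       # gradient of the local smooth term
        x[i] = prox_penalty(mixed[i] - step * grad, step)

x_avg = x.mean(axis=0)
print("consensus estimate:", x_avg)
print("max constraint violation:", float(np.max(x_avg - u)))
```

Each iteration alternates a consensus (averaging) step with a local gradient step and a proximal step on the penalty; the alternating mixing matrices stand in for the time-changing connectivity network considered in the paper.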