Stochastic mirror descent method for distributed multi-agent optimization

Bibliographic Details
Main Authors: Li, J., Li, G., Wu, Z., Wu, Changzhi
Format: Journal Article
Published: Springer Verlag 2016
Online Access: http://hdl.handle.net/20.500.11937/4712
building Curtin Institutional Repository
collection Online Access
description © 2016 Springer-Verlag Berlin Heidelberg. This paper considers a distributed optimization problem over a time-varying multi-agent network, where each agent has local access to its own convex objective function and the agents cooperatively minimize the sum of their objective functions over the network. Based on the mirror descent method, we develop a distributed algorithm that uses subgradient information corrupted by stochastic errors. We first analyze the effect of these stochastic errors on the convergence of the algorithm, and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that, when there are stochastic errors in the subgradient evaluations, the algorithm asymptotically converges to the optimal value of the problem within an error level. The proposed algorithm can be viewed as a generalization of distributed subgradient projection methods, since it uses the more general Bregman divergence in place of the squared Euclidean distance. Finally, simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
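The description above outlines the algorithm's two ingredients: a consensus (mixing) step over the network, followed by a mirror descent step using noisy subgradients. The abstract does not give the exact update, so the following is a minimal illustrative sketch only: the entropic mirror map on the probability simplex (whose Bregman divergence is the KL divergence), the static mixing matrix `W`, and the function names `entropic_mirror_step` and `distributed_smd` are all assumptions, not the paper's implementation.

```python
import numpy as np

def entropic_mirror_step(y, noisy_subgrad, step):
    # Mirror descent step on the probability simplex with the entropic
    # mirror map; with the KL Bregman divergence the update reduces to
    # the multiplicative-weights rule, followed by renormalization.
    z = y * np.exp(-step * noisy_subgrad)
    return z / z.sum()

def distributed_smd(local_subgrads, W, x0, steps, noise_std=0.01, seed=0):
    # local_subgrads: one subgradient oracle per agent (assumed names).
    # W: doubly stochastic mixing matrix, held static here for brevity
    #    (the paper considers a time-varying network).
    rng = np.random.default_rng(seed)
    n = len(local_subgrads)
    X = np.tile(x0, (n, 1))              # row i = agent i's estimate
    for alpha in steps:
        X = W @ X                        # consensus (mixing) step
        for i in range(n):
            g = local_subgrads[i](X[i])
            # subgradient evaluation with stochastic error
            g_noisy = g + noise_std * rng.standard_normal(g.shape)
            X[i] = entropic_mirror_step(X[i], g_noisy, alpha)
    return X
```

For example, with linear local objectives f_i(x) = ⟨c_i, x⟩ on the simplex, every agent's iterate concentrates on the coordinate that minimizes the summed cost Σ_i c_i, despite the noise in the subgradient evaluations.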
format Journal Article
id curtin-20.500.11937-4712
institution Curtin University Malaysia
institution_category Local University
publishDate 2016
publisher Springer Verlag
recordtype eprints
repository_type Digital Repository
doi 10.1007/s11590-016-1071-z
access restricted
title Stochastic mirror descent method for distributed multi-agent optimization
url http://hdl.handle.net/20.500.11937/4712