Gradient-free method for nonsmooth distributed optimization

In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend Nesterov's (centralized) random gradient-free algorithm and Gaussian smoothing technique to the distributed case, and then prove the convergence of the algorithm. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may make it attractive in practice. Since only cost function values are required, in theory our method may lose a factor of up to d (the dimension of each agent's decision variable) in the convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can even outperform subgradient-based methods, which may be due to the slow convergence of subgradient-based methods on such problems.
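
The key ingredient described in the abstract is Nesterov's Gaussian smoothing oracle, which estimates a gradient from cost function values alone: with u ~ N(0, I), the quantity g = (f(x + μu) − f(x)) u / μ is an unbiased estimator of the gradient of the smoothed cost f_μ(x) = E_u[f(x + μu)]. The sketch below is a minimal illustration of this idea combined with a consensus (mixing) step over a doubly stochastic weight matrix, not the authors' exact algorithm; the function names, step sizes, and the toy three-agent network are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's exact algorithm): distributed
# gradient-free optimization via Nesterov-style Gaussian smoothing.
# Each agent i holds a local nonsmooth convex cost f_i and an estimate x_i;
# agents average their neighbours' estimates with a doubly stochastic
# matrix W, then step along a two-point gradient-free estimate of f_i.
import numpy as np

def gaussian_smoothed_grad(f, x, mu, rng):
    """Two-point gradient-free oracle: E[g] equals the gradient of the
    Gaussian smoothing f_mu(x) = E_u[f(x + mu*u)], with u ~ N(0, I)."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def distributed_gradient_free(fs, W, x0, steps=2000, mu=1e-4, alpha=0.1, seed=0):
    """fs: list of local costs f_i; W: doubly stochastic weight matrix;
    x0: (n_agents, d) initial estimates. Returns the final estimates."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(steps):
        x = W @ x                      # consensus (mixing) step
        step = alpha / np.sqrt(k + 1)  # diminishing step size
        for i, f in enumerate(fs):
            x[i] -= step * gaussian_smoothed_grad(f, x[i], mu, rng)
    return x

# Toy usage: 3 agents jointly minimise sum_i |x - a_i| (nonsmooth),
# whose minimiser is the median of the a_i.
if __name__ == "__main__":
    a = [np.array([-1.0]), np.array([0.0]), np.array([2.0])]
    fs = [lambda x, ai=ai: float(np.abs(x - ai).sum()) for ai in a]
    W = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])   # doubly stochastic weights
    x = distributed_gradient_free(fs, W, np.full((3, 1), 3.0))
    print(x.ravel())  # all entries should approach the median, 0.0
```

Note that the agents only evaluate their local costs; no subgradient information is used, which is the point of the gradient-free approach summarized above.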


Bibliographic Details
Main Authors: Li, J., Wu, Changzhi, Wu, Z., Long, Q.
Format: Journal Article
Published: Springer 2015
Subjects: Convex optimization; Distributed algorithm; Gradient-free method; Gaussian smoothing
Online Access: http://hdl.handle.net/20.500.11937/35457
DOI: 10.1007/s10898-014-0174-2
Access: Restricted