Robustness of convergence proofs in numerical methods in unconstrained optimization

An iterative method to compute the minimum point of an unconstrained optimization problem can be viewed as a control system. Thus, to achieve robust solutions, it is desirable to have feedback solutions rather than open-loop control policies. A typical convergence proof for a numerical method in optimization examines what happens along the total path of a trajectory for all admissible initial values; it is therefore an open-loop type of analysis. On the other hand, a convergence proof based on the Lyapunov function theorem examines how the value of the objective function changes relative to the level sets of the function in a typical iteration, with each iteration re-started from a state that may contain numerical errors. This is a feedback type of control analysis, and it is therefore robust to numerical errors in the computation of the current position. We draw on an example due to Barbashin and Krasovskii and use Lyapunov function theory to illustrate the differences between open-loop and closed-loop convergence analysis of a numerical method in unconstrained optimization. It is also demonstrated that open-loop convergence along each trajectory for all possible initial conditions may not guarantee convergence to a global minimum point; it only establishes convergence to stationary points. What is needed is the concept of properly nested level sets of the objective function, which is a key requirement for global convergence in a proof using the Lyapunov function theorem. Globally, an objective function has properly nested level sets if all its level sets are topologically equivalent to concentric spherical surfaces. For convenience, brief reviews of the Lyapunov function theorem for the global convergence of an iterative system and of the Zoutendijk theorem for the convergence of a line search method in optimization are given.
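The abstract contrasts an open-loop (whole-trajectory) convergence argument with a feedback, Lyapunov-style argument that only checks the decrease of the objective relative to its level sets at each iteration, even when the iterate is re-started with numerical error. A minimal sketch of that per-iteration descent check is given below, assuming a convex quadratic objective whose elliptical level sets are properly nested; the function f, its gradient, the matrix A, the exact line search step, and the injected perturbation are illustrative choices and are not taken from the chapter.

    import numpy as np

    # Symmetric positive definite A: the level sets of f are concentric
    # ellipses, i.e. properly nested (illustrative choice, not from the chapter).
    A = np.array([[3.0, 1.0], [1.0, 2.0]])

    def f(x):
        return 0.5 * x @ A @ x

    def grad_f(x):
        return A @ x

    rng = np.random.default_rng(0)
    x = np.array([4.0, -3.0])              # arbitrary starting point
    for k in range(100):
        g = grad_f(x)
        if np.linalg.norm(g) < 1e-8:
            break
        alpha = (g @ g) / (g @ A @ g)      # exact line search for the quadratic case
        x_new = x - alpha * g
        # Lyapunov-style feedback check: only the decrease of f relative to the
        # current level set matters, not the history of the whole trajectory.
        assert f(x_new) < f(x)
        # Re-start from a slightly perturbed iterate to mimic numerical error in
        # the computed position; the per-iteration descent argument still applies.
        x = x_new + 1e-6 * rng.standard_normal(2)

    print("approximate minimizer:", x)

By contrast, an open-loop argument reasons about the exact sequence of iterates generated from each admissible starting point, so a perturbation of the current iterate would invalidate the particular trajectory being analysed; the feedback argument above depends only on where the current (possibly perturbed) iterate sits relative to the level sets of f.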


Bibliographic Details
Main Authors: Goh, B., Leong, W., Teo, Kok Lay
Other Authors: Honglei Xu, Xiangyu Wang
Format: Book Chapter
Published: Springer 2014
Online Access: http://hdl.handle.net/20.500.11937/63332
http://dx.doi.org/10.1007/978-94-017-8044-5_1