Adaptive Second-order Derivative Approximate Greatest Descent Optimization for Deep Learning Neural Networks


Bibliographic Details
Main Author: Tan, Hong Hui
Format: Thesis
Published: Curtin University 2019
Online Access: http://hdl.handle.net/20.500.11937/77991
Description
Summary: Backpropagation using Stochastic Diagonal Approximate Greatest Descent (SDAGD) is a novel adaptive second-order derivative optimization method for updating the weights of deep learning neural networks. SDAGD applies a two-phase switching strategy: far from the solution it seeks the optimum along a long-term optimal trajectory, and it automatically switches to the Newton method when nearer to the optimal solution. SDAGD has the advantages of the steepest training roll-off rate, adaptive adjustment of the step length, and the ability to deal with vanishing-gradient issues in deep architectures.
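
To illustrate the kind of update the summary describes, the following minimal NumPy sketch combines a diagonal second-order scaling with a two-phase switch: far from the optimum the step is damped by a relative step length proportional to the gradient norm, and near the optimum the damping vanishes so the update reduces to a diagonal Newton step. The function name sdagd_step, the damping rule mu = ||g||/radius, and the switching tolerance are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def sdagd_step(w, grad, hess_diag, radius=1.0, switch_tol=1e-2):
    """One hypothetical SDAGD-style update on a parameter vector w.

    Phase 1 (far from optimum): damp the diagonal Newton step with a
    relative step length mu = ||grad|| / radius, giving bounded steps
    along a long-term trajectory.
    Phase 2 (near optimum, ||grad|| small): mu = 0, so the update
    reduces to a diagonal Newton step.
    """
    grad_norm = np.linalg.norm(grad)
    mu = grad_norm / radius if grad_norm > switch_tol else 0.0  # two-phase switch
    # Diagonal second-order scaling; epsilon guards against division by zero.
    step = grad / (np.abs(hess_diag) + mu + 1e-8)
    return w - step

# Usage on a toy quadratic loss 0.5 * sum(h * w**2), whose Hessian diagonal is h.
h = np.array([2.0, 0.5, 4.0])
w = np.array([3.0, -2.0, 1.0])
for _ in range(20):
    grad = h * w                      # gradient of the quadratic loss
    w = sdagd_step(w, grad, hess_diag=h)
print(w)                              # converges toward the minimum at the origin
```

In a stochastic setting, grad and hess_diag would be estimated per mini-batch, which is where the adaptive step-length adjustment described in the summary becomes relevant.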