Adaptive Second-order Derivative Approximate Greatest Descent Optimization for Deep Learning Neural Networks

Backpropagation using Stochastic Diagonal Approximate Greatest Descent (SDAGD) is a novel adaptive second-order derivative optimization method for updating the weights of deep learning neural networks. SDAGD applies a two-phase switching strategy to seek the solution from afar using a long-term optimal trajecto...
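The thesis text is not included in this record, but the abstract's description matches the published Approximate Greatest Descent iteration, in which the weight update is w ← w − (μI + diag(H))⁻¹ g with relative step length μ = ‖g‖ / R for a search radius R. Below is a minimal NumPy sketch of that reading; the function name sdagd_step, the radius parameter R, and the absolute-value truncation of the diagonal Hessian are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def sdagd_step(w, grad, h_diag, R=1.0, eps=1e-8):
    """One SDAGD-style update (a sketch, not the thesis implementation).

    Phase 1 (far from a minimum): mu = ||grad|| / R is large, so the
    step is roughly R * grad / ||grad||, a bounded step along the
    long-term trajectory.
    Phase 2 (near a minimum): mu shrinks toward 0 and the update
    approaches a diagonal Newton step grad / |h_diag|.
    """
    mu = np.linalg.norm(grad) / R              # relative step length
    # Absolute value keeps the diagonal curvature positive (assumption).
    return w - grad / (mu + np.abs(h_diag) + eps)

# Toy usage on a separable quadratic 0.5 * sum(a * w**2),
# whose gradient is a * w and whose diagonal Hessian is a.
a = np.array([1.0, 10.0])
w = np.array([5.0, -3.0])
for _ in range(50):
    w = sdagd_step(w, a * w, a)
print(w)  # both components approach 0
```

Note how the two phases need no explicit switch in this form: the relative step length μ automatically dominates the denominator far from the solution and vanishes near it, where the update degenerates into a Newton-like step.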

Bibliographic Details
Main Author: Tan, Hong Hui
Format: Thesis
Published: Curtin University 2019
Online Access: http://hdl.handle.net/20.500.11937/77991