A faster learning neural network classifier using selective backpropagation


Bibliographic Details
Main Author: Craven, Michael P.
Format: Conference or Workshop Item
Published: 1997
Online Access: https://eprints.nottingham.ac.uk/1901/
Description
Summary: The problem of saturation in neural network classifiers is discussed. The listprop algorithm is presented, which reduces saturation and dramatically increases the rate of convergence. The technique uses selective application of the backpropagation algorithm, such that training is carried out only for patterns which have not yet been learnt to a desired output activation tolerance. Furthermore, in the output layer, training is carried out only for weights connected to those output neurons in the output vector which are still in error, which further reduces neuron saturation and learning time. Results are presented for a 196-100-46 Multi-Layer Perceptron (MLP) neural network used for text-to-speech conversion, which show that convergence is achieved for up to 99.7% of the training set, compared to at best 94.8% for standard backpropagation. Convergence is achieved in 38% of the time taken by the standard algorithm.
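The selective rule described in the summary can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the network size (2-4-2 rather than 196-100-46), the toy training task, the learning rate, and the tolerance value are all assumptions chosen for brevity. The two listprop-style elements from the abstract are (1) skipping whole patterns already learnt to within a tolerance, and (2) zeroing the error signal for output neurons that are individually within tolerance, so their incoming weights are not updated.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task and 2-4-2 MLP: purely illustrative, not the paper's 196-100-46 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # class = first input

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 2)); b2 = np.zeros(2)
lr, tol = 0.5, 0.1   # assumed values, not from the paper

for epoch in range(5000):
    learned = 0
    for x, t in zip(X, T):
        h = sigmoid(x @ W1 + b1)          # hidden activations
        y = sigmoid(h @ W2 + b2)          # output activations
        err = t - y
        if np.all(np.abs(err) < tol):     # (1) pattern already learnt: skip it
            learned += 1
            continue
        err[np.abs(err) < tol] = 0.0      # (2) freeze outputs already in tolerance
        d2 = err * y * (1 - y)            # output-layer delta (zero where frozen)
        d1 = (d2 @ W2.T) * h * (1 - h)    # hidden-layer delta
        W2 += lr * np.outer(h, d2); b2 += lr * d2
        W1 += lr * np.outer(x, d1); b1 += lr * d1
    if learned == len(X):                 # stop once every pattern is in tolerance
        break
```

Skipping learnt patterns keeps already-saturated neurons from being pushed further toward their activation limits, which is the mechanism the abstract credits for the faster convergence.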