Optimization Methods In Training Neural Networks

There are a number of extremizing techniques for solving linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, this method requires...

Full description

Bibliographic Details
Main Author: Sathasivam, Saratha
Format: Thesis
Language:English
Published: 2003
Subjects:
Online Access:http://eprints.usm.my/31158/
http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf
_version_ 1848876493800734720
author Sathasivam, Saratha
author_facet Sathasivam, Saratha
author_sort Sathasivam, Saratha
building USM Institutional Repository
collection Online Access
description There are a number of extremizing techniques to solve linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires calculation and storage of the second derivatives of the quadratic function involved. When the number of parameters, n, is large, it may be impractical to compute all the second derivatives. This is especially true for neural networks, where practical applications can require several hundred to many thousands of weights. For these particular cases, methods that require only first derivatives but still have quadratic termination are preferred.
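The abstract above refers to methods that use only first derivatives yet retain quadratic termination. The classical example of such a method is the conjugate-gradient algorithm; the sketch below is an illustration of that general idea, not code from the thesis itself (the function name, tolerance, and test matrix are all illustrative). It minimizes an n-dimensional quadratic f(x) = ½xᵀAx − bᵀx using gradient information alone and, in exact arithmetic, terminates within n iterations.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize f(x) = 0.5 * x @ A @ x - b @ x for symmetric positive
    definite A, using only first derivatives (the residual r = b - A x
    is the negative gradient). With exact arithmetic this terminates in
    at most n = len(b) iterations -- the quadratic-termination property."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x            # residual = negative gradient of f at x
    d = r.copy()             # initial search direction: steepest descent
    iters = 0
    while np.linalg.norm(r) > tol and iters < len(b):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad            # updated residual
        beta = (r_new @ r_new) / (r @ r)  # conjugacy coefficient
        d = r_new + beta * d              # new A-conjugate direction
        r = r_new
        iters += 1
    return x, iters

# Illustrative 2-D quadratic: the minimizer solves A x = b,
# and termination occurs within n = 2 iterations.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = conjugate_gradient(A, b, np.zeros(2))
```

Unlike Newton's method, no second derivatives (the Hessian A need only be available through matrix-vector products A @ d) are formed or stored, which is the advantage the abstract highlights for networks with hundreds or thousands of weights.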
first_indexed 2025-11-15T17:00:26Z
format Thesis
id usm-31158
institution Universiti Sains Malaysia
institution_category Local University
language English
last_indexed 2025-11-15T17:00:26Z
publishDate 2003
recordtype eprints
repository_type Digital Repository
spelling usm-31158 2017-07-27T04:46:59Z http://eprints.usm.my/31158/ Optimization Methods In Training Neural Networks Sathasivam, Saratha QA1 Mathematics (General) 2003-07 Thesis NonPeerReviewed application/pdf en http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf Sathasivam, Saratha (2003) Optimization Methods In Training Neural Networks. Masters thesis, Universiti Sains Malaysia.
spellingShingle QA1 Mathematics (General)
Sathasivam, Saratha
Optimization Methods In Training Neural Networks
title Optimization Methods In Training Neural Networks
title_full Optimization Methods In Training Neural Networks
title_fullStr Optimization Methods In Training Neural Networks
title_full_unstemmed Optimization Methods In Training Neural Networks
title_short Optimization Methods In Training Neural Networks
title_sort optimization methods in training neural networks
topic QA1 Mathematics (General)
url http://eprints.usm.my/31158/
http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf