| _version_ |
1860799603339689984
|
| building |
INTELEK Repository
|
| collection |
Online Access
|
| collectionurl |
https://intelek.unisza.edu.my/intelek/pages/search.php?search=!collection407072
|
| date |
2016-05-03 11:51:21
|
| eventvenue |
Jeju Island, South Korea
|
| format |
Restricted Document
|
| id |
6648
|
| institution |
UniSZA
|
| originalfilename |
0134-01-FH03-FIK-16-05752.jpg
|
| person |
norman
|
| recordtype |
oai_dc
|
| resourceurl |
https://intelek.unisza.edu.my/intelek/pages/view.php?ref=6648
|
| spelling |
6648 https://intelek.unisza.edu.my/intelek/pages/view.php?ref=6648 https://intelek.unisza.edu.my/intelek/pages/search.php?search=!collection407072 Restricted Document Conference Conference Paper image/jpeg inches 96 96 norman 2016-05-03 11:51:21 771 1405x771 1405 29 29 0134-01-FH03-FIK-16-05752.jpg UniSZA Private Access A framework for multiprocessor neural networks systems Artificial neural networks (ANNs) can simplify classification tasks and have been steadily improving in both accuracy and efficiency. However, several issues must be addressed when constructing an ANN to handle data at different scales, especially data that yield low accuracy scores. Parallelism is considered a practical solution for handling large workloads; however, a comprehensive understanding is needed to build a scalable neural network that achieves optimal training time for a large network. This paper therefore proposes several strategies, including neural ensemble techniques and a parallel architecture, for distributing data across several network processor structures to reduce the time required for recognition tasks without compromising accuracy. Initial results indicate that the proposed strategies improve the speedup of large-scale neural networks while maintaining acceptable accuracy. International Conference on ICT Convergence: "Global Open Innovation Summit for Smart ICT Convergence" Jeju Island, South Korea
|
| spellingShingle |
A framework for multiprocessor neural networks systems
|
| summary |
Artificial neural networks (ANNs) can simplify classification tasks and have been steadily improving in both accuracy and efficiency. However, several issues must be addressed when constructing an ANN to handle data at different scales, especially data that yield low accuracy scores. Parallelism is considered a practical solution for handling large workloads; however, a comprehensive understanding is needed to build a scalable neural network that achieves optimal training time for a large network. This paper therefore proposes several strategies, including neural ensemble techniques and a parallel architecture, for distributing data across several network processor structures to reduce the time required for recognition tasks without compromising accuracy. Initial results indicate that the proposed strategies improve the speedup of large-scale neural networks while maintaining acceptable accuracy.
|
| title |
A framework for multiprocessor neural networks systems
|
| title_full |
A framework for multiprocessor neural networks systems
|
| title_fullStr |
A framework for multiprocessor neural networks systems
|
| title_full_unstemmed |
A framework for multiprocessor neural networks systems
|
| title_short |
A framework for multiprocessor neural networks systems
|
| title_sort |
framework for multiprocessor neural networks systems
|