| _version_ |
1860797398580723712
|
| building |
INTELEK Repository
|
| collection |
Online Access
|
| collectionurl |
https://intelek.unisza.edu.my/intelek/pages/search.php?search=!collection407072
|
| date |
2015-12-02 12:33:34
|
| format |
Restricted Document
|
| id |
12556
|
| institution |
UniSZA
|
| internalnotes |
[1] Zhao Hui. (2013). “Intrusion Detection Ensemble Algorithm Based on Bagging and Neighborhood Rough Set”, International Journal of Security and Its Applications (IJSIA), Vol. 7, No. 5, pp. 193-204, SERSC. [2] Chen Tao. (2011), “Selective SVM Ensemble Based on Accelerating Genetic Algorithm”, Application Research of Computers, Issue 1, pp. 139-141, Ori Probe Information Service. [3] Zhou, Z. H., & Tang, W. (2003). “Selective Ensemble of Decision Trees”, In Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Lecture Notes in Computer Science Volume 2639, pp. 476-483, Springer Berlin Heidelberg. [4] Dietterich T., (2000). “Ensemble Methods in Machine Learning”, In J. Kittler and F. Roli, editors, First International Workshop on Multiple Classifier Systems, Lecture Notes in Computer Science, pp. 1-15. Springer-Verlag. [5] Ricardo Gutierrez-Osuna () “L25: Ensemble Learning”, CSCE 666 Pattern Analysis, CSE@TAMU Lecture Notes, pp. 1-15, Texas A&M University. Available online at: http://search.kedirijaya.com/detail/research.cs.tamu.edu/prism/lectures/pr/pr_l25.pdf. Accessed on December 15, 2014. [6] Polikar, R. (2012). “Ensemble Learning”, In C. Zhang and Y. Ma (eds.), Ensemble Machine Learning: Methods and Applications, Springer Science + Business Media, LLC 2012, pp. 1-34, Springer US. [7] Santana, L. E. A., Silva, L., Canuto, A. M., Pintro, F., & Vale, K. O. (2010). “A Comparative Analysis of Genetic Algorithm and Ant Colony Optimization to Select Attributes for an Heterogeneous Ensemble of Classifiers”, In Evolutionary Computation (CEC), 2010 IEEE Congress on pp. 1-8, IEEE. [8] Neto, A. A. F., & Canuto, A. M. (2014). “Meta-Learning and Multi-Objective Optimization to Design Ensemble of Classifiers”, 2014 Brazilian Conference on Intelligent Systems, pp. 91-96, IEEE. [9] R. E. Banfield, L. O. Hall, K. W. Bowyer and W. P. Kegelmeyer (2002). “Ensemble Diversity Measures and Their Application to Thinning”, Information Fusion, vol. 6, no. 1, pp. 
49-62, Elsevier B.V. [10] Quinlan, J. R., (1996). “Bagging, Boosting, and C4.5”, In Proceedings of the Thirteenth National Conference on Artificial Intelligence, Vol. 1. AAAI Press, pp. 725-730, ACM Digital Library. [11] Opitz, D. and Maclin, R., (1999). “Popular Ensemble Methods: An Empirical Study”, Journal of Artificial Intelligence Research, Volume 11, pages 169-198, Cornell University Library. [12] Kuncheva, L. I., & Whitaker, C. J. (2003). “Measures of Diversity in Classifier Ensembles and Their Relationships with the Ensemble Accuracy”, Machine Learning, Volume 51, Issue 2, pp. 181-207, Kluwer Academic Publishers. [13] Wang, W. (2010). “Heterogeneous Bayesian Ensembles for Classifying Spam Emails”, The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, IEEE. [14] Wang, W. (2008). “Some Fundamental Issues in Ensemble Methods”, In Neural Networks, IJCNN. IEEE World Congress on Computational Intelligence. IEEE International Joint Conference on pp. 2243-2250. IEEE. [15] Wang, S., & Yao, X. (2013). “Relationships Between Diversity of Classification Ensembles and Single-Class Performance Measures”, Knowledge and Data Engineering, IEEE Transactions on, vol. 25, No. 1, pp. 206-219. IEEE. [16] L. I. Kuncheva, (2005). "Combining Pattern Classifiers: Methods and Algorithms", New York: Wiley. [17] K. Tang, P.N. Suganthan, and X. Yao, (2006). “An Analysis of Diversity Measures,” Machine Learning, vol. 65, pp. 247-271, Springer. [18] K. Ghosh, Y.S. Ng, and R. Srinivasan, (2011). "Evaluation of Decision Fusion Strategies for Effective Collaboration among Heterogeneous Fault Diagnostic Methods", Computers and Chemical Engineering, Volume 35, Issue 2, 9 February 2011, Pages 342–355, Elsevier. [19] Woods, K., Kegelmeyer, W. P., & Bowyer, K. (1997). “Combination of Multiple Classifiers Using Local Accuracy Estimates”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, pp. 405-410. IEEE. [20] Shruti Asmita and K. K. Shukla. (2014). 
“Review on the Architecture, Algorithm and Fusion Strategies in Ensemble Learning”, International Journal of Computer Applications, Volume 108, Number 8, pp. 21-28. [21] Kuncheva, L. I. (2004). “Combining Pattern Classifiers: Methods and Algorithms”, John Wiley & Sons. [22] Shafer, G. (1976). “A Mathematical Theory of Evidence”, Vol. 1, Princeton: Princeton University Press. [23] G. Brown, J.L. Wyatt, and P. Tino, (2005). “Managing Diversity in Regression Ensembles,” The Journal of Machine Learning Research, vol. 6, pp. 1621-1650, ACM Digital Library. [24] Makhtar, M., Yang, L., Neagu, D., & Ridley, M. (2012). “Optimisation of Classifier Ensemble for Predictive Toxicology Applications”, In Computer Modelling and Simulation (UKSim), 2012 UKSim 14th International Conference on pp. 236-241. IEEE. [25] J. Han, M. Kamber, and J. Pei, (2006). “Data Mining: Concepts and Techniques”, 2nd edition, Morgan Kaufmann. [26] S. Kotsiantis, D. Kanellopoulos, and P. Pintelas, (2006) “Data Preprocessing for Supervised Learning”, International Journal of Computer Science, vol. 1, no. 2, pp. 111–117, World Enformatika Society. [27] Tebbutt TH (1983). “Principles of Water Quality Control”. 3rd Edn. pp. 42. Pergamon Press, Oxford. [28] J. Das and B. C. Acharya, (2003). “Hydrology and Assessment of Lotic Water Quality in Cuttack City, India,” Water, Air, and Soil Pollution, vol. 150, no. 1–4, pp. 163–175, Springer. [29] Optimization Models, https://inst.eecs.berkeley.edu/~ee127a/book/login/l_intro_main.html (visited on 10 January, 2015). [30] Gupta, A., & Thakkar, A. R. (2014). “Optimization of Stacking Ensemble Configuration Based on Various Metaheuristic Algorithms”, In Advance Computing Conference (IACC), IEEE International pp. 444-451, IEEE. [31] Anwar, H., Qamar, U., & Muzaffar Qureshi, A. W. (2014). “Global Optimization Ensemble Model for Classification Methods”, The Scientific World Journal, Hindawi Publishing Corporation. [32] P. Kraipeerapun, S. 
Amornsamankul, “Prediction of WQI for Tha Chin River Using an Ensemble of Support Vector Regression and Complementary Neural Networks”, Recent Advances in Information Science, Proceedings of the 7th European Computing Conference (ECC '13), pp. 36-41. [33] Charkhabi, M., Dhot, T., & Mojarad, S. A. (2014). “Cluster Ensembles, Majority Vote, Voter Eligibility and Privileged Voters”, International Journal of Machine Learning & Computing, Vol. 4, No. 3, pp. 275-278. [34] Wahid, A., Gao, X., & Andreae, P. (2014). “Multi-View Clustering of Web Documents using Multi-Objective Genetic Algorithm”, In Evolutionary Computation (CEC), 2014 IEEE Congress, pp. 2625-2632. IEEE. [35] Tao Chen, (2014). “A Selective Ensemble Classification Method on Microarray Data”, Journal of Chemical and Pharmaceutical Research, JCPRC5, vol. 6(6), pp. 2860-2866, CODEN(USA). [36] Anifowose, F., Labadin, J., & Abdulraheem, A. (2013). “Predicting Petroleum Reservoir Properties from Downhole Sensor Data using an Ensemble Model of Neural Networks”, In Proceedings of Workshop on Machine Learning for Sensory Data Analysis, pp. 27-34. ACM. [37] Wei, S., Cheng, L., Huang, W., & Gu, H. (2014). “A New Approximate Gradient Algorithm Applied in Constrained Reservoir Production Optimization”, Journal of Industrial and Intelligent Information, Vol. 2, No. 3, pp. 194-199, Engineering and Technology Publishing. [38] Rahman, A., D'Este, C., & McCulloch, J. (2013). “Ensemble Feature Ranking for Shellfish Farm Closure Cause Identification”, In Proceedings of Workshop on Machine Learning for Sensory Data Analysis, pp. 13-18, ACM Digital Library. [39] Lacoste, A., Larochelle, H., Laviolette, F., & Marchand, M. (2014). “Sequential Model-Based Ensemble Optimization”, arXiv preprint arXiv:1402.0796. [40] Zeng, B., Luo, Z., & Wei, J. (2008). “Sea Water Pollution Assessment Based on Ensemble of Classifiers”, In Natural Computation, 2008. ICNC'08. Fourth International Conference on Vol. 1, pp. 241-245. IEEE. 
[41] Makhtar, M., Neagu, D. C., & Ridley, M. J. (2011). “Comparing Multi-Class Classifiers: On the Similarity of Confusion Matrices for Predictive Toxicology Applications”, In Intelligent Data Engineering and Automated Learning IDEAL 2011, pp. 252-261. Springer Berlin Heidelberg. [42] Anifowose, F., Labadin, J., & Abdulraheem, A. (2013). “Ensemble Model of Artificial Neural Networks with Randomized Number of Hidden Neurons”. In Information Technology in Asia (CITA), 2013 8th International Conference on pp. 1-5. IEEE. [43] Bharathidason, S., & Jothi Venkataeswaran, C. (2014). “Improving Classification Accuracy based on Random Forest Model with Uncorrelated High Performing Trees”, International Journal of Computer Applications, vol. 101, issue 13, pp. 26-30, CROSSREF. [44] M.P. Perrone and L.N. Cooper, (1993) “When Networks Disagree: Ensemble Methods for Hybrid Neural Networks,” Neural Networks for Speech and Image Processing, Chapman and Hall. [45] Y. Sun, M.S. Kamel, A.K. Wong, and Y. Wang, (2007). “Cost-Sensitive Boosting for Classification of Imbalanced Data,” Pattern Recognition, vol. 40, no. 12, pp. 3358-3378, Elsevier. [46] Hansen, L., & Salamon, P. (1990). “Neural Network Ensembles”, IEEE Trans. Pattern Analysis and Machine Intelligence, 12, pp. 993-1001. IEEE. [47] A. Rahman and B. Verma, (2013). “Ensemble Classifier Generation using Non-Uniform Layered Clustering and Genetic Algorithm”, Knowledge-Based Systems, vol. 43, pp. 30-42, Elsevier. [48] Antonino A. Feitosa Neto, Anne M. P. Canuto and Teresa B. Ludermir, (2013). “Using Good and Bad Diversity Measures in the Design of Ensemble Systems: A Genetic Algorithm Approach”, IEEE Congress on Evolutionary Computation, pp. 789-796. IEEE. [49] Sylvester, J., Chawla, N. (2006). “Evolutionary Ensemble Creation and Thinning”, In: IJCNN 06 International Joint Conference on Neural Networks, pp. 5148-5155, IEEE. [50] Windeatt, T., & Zor, C. (2013). 
“Ensemble Pruning using Spectral Coefficients”, IEEE Transactions on Neural Networks and Learning Systems, Volume 24, Issue 4, pp. 673-678, IEEE. [51] R.E. Schapire, Y. Freund, P. Bartlett, and W.S. Lee, (1998). “Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods”, The Annals of Statistics, vol. 26, no. 5, pp. 1651-1686, JSTOR. [52] C.X. Ling, J. Huang, and H. Zhang, (2003). “AUC: A Statistically Consistent and More Discriminating Measure Than Accuracy,” Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI '03), pp. 329-341, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. [53] A.P. Bradley, (1997). “The Use of the Area under the ROC Curve in the Evaluation of Machine Learning Algorithms”, Pattern Recognition, vol. 30, no. 7, pp. 1145-1159, Elsevier Science Inc., New York, NY, USA. [54] H. He and E.A. Garcia, (2009). “Learning from Imbalanced Data,” IEEE Trans. Knowledge and Data Eng., vol. 21, no. 9, pp. 1263-1284, IEEE. [55] N.V. Chawla and J. Sylvester, (2007). “Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets,” Proceedings of the 7th International Conference on Multiple Classifier Systems, vol. 4472, pp. 397-406, Springer-Verlag Berlin, Heidelberg. [56] M.V. Joshi, (2002). “On Evaluating Performance of Classifiers for Rare Classes,” Proc. IEEE Int'l Conf. Data Mining, pp. 641-661, IEEE. [57] Data Mining with Weka MOOC Material, http://www.cs.waikato.ac.nz/ml/weka/mooc/dataminingwithweka/transcripts/Transcript4-6.txt (3 January, 2015). [58] T. Yamaguchi, K.J. Mackin, E. Nunohiro, J.G. Park, K. Hara, K. Matsushita, M. Ohshiro, K. Yamasaki, (2009). “Artificial Neural Network Ensemble-Based Land-Cover Classifiers using MODIS Data”, Artificial Life and Robotics, vol. 13, issue 2, pp. 570–574. [59] L.I. Kuncheva, J.J. Rodriguez, C.O. Plumpton, D.E. Linden, S.J. Johnston, (2010). 
“Random Subspace Ensembles for fMRI Classification”, IEEE Transactions on Medical Imaging, vol. 29, issue 2, pp. 531–542, IEEE. [60] T. K. Ho, (1998). “The Random Subspace Method for Constructing Decision Forests,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832–844, IEEE. [61] D. de Oliveira, A. Canuto, and M. de Souto, (2009), “Use of Multi-Objective Genetic Algorithms to Investigate the Diversity/Accuracy Dilemma in Heterogeneous Ensembles,” in International Joint Conference on Neural Networks (IJCNN), pp. 2339–2346. IEEE. [62] T. Windeatt, (2005). “Diversity Measures for Multiple Classifier System Analysis and Design”, Information Fusion, Volume 6, Issue 1, Pages 21–36, Elsevier B.V. [63] David E. Goldberg. (1989). “Genetic Algorithms in Search, Optimization and Machine Learning”, (1st ed.). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. [64] John H. Holland. (1992) “Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence”, MIT Press, Cambridge, MA, USA. [65] M. L. Raymer, W. F. Punch, E. D. Goodman, L. A. Kuhn, and A. K. Jain, (2000). “Dimensionality Reduction Using Genetic Algorithms”, IEEE Transactions on Evolutionary Computation, vol. 4, no. 2, pp. 164–171, IEEE. [66] Santos, E., Sabourin, R., Maupin, P. (2006). “Single and Multi-Objective Genetic Algorithms for the Selection of Ensemble of Classifiers”, In: International Joint Conference on Neural Networks, pp. 3070-3077, IEEE. [67] Bai, Q. (2010). “Analysis of Particle Swarm Optimization Algorithm”. Computer and Information Science, Vol. 3, No. 1, pp. 180-184, Canadian Center of Science and Education. [68] Shailendra S. Aote et al. (2013) “A Brief Review on Particle Swarm Optimization: Limitations & Future Directions”, International Journal of Computer Science Engineering (IJCSE), Volume 14, No. 1, pp. 196-200. [69] Chen, YiJun, and Man-Leung Wong. (2012). 
"Applying Ant Colony Optimization in Configuring Stacking Ensemble", Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012 Joint 6th International Conference on. IEEE. [70] Davoian, K., Reichel, A., Wolfram-Manfred, L. (2006). “Comparison and Analysis of Mutation-Based Evolutionary Algorithms for ANN Parameters Optimization”. In Crone, S.F., Lessmann, S., Stahlbock, R. (eds.) International Conference on Data Mining (Las Vegas, Nevada, USA), pp. 51-56, CSREA Press. [71] Batchis, P. (2013). “An Evolutionary Algorithm for Neural Network Learning using Direct Encoding”. Resource 53, Chinese Digital Library, Available online: www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/.../batchis.pdf. Accessed January 16. [72] Azzini, A. (2006). “A New Genetic Approach for Neural Network Design and Optimization”. PhD Dissertation, Universita Degli Studi Di Milano, pp. 38. [73] Gao, W. (2012). “Study on New Improved Hybrid Genetic Algorithm”, In: Zeng, D. (ed.) Advances in Information Technology and Industry Applications, Lecture Notes in Electrical Engineering, Volume 136, pp. 505-512. Springer Berlin Heidelberg. [74] Lee, M. et al. (2008). “A Two-Step Approach for Feature Selection and Classifier Ensemble Construction in Computer-Aided Diagnosis”, IEEE International Symposium on Computer-Based Medical Systems, pp. 548-553, Albuquerque: IEEE Computer Society. [75] L. Oliveira, M. Morita and R. Sabourin, (2006) “Feature Selection for Ensembles Applied to Handwriting Recognition”, International Journal of Document Analysis and Recognition (IJDAR), Volume 8, Number 4, pp. 262-279, Springer-Verlag. [76] K. Robbins, W. Zhang and J. Bertrand, (2007). “The Ant Colony Algorithm for Feature Selection in High-Dimension Gene Expression Data for Disease Classification”. Mathematical Medicine and Biology, pp. 413-426. Oxford University Press. [77] H. Kanan and K. Faez. (2008). 
“An Improved Feature Selection Method Based on Ant Colony Optimization (ACO) Evaluated on Face Recognition System”, Applied Mathematics and Computation, Volume 205, Issue 2, pp. 716-725, Elsevier.
|
| originalfilename |
6863-01-FH02-FIK-15-04265.jpg
|
| person |
norman
|
| recordtype |
oai_dc
|
| resourceurl |
https://intelek.unisza.edu.my/intelek/pages/view.php?ref=12556
|
| spelling |
12556 https://intelek.unisza.edu.my/intelek/pages/view.php?ref=12556 https://intelek.unisza.edu.my/intelek/pages/search.php?search=!collection407072 Restricted Document Article Journal image/jpeg inches 96 96 norman 1415 772 73 73 2015-12-02 12:33:34 1415x772 6863-01-FH02-FIK-15-04265.jpg UniSZA Private Access Current issues in ensemble methods and its applications Journal of Theoretical and Applied Information Technology This paper reviews the current state of the art in the optimization of ensemble methods, so as to give clearer direction to our future research. The primary aim of an ensemble method is to integrate a set of models used to solve different tasks into an enhanced composite global model that produces more accurate and reliable estimates than any single model can achieve. Diversity, combination strategies, the number of base classifiers, the type of ensemble, and performance measures are the key factors to consider when building committees. When the number of base classifiers becomes very large, ensemble methods incur high storage and computational costs; most of the literature proposes selective ensembles to solve these problems. Among optimization techniques, multi-objective methods have emerged as the better choice because of their efficient optimization process and because they provide a set of near-optimal solutions rather than a single solution. When comparing the performance of ensemble methods, accuracy alone often cannot distinguish which classifier performs best; for this reason, additional performance measures such as AUC, F-measure, TPR, TNR, FPR, FNR, and RMSE are used. 
Based on the reviewed literatures, we concluded that in our proposed methodology we would come up with a new method for comparing and searching for relevant classifiers from a collection of models that would be used as a model for predicting the quality of water to achieve higher performance rate than other previous work. 81 2 266-276 [1] Zhao Hui. (2013). “Intrusion Detection Ensemble Algorithm Based on Bagging and Neighborhood Rough Set”, International Journal of Security and Its Applications (IJSIA), Vol.7, No.5, pp. 193-204, SERSC. [2] Chen Tao. (2011), “Selective SVM Ensemble Based on Accelerating Genetic Algorithm”, Application Research of Computers, Issue 1, pp. 139-141, Ori Probe Information Service. [3] Zhou, Z. H., & Tang, W. (2003). “Selective Ensemble of Decision Trees”, In Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Lecture Notes in Computer Science Volume 2639, pp. 476-483, Springer Berlin Heidelberg. [4] Dietterich T., (2000). “Ensemble Methods in Machine Learning”, In J. Kittler and F.Roll, editors, First International Workshop on Multiple Classifier Systems, Lecture Notes in Computer Science, pp. 1-15. Springer-Verlag. [5] Ricardo Gutierrez-Osuna () “L25: Ensemble Learning”, CSCE 666 Pattern Analysis CSE@TAMU” Lecture Notes pp. 1 – 15, Texas A&M University. Available online at: http://search.kedirijaya.com/detail/research.cs.t amu.edu/prism/lectures/pr/pr_l25.pdf. Accessed on December, 15,2014. [6] Polikar, R. (2012). “Ensemble learning”, In C. Zhang and Y. Ma (eds.), Ensemble Machine Learning: Methods and Applications, Springer Science + Business Media, LLC 2012, pp. 1-34, Springer US. [7] Santana, L. E. A., Silva, L., Canuto, A. M., Pintro, F., & Vale, K. O. (2010). “A Comparative Analysis of Genetic Algorithm and Ant Colony Optimization to Select Attributes for an Heterogeneous Ensemble of Classifiers”, In Evolutionary Computation (CEC), 2010 IEEE Congress on pp. 1-8, IEEE. [8] Neto, A. A. F., & Canuto, A. M. (2014). 
“Meta-Learning and Multi-Objective Optimization to Design Ensemble of Classifiers”, 2014 Brazilian Conference on Intelligent Systems. pp. 91 – 96, IEEE. [9] R. E. Banfield, L. O. Hall, K. W. Bowyer and W. P. Kegelmeyer (2002). “Ensemble Diversity Measures and Their Application to Thinning”, Information Fusion, vol. 6, no. 1, pp. 49-62, Elsevier B.V. [10] Quinlan, J. R., (1996). “Bagging, Boosting, and C4.5”, In Proceedings of the Thirteenth National Conference on Artificial Intelligence, Vol. 1. AAAI Press, pp. 725-730, ACM Digital Library. [11] Opitz, D. and Maclin, R., (1999). “Popular Ensemble Methods: An Empirical Study”, Journal Of Artificial Intelligence Research, Volume 11, pages 169-198, Cornel University Library. [12] Kuncheva, L. I., & Whitaker, C. J. (2003). “Measures of Diversity in Classifier Ensembles and Their Relationships with the Ensemble Accuracy”, Machine learning, Volume 51, Issue 2, pp.181-207, Kluwer Academic Publisher. [13] Wang, W. (2010). “Heterogeneous Bayesian Ensembles for Classifying Spam Emails”, The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, IEEE. [14] Wang, W. (2008). “Some Fundamental Issues in Ensemble Methods”, In Neural Networks, IJCNN. IEEE World Congress on Computational Intelligence. IEEE International Joint Conference on pp. 2243-2250. IEEE. [15] Wang, S., & Yao, X. (2013). “Relationships Between Diversity of Classification Ensembles and Single-Class Performance Measures”, Knowledge and Data Engineering, IEEE Transactions on, vol. 25, No. 1, pp. 206- 219. IEEE. [16] L. I. Kuncheva, (2005). "Combining Pattern Classifiers: Methods and Algorithms", New York: Wiley. [17] K. Tang, P.N. Suganthan, and X. Yao, (2006). “An Analysis of Diversity Measures,” Machine Learning, vol. 65, pp. 247-271, Springer. [18] K. Ghosh, Y.S. Ng, and R. Srinivasan, (2011). 
"Evaluation of Decision Fusion Strategies for Effective Collaboration among Heterogeneous Fault Diagnostic Methods", Computers and Chemical Engineering, Volume 35, Issue 2, 9 February 2011, Pages 342–355, Elsevier. [19] Woods, K., Kegelmeyer, W. P., & Bowyer, K. (1997). “Combination of Multiple Classifiers Using Local Accuracy Estimates”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, pp. 405-410. IEEE. [20] Shruti Asmita and K k Shukla. (2014). “Review on the Architecture, Algorithm and Fusion Strategies in Ensemble Learning”, International Journal of Computer Applications, Volume 108, Number 8, pp. 21-28. [21] Kuncheva, L. I. (2004). “Combining Pattern Classifier”, Methods and Algorithms. John Wiley & Sons. [22] Shafer, G. (1976). “A Mathematical Theory of Evidence”, Vol. 1, Princeton: Princeton University Press. [23] G. Brown, J.L. Wyatt, and P. Tino, (2005). “Managing Diversity in Regression Ensembles,” The Journal of Machine Learning Research, vol. 6, pp. 1621-1650, ACM Digital Library. [24] Makhtar, M., Yang, L., Neagu, D., & Ridley, M. (2012). “Optimisation of Classifier Ensemble for Predictive Toxicology Applications”, In Computer Modelling and Simulation (UKSim), 2012 UKSim 14th International Conference on pp. 236-241. IEEE. [25] J. Han, M. Kamber, and J. Pei, (2006). “Data Mining: Concepts and Techniques”, 2nd edition, Morgan Kaufmann. [26] S. Kotsiantis, D. Kanellopoulos, and P. Pintelas, (2006) “Data Preprocessing for Supervised Leaning”, International Journal of Computer Science, vol.1, no.2, pp.111–117, World Enformatika Society. [27] Tebbutt TH (1983). “Principles of Water Quality Control”. 3rd Edn. pp. 42. Pergaman Press Oxford. [28] J. Das and B. C. Acharya, (2003). “Hydrology and Assessment of Lotic Water Quality in Cuttack City, India,” Water, Air, and Soil Pollution, vol. 150, no. 1–4, pp. 163–175, Springer. 
[29] Optimization Models https://inst.eecs.berkeley.edu/~ee127a/book/log in/l_intro_main.html , (visited on 10 January, 2015) [30] Gupta, A., & Thakkar, A. R. (2014). Optimization of Stacking Ensemble Configuration Based on Various Meta heuristic Algorithms. In Advance Computing Conference (IACC), IEEE International pp. 444-451, IEEE. [31] Anwar, H., Qamar, U., & Muzaffar Qureshi, A. W. (2014). Global Optimization Ensemble Model for Classification Methods. The Scientific World Journal, Hindawi Publishing Corporation [32] P. Kraipeerapun, S. Amornsamankul, “Prediction of WQI for Tha Chin River Using an Ensemble of Support Vector Regression and Complementary Neural Networks”, Recent Advances in Information Science, Proceedings of the 7th European Computing Conference (ECC '13), pp. 36 – 41. [33] Charkhabi, M., Dhot, T., & Mojarad, S. A. (2014). “Cluster Ensembles, Majority Vote, Voter Eligibility and Privileged Voters”, International Journal of Machine Learning & Computing, Vol. 4, No.3, pp. 275 – 278. [34] Wahid, A., Gao, X., & Andreae, P. (2014). “Multi-View Clustering of Web Documents using Multi-Objective Genetic Algorithm”, In Evolutionary Computation (CEC), 2014 IEEE Congress, pp. 2625-2632. IEEE. [35] Tao Chen A, (2014). “Selective Ensemble Classification Method on Microarray Data”,Journal of Chemical and Pharmaceutical Research, JCPRC5, vol. 6(6), pp. 2860-2866, CODEN(USA). [36] Anifowose, F., Labadin, J., & Abdulraheem, A. (2013). “Predicting Petroleum Reservoir Properties from Downhole Sensor Data using an Ensemble Model of Neural Networks”, In Proceedings of Workshop on Machine Learning for Sensory Data Analysis, pp. 27 - 34. ACM. [37] Wei, S., Cheng, L., Huang, W., & Gu, H. (2014). “A New Approximate Gradient Algorithm Applied in Constrained Reservoir Production Optimization”, Journal of Industrial and Intelligent Information Vol. 2, No.3, pp. 194 – 199, Engineering and Technology Publishing. [38] Rahman, A., D'Este, C., & McCulloch, J. (2013). 
“Ensemble Feature Ranking for Shellfish Farm Closure Cause Identification”, In Proceedings of Workshop on Machine Learning for Sensory Data Analysis pp. 13 – 18, ACM Digital Library. [39] Lacoste, A., Larochelle, H., Laviolette, F., & Marchand, M. (2014). Sequential Model-Based Ensemble Optimization. arXiv preprint arXiv:1402.0796. [40] Zeng, B., Luo, Z., & Wei, J. (2008). “Sea Water Pollution Assessment Based on Ensemble of Classifiers”, In Natural Computation, 2008. ICNC'08. Fourth International Conference on Vol. 1, pp. 241- 245. IEEE. [41] Makhtar, M., Neagu, D. C., & Ridley, M. J. (2011). “Comparing Multi-Class Classifiers: on the Similarity of Confusion Matrices for Predictive Toxicology Applications”, In Intelligent Data Engineering and Automated Learning Ideal 2011, pp. 252-261. Springer Berlin Heidelberg. [42] Anifowose, F., Labadin, J., & Abdulraheem, A. (2013). “Ensemble Model of Artificial Neural Networks with Randomized Number of Hidden Neurons”. In Information Technology in Asia (CITA), 2013 8th International Conference on pp. 1-5. IEEE. [43] Bharathidason, S., & Jothi Venkataeswaran, C. (2014). “Improving Classification Accuracy based on Random Forest Model with Uncorrelated High Performing Trees”, International Journal of Computer Applications, vol. 101, issue 13, pp. 26-30, CROSSREF. [44] M.P. Perrone and L.N. Cooper, (1993) “When Networks Disagree: Ensemble Methods for Hybrid Neural Networks,” Neural Networks for Speech and Image Processing, ChapmanHill. [45] Y. Sun, M.S. Kamel, A.K. Wong, and Y. Wang, (2007). “Cost-Sensitive Boosting for Classification of Imbalanced Data,” Pattern Recognition, vol. 40, no. 12, pp. 3358-3378, Elsevier. [46] Hansen, L., & Salamon, P. (1990). “Neural Network Ensembles”, IEEE Trans Pattern Analysis and Machine Intelligence , 12, pp. 993-1001. IEEE. [47] [49] A. Rahman and B. Verma, (2013). 
“Ensemble Classifier Generation using Non– Uniform Layered Clustering and Genetic Algorithm”, Elsevier Knowledge Based Systems, vol. 43, pp. 30 – 42, Elsevier. [48] Antonino A. Feitosa Neto, Anne M. P. Canuto and Teresa B Ludermir, (2013). “Using Good and Bad Diversity Measures in the design of Ensemble Systems: A Genetic Algorithm Approach”, IEEE Congress on Evolutionary Computation, pp. 789 – 796. IEEE. [49] Sylvester, J., Chawla, N. (2006). “Evolutionary Ensemble Creation and Thinning”, In: IJCNN 06 International Joint Conference on Neural Networks, pp. 5148- 5155, IEEE. [50] Windeatt, T., & Zor, C. (2013). “Ensemble Pruning using Spectral Coefficients”, IEEE Transactions on Neural Networks and Learning Systems, volume 24, Issue 4, pp. 673-678, IEEE. [51] R.E. Schapire, Y. Freund, P. Bartlett, and W.S. Lee,( 1998). “Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods”, The Ann. Statistics, vol. 26, no. 5, pp. 1651-1686, JSTOR. [52] C.X. Ling, J. Huang, and H. Zhang, (2003). “AUC: A Statistically Consistent and More Discriminating Measure Than Accuracy,” Proceedings of the 18th international joint conference on Artificial intelligence (IJCAI ’03), pp. 329-341, Morgan Kaufmann Publishers Inc. San Francisco, CA, US. [53] A.P. Bradley, (1997). “The Use of the Area under the Roc Curve in the Evaluation of Machine Learning Algorithms”, Pattern Recognition, vol. 30, no. 7, pp. 1145-1159, Elsevier Science Inc. New York, NY, USA. [54] H. He and E.A. Garcia, (2009). “Learning from Imbalanced Data,” IEEE Trans. Knowledge and Data Eng., vol. 21, no. 9, pp. 1263-1284, IEEE. [55] N.V. Chawla and J. Sylvester, (2007). “Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets,” Proceedings of the 7th international conference on Multiple classifier systems, vol. 4472, pp. 397-406, Springer-Verlag Berlin, Heidelberg. [56] M.V. Joshi, (2002). “On Evaluating Performance of Classifiers for Rare Classes,” Proc. IEEE Int’l Conf. 
Data Mining, pp. 641- 661, IEEE. [57] Data Mining with Weka MOOC Material http://www.cs.waikato.ac.nz/ml/weka/mooc/da taminingwithweka/transcripts/Transcript4-6.txt (3 January, 2015). [58] T. Yamaguchi, K.J. Mackin, E. Nunohiro, J.G. Park, K. Hara, K. Matsushita, M. Ohshiro, K. Yamasaki, (2009). “Artificial Neural Network Ensemble-Based Land-Cover Classifiers using Modis Data”, Artificial Life and Robotics, vol. 13, issue 2, pp. 570–574. [59] L.I. Kuncheva, J.J. Rodriguez, C.O. Plumpton, D.E. Linden, S.J. Johnston, (2010). “Random Subspace Ensembles for FMRI Classification”, IEEE Transaction on Medical Imaging, vol. 29, issue 2, pp. 531–542, IEEE. [60] T. K. Ho, (1998). “Random Subspace Method for Constructing Decision Forests,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832–844, IEEE. [61] D. de Oliveira, A. Canuto, and M. de Souto, (2009)., “Use Of Multi-Objective Genetic Algorithms To Investigate The Diversity/Accuracy Dilemma in Heterogeneous Ensembles,” in International Joint Conference on Neural Networks (IJCNN), pp. 2339 –2346. IEEE. [62] T. Windeatt, (2005). “Diversity Measures for Multiple Classifier System Analysis and Design”, Information Fusion, Volume 6, Issue 1, Pages 21–36, Elsevier B.V. [63] David E. Goldberg. (1989). “Genetic Algorithms in Search, Optimization and Machine Learning”, (1st ed.). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. [64] John H. Holland. (1992) “Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence”, MIT Press, Cambridge, MA, USA. [65] M. L. Raymer, W. F. Punch, E. D. Goodman, L. A. Kuhn, and A. K. Jain, (2000). “Dimensionality Reduction Using Genetic Algorithms”, IEEE Transactions on Evolutionary Computation, vol. 4, no. 2, pp. 164–171, IEEE. [66] Santos, E., Sabourin, R., Maupin, P. (2006). 
“Single and Multi-Objective Genetic Algorithms for the Selection of Ensemble of Classifiers”, In: International Joint Conference on Neural Networks, pp. 3070-3077, IEEE. [67] Bai, Q. (2010). “Analysis of Particle Swarm Optimization Algorithm”. Computer and Information Science, Vol 3, No 1, pp. 180 – 184, Canadian Center of Science and Education. [68] Shailendra S. Aote et al. (2013) “A Brief Review on Particle Swarm Optimization: Limitations & Future Directions”, International Journal of Computer Science Engineering (IJCSE), Volume 14– No.1, pp. 196-200. [69] Chen, YiJun, and Man-Leung Wong. (2012). "Applying Ant Colony Optimization in Configuring Stacking Ensemble", Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012 Joint 6th International Conference on. IEEE. [70] Davoian, K., Reichel, A., Wolfram-Manfred, L. (2006). “Comparison and Analysis of Mutation-Based Evolutionary Algorithms for Ann Parameters Optimization”. In Crone, S.F. Lessmann, S., Stahlbock, R. (eds.) International Conference on Data Mining (Las Vegas, Nevada, USA). pp. 51-56, CSREA Press. [71] Batchis, P. (2013). “An Evolutionary Algorithm for Neural Network Learning using Direct Encoding”. Resource 53, Chinese Digital Library, Available online: www.cs.rutgers.edu/~mlittman/courses/ml03/i CML03/.../batc his.pdf. Accessed January 16 [72] Azzini, A. (2006). “A New Genetic Approach for Neural Network Design and Optimization”. PhD Dissertation, Universita Degli Studi Di Milano, pp. 38. [73] Gao, W. (2012). “Study on New Improved Hybrid Genetic Algorithm”, In: Zeng, D. (ed.) Advances in Information Technology and Industry Applications. Lecture Notes in Electrical Engineering 136, Springer, Volume 136, pp. 505-512. Springer Berlin Heidelberg. [74] Lee, M. et al. (2008). 
“A Two-Step Approach for Feature Selection and Classifier Ensemble Construction in Computer-Aided Diagnosis”, IEEE International Symposium on Computer-Based Medical Systems, pp. 548-553, Albuquerque: IEEE Computer Society. [75] L. Oliveira, M. Morita and R. Sabourin, (2006). “Feature Selection for Ensembles Applied to Handwriting Recognition”, International Journal of Document Analysis and Recognition (IJDAR), Volume 8, Number 4, pp. 262-279, Springer-Verlag. [76] K. Robbins, W. Zhang and J. Bertrand, (2007). “The Ant Colony Algorithm for Feature Selection in High-Dimension Gene Expression Data for Disease Classification”, Mathematical Medicine and Biology, pp. 413-426, Oxford University Press. [77] H. Kanan and K. Faez. (2008). “An Improved Feature Selection Method Based on Ant Colony Optimization (ACO) Evaluated on Face Recognition System”, Applied Mathematics and Computation, Volume 205, Issue 2, pp. 716-725, Elsevier.
|
| spellingShingle |
Current issues in ensemble methods and its applications
|
| summary |
This paper reviewed the current state of the art in the optimization of ensemble methods so as to give us a clearer direction for how to conduct our future research. The primary aim of an ensemble method is to integrate a set of models used for solving different tasks into an enhanced composite global model that produces more accurate and reliable estimates than any single model can achieve. Diversity, combination strategies, the number of base classifiers, the type of ensemble, and performance measures are the key factors to consider when building committees. When the number of base classifiers becomes large, ensemble methods incur high storage and computational costs; to address these problems, much of the literature proposes selective ensembles. Among optimization techniques, multi-objective approaches have proven the better choice, both for the efficiency of their optimization process and because they yield a set of near-optimal solutions rather than a single solution. When comparing the performance of ensemble methods, accuracy alone often cannot distinguish which classifier performs best; for this reason, other performance measures such as AUC, F-measure, TPR, TNR, FPR, FNR, and RMSE are used. Based on the reviewed literature, we concluded that our proposed methodology will introduce a new method for comparing and searching for relevant classifiers from a collection of models, to serve as a model for predicting water quality with a higher performance rate than previous work.
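As a minimal sketch of two ideas the summary touches on, the snippet below combines base-classifier outputs by simple majority voting and derives the confusion-matrix measures named above (TPR, TNR, FPR, FNR, F-measure). The function names and toy predictions are illustrative assumptions, not taken from any of the reviewed papers.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by simple majority voting.

    predictions: list of prediction lists, one list per base classifier.
    Returns the ensemble's prediction for each instance.
    """
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

def confusion_measures(y_true, y_pred, positive=1):
    """Compute TPR, TNR, FPR, FNR and F-measure from a binary confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tpr = tp / (tp + fn) if tp + fn else 0.0      # sensitivity / recall
    tnr = tn / (tn + fp) if tn + fp else 0.0      # specificity
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * precision * tpr / (precision + tpr)) if precision + tpr else 0.0
    return {"TPR": tpr, "TNR": tnr, "FPR": 1 - tnr, "FNR": 1 - tpr, "F": f}

# Three hypothetical base classifiers voting on five instances.
preds = [[1, 0, 1, 1, 0],
         [1, 1, 1, 0, 0],
         [0, 0, 1, 1, 0]]
ensemble = majority_vote(preds)                   # -> [1, 0, 1, 1, 0]
print(confusion_measures([1, 0, 1, 0, 0], ensemble))
```

AUC and RMSE need ranked scores rather than hard labels, so they are omitted here; a selective ensemble would apply an optimizer over subsets of `preds` before the vote.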
|
| title |
Current issues in ensemble methods and its applications
|
| title_full |
Current issues in ensemble methods and its applications
|
| title_fullStr |
Current issues in ensemble methods and its applications
|
| title_full_unstemmed |
Current issues in ensemble methods and its applications
|
| title_short |
Current issues in ensemble methods and its applications
|
| title_sort |
current issues in ensemble methods and its applications
|