Targeted ranking-based clustering using AHP K-means

Bibliographic Details
Format: Restricted Document
_version_ 1860797413073092608
building INTELEK Repository
collection Online Access
collectionurl https://intelek.unisza.edu.my/intelek/pages/search.php?search=!collection407072
date 2015-12-30 12:11:09
format Restricted Document
id 12615
institution UniSZA
internalnotes [1] A. K. Jain, "Data clustering: 50 years beyond K-means," Pattern Recognition Letters, vol. 31, no. 8, pp. 651–666, Jun. 2010.
[2] R. Giancarlo, G. Lo Bosco, and L. Pinello, "Distance functions, clustering algorithms and microarray data analysis," in Proceedings of the 4th International Conference on Learning and Intelligent Optimization, 2010, pp. 125–138.
[3] S. Chormunge and S. Jena, "Evaluation of clustering algorithms for high dimensional data based on distance functions," in Proceedings of the 2014 International Conference on Information and Communication Technology for Competitive Strategies, 2014, Article No. 78.
[4] A. Singh, A. Yadav, and A. Rana, "K-means with three different distance metrics," International Journal of Computer Applications, vol. 67, no. 10, pp. 13–17, 2013.
[5] D. Hand, H. Mannila, and P. Smyth, Principles of Data Mining. Cambridge, MA: MIT Press, 2001.
[6] D. Ienco, R. G. Pensa, and R. Meo, "From context to distance," ACM Transactions on Knowledge Discovery from Data, vol. 6, no. 1, pp. 1–25, 2012.
[7] S. Poomagal and T. Hamsapriya, "Optimized k-means clustering with intelligent initial centroid selection for web search using URL and tag contents," in Proceedings of the International Conference on Web Intelligence, Mining and Semantics (WIMS '11), 2011, Article No. 65.
[8] M. Erisoglu, N. Calis, and S. Sakallioglu, "A new algorithm for initial cluster centers in k-means algorithm," Pattern Recognition Letters, vol. 32, no. 14, pp. 1701–1705, 2011.
[9] K. Lin, Z. Zhang, and J. Chen, "A K-means clustering with optimized initial center based on Hadoop platform," in 9th International Conference on Computer Science & Education, 2014, pp. 263–266.
[10] G. Shi, B. Gao, and L. Zhang, "The optimized K-means algorithms for improving randomly-initialed midpoints," in 2nd International Conference on Measurement, Information and Control, 2013, pp. 1212–1216.
[11] X. Chen and Y. Xu, "K-means clustering algorithm with refined initial center," in Proceedings of the 2nd International Conference on Biomedical Engineering and Informatics (BMEI 2009), 2009, pp. 1–4.
[12] H. Ai and W. Li, "K-means initial clustering center optimal algorithm based on estimating density and refining initial," in 6th International Conference on New Trends in Information Science, Service Science and Data Mining (ISSDM), 2012, pp. 603–606.
[13] D. Reddy and P. K. Jana, "Initialization for K-means clustering using Voronoi diagram," Procedia Technology, vol. 4, pp. 395–400, 2012.
[14] S. S. Khan and A. Ahmad, "Cluster center initialization algorithm for K-modes clustering," Expert Systems with Applications, vol. 40, no. 18, pp. 7444–7456, 2013.
[15] S. Dhanabal and S. Chandramathi, "An efficient k-means initialization using minimum-average-maximum (MAM) method," Asian Journal of Information Technology, vol. 12, no. 2, pp. 77–82, 2013.
[16] D. Mavroeidis and E. Marchiori, "Feature selection for k-means clustering stability: theoretical analysis and an algorithm," Data Mining and Knowledge Discovery, vol. 28, no. 4, pp. 918–960, May 2013.
[17] J. Z. Huang, M. K. Ng, H. Rong, and Z. Li, "Automated variable weighting in k-means type clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 657–668, 2005.
[18] X. Chen, X. Xu, J. Z. Huang, and Y. Ye, "TW-k-means: automated two-level variable weighting clustering algorithm for multiview data," IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 4, pp. 932–944, 2013.
[19] A. M. Baswade, K. D. Joshi, and P. S. Nalwade, "A comparative study of K-means and weighted K-means for clustering," International Journal of Engineering Research and Technology, vol. 1, no. 10, pp. 1–4, 2012.
[20] X. Chen, W. Yin, P. Tu, and H. Zhang, "Weighted k-means algorithm based text clustering," in 2009 International Symposium on Information Engineering and Electronic Commerce, 2009, pp. 51–55.
[21] S. Miyamoto, M. Yamazaki, and W. Hashimoto, "Fuzzy semi-supervised clustering with target clusters using different additional terms," in IEEE International Conference on Granular Computing (GrC '09), 2009, pp. 444–449.
[22] H. Liu and X. Yu, "Application research of k-means clustering algorithm in image retrieval system," in Proceedings of the Second Symposium on International Computer Science and Computational Technology (ISCSCT '09), 2009, vol. 7, pp. 274–277.
[23] A. Rad, B. Naderi, and M. Soltani, "Clustering and ranking university majors using data mining and AHP algorithms: a case study in Iran," Expert Systems with Applications, vol. 38, no. 1, pp. 755–763, 2011.
[24] A. H. Azadnia, P. Ghadimi, and M. M. Aghdam, "A hybrid model of data mining and MCDM methods for estimating customer lifetime value," in 41st International Conference on Computers and Industrial Engineering, 2011, pp. 80–85.
[25] K. Kumar and S. Kumanan, "Decision making in location selection: an integrated approach with clustering and TOPSIS," The IUP Journal of Operations Management, vol. 11, no. 1, pp. 7–20, 2012.
[26] B. Dong, P. Gao, H. Wang, and S. Liao, "Clustering human wrist pulse signals via multiple criteria decision making," in 2014 IEEE 26th International Conference on Tools with Artificial Intelligence, 2014, pp. 243–250.
[27] C. Bai, D. Dhavale, and J. Sarkis, "Integrating fuzzy c-means and TOPSIS for performance evaluation: an application and comparative analysis," Expert Systems with Applications, vol. 41, no. 9, pp. 4186–4196, 2014.
[28] T. L. Saaty, "Decision making with the analytic hierarchy process," International Journal of Services Sciences, vol. 1, no. 1, pp. 83–97, 2008.
originalfilename 6922-01-FH02-FIK-15-04680.jpg
person norman
recordtype oai_dc
resourceurl https://intelek.unisza.edu.my/intelek/pages/view.php?ref=12615
spelling 12615 https://intelek.unisza.edu.my/intelek/pages/view.php?ref=12615 https://intelek.unisza.edu.my/intelek/pages/search.php?search=!collection407072 Restricted Document Article Journal image/jpeg 1429x766 6922-01-FH02-FIK-15-04680.jpg norman 2015-12-30 12:11:09 UniSZA Private Access Targeted ranking-based clustering using AHP K-means International Journal of Advances in Soft Computing and its Applications, vol. 7, no. 3, pp. 100-113, International Center for Scientific Research and Studies
spellingShingle Targeted ranking-based clustering using AHP K-means
summary K-Means groups objects with similar features into a specified number (K) of clusters, where similarity is measured by the closest distance between objects' multi-feature coordinates. However, such distance measurement may not suit every clustering application, because it does not account for what the object features represent. An ordinal feature, for example, may denote a ranking of objects rather than a plain numeric value. The clustering result should therefore respect the existing rank labels on these objects instead of relying on distance alone. A new AHP K-Means technique is proposed to preserve the rank order of each object in the clustering result. It transforms weighted multi-feature objects into single ranking scores by aggregating pairwise comparisons among the objects. These ranked objects are then processed by K-Means with cluster centers initialized evenly across the ranking scale. In an experiment on the weighted course marks of 92 students, the proposed ranking-based AHP clustering gave more accurately ranked clusters than a conventional weighted K-Means.
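The two-stage idea described in the abstract (AHP pairwise aggregation of weighted features into a single ranking score, then K-means with centers spread evenly over the ranking scale) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the function names, the row-geometric-mean approximation of the AHP priority vector, and the toy data are all assumptions.

```python
import numpy as np

def ahp_rank_scores(objects, feature_weights):
    """Aggregate weighted multi-feature objects into one ranking score each.

    For each feature, a pairwise comparison matrix a_ik = v_i / v_k is
    built, its priority vector approximated by the row geometric mean
    (a standard AHP approximation), and the per-feature priorities are
    combined using the given feature weights.
    """
    objects = np.asarray(objects, dtype=float)
    n, m = objects.shape
    scores = np.zeros(n)
    for j in range(m):
        col = objects[:, j]
        pairwise = col[:, None] / col[None, :]      # pairwise comparison matrix
        gm = pairwise.prod(axis=1) ** (1.0 / n)     # row geometric means
        priorities = gm / gm.sum()                  # normalized priority vector
        scores += feature_weights[j] * priorities
    return scores

def kmeans_1d(scores, k, iters=50):
    """1-D K-means; initial centers evenly distributed over the score range."""
    centers = np.linspace(scores.min(), scores.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(scores[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = scores[labels == c].mean()
    return labels, centers

# Toy example (hypothetical data): six "students" with two weighted course marks.
marks = [[90, 85], [88, 80], [60, 65], [55, 70], [30, 40], [35, 30]]
weights = [0.6, 0.4]
scores = ahp_rank_scores(marks, weights)
labels, centers = kmeans_1d(scores, k=3)
```

Because every object is reduced to a single score before clustering, the resulting clusters are contiguous segments of the ranking, which is what preserves rank order in the final grouping.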
title Targeted ranking-based clustering using AHP K-means
title_full Targeted ranking-based clustering using AHP K-means
title_fullStr Targeted ranking-based clustering using AHP K-means
title_full_unstemmed Targeted ranking-based clustering using AHP K-means
title_short Targeted ranking-based clustering using AHP K-means
title_sort targeted ranking-based clustering using ahp k-means