Conditional max-preserving normalization: an innovative approach to combining diverse classification models

Bibliographic Details
Main Authors: Najafabadi, Amin Arab, Nejati, Faranak, Yap, Ng Keng, Md. Sultan, Abu Bakar, Ali, Mohamed Abdullahi, Ashani, Zahra Nazemi
Format: Article
Language: English
Published: Indonesian Society for Knowledge and Human Development, 2024
Online Access: http://psasir.upm.edu.my/id/eprint/118444/
http://psasir.upm.edu.my/id/eprint/118444/1/118444.pdf
Description
Summary: Ensemble learning is a widely recognized technique in Artificial Intelligence that boosts model performance by combining predictions from multiple classifiers. While traditional ensemble methods effectively combine classifiers within the same domain, they face challenges when integrating models that handle different tasks. This study introduces Conditional Max-Preserving Normalization, a novel approach that extends the applicability of ensemble methods across diverse classification domains. Rather than altering deep learning architectures, this method focuses on preserving the most significant prediction while proportionally scaling the others to ensure consistency in the combined output. The study used the SoftMax function to emulate classification tasks, generating probability vectors for both Human-Car and Cat-Dog classifications. The proposed method identifies the highest confidence value in the combined vector, counts its occurrences, sums the remaining values, and computes a Scale Rate to normalize the vector. The comparative evaluation demonstrated that Conditional Max-Preserving Normalization outperforms traditional ensemble methods in maintaining accuracy and reliability across diverse classification tasks. Formal verification using the Z3 solver affirmed the method's robustness, confirming that the combined vector maintains a valid probability distribution and retains the maximum value. Future research could focus on refining the method to eliminate conditions during normalization, adapting it for binary classification, exploring its application in sequential classification tasks, and extending its use to regression problems. This research lays the groundwork for more robust and adaptable ensemble learning models with potential applications in various real-world scenarios.
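The summary's description of the procedure (find the maximum confidence, count its occurrences, sum the remaining values, and compute a Scale Rate) can be sketched as follows. This is a minimal illustration based only on the abstract; the exact Scale Rate formula and edge-case handling in the published method are assumptions, and the function name is hypothetical:

```python
import numpy as np

def conditional_max_preserving_normalization(v):
    """Sketch of the normalization described in the abstract:
    keep every occurrence of the maximum confidence value unchanged,
    and rescale the remaining entries so the result sums to 1.
    The Scale Rate formula used here is an assumption."""
    v = np.asarray(v, dtype=float)
    m = v.max()                     # highest confidence value
    k = np.count_nonzero(v == m)    # number of occurrences of the maximum
    rest = v.sum() - k * m          # sum of the remaining values
    if rest > 0:
        # Scale Rate: distribute the leftover probability mass (1 - k*m)
        # proportionally over the non-maximal entries.
        scale_rate = (1.0 - k * m) / rest
        return np.where(v == m, v, v * scale_rate)
    # Degenerate case: every entry equals the maximum; fall back to
    # plain normalization.
    return v / v.sum()

# Combined vector from two emulated SoftMax outputs, e.g. a Human-Car
# classifier concatenated with a Cat-Dog classifier (values assumed):
combined = [0.9, 0.1, 0.7, 0.3]
normalized = conditional_max_preserving_normalization(combined)
```

With this sketch, the output sums to 1 (a valid probability distribution) and the maximum value 0.9 is preserved unchanged, which matches the two properties the abstract reports as formally verified with the Z3 solver.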