Measuring Success of Heterogeneous Ensemble Filter Feature Selection Models
Noureldien A. Noureldien1, Einas A. Mohammed2

1Noureldien A. Noureldien, Faculty of Computer Science and Information Technology, University of Science and Technology, Omdurman, Sudan.
2Einas A. Mohammed, Master's degree in Computer Science, University of Science and Technology, Omdurman, Sudan.
Manuscript received on February 10, 2020. | Revised Manuscript received on February 20, 2020. | Manuscript published on March 30, 2020. | PP: 1153-1158 | Volume-8 Issue-6, March 2020. | Retrieval Number: E4993018520/2020©BEIESP | DOI: 10.35940/ijrte.E4993.038620

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license.

Abstract: One problem in using ensemble feature selection models in machine learning is that there is no guarantee that an ensemble model will improve classification performance. This implies that different ensemble models have different success probabilities, i.e., different probabilities of improving machine learning performance. This paper introduces the concept of success probability for heterogeneous ensemble models and states the definitions, notation, and algorithms necessary for the mathematical formulation and computation of the success probability. To show how the theory is applied, we create an ensemble filter feature selection model that uses four filter feature selection algorithms (Correlation, Gain Ratio, Info Gain, and OneR) as base filters and Max as the combination method. Experimental results over a set of 9 machine learning algorithms show that the success probability of the developed ensemble filter model is 0.58.
Keywords: Filter Feature Selection, Ensemble Feature Selection Model, Combination Method, Success Probability, Classification Accuracy, Measuring Success Probability.
Scope of the Article: Classification.