Improving Decision Tree Forest using Preprocessed Data
Archana R. Panhalkar1, Dharmpal D. Doye2

1Archana R. Panhalkar*, Department of Computer Science and Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India.
2Dharmpal D. Doye, Department of Electronics and Telecommunication, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India.
Manuscript received on March 15, 2020. | Revised Manuscript received on March 24, 2020. | Manuscript published on March 30, 2020. | PP: 4457-4460 | Volume-8 Issue-6, March 2020. | Retrieval Number: F8136038620/2020©BEIESP | DOI: 10.35940/ijrte.F8136.038620

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license.

Abstract: Random forest is one of the best techniques in data mining for classification. It not only improves classification accuracy but also performs well across various data types. Data mining researchers have concentrated on improving random forests by constructing trees using various methods. In this paper, we improve the decision forest by applying various preprocessing techniques. The decision tree forest is created using bootstrapped samples. Trees created from preprocessed data improve not only classification accuracy but also the time required to construct the forest. Experiments are carried out on various UCI data sets to show the better performance of our proposed system.
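The pipeline described in the abstract — preprocess the data once, then grow each tree of the forest on its own bootstrapped sample — can be sketched as below. The concrete preprocessing steps and tree learner (e.g. C4.5, CART) are not detailed in this abstract, so the min-max scaling step, the function names, and the stub tree learner here are illustrative assumptions, not the authors' method:

```python
# Illustrative sketch: preprocess the training data, then build each tree
# of the forest on a bootstrapped (sampled-with-replacement) copy of it.
# The scaling step and the pluggable train_tree learner are assumptions;
# the paper's actual preprocessing techniques may differ.
import random

def min_max_scale(rows):
    """One possible preprocessing step: scale each feature to [0, 1]."""
    cols = list(zip(*rows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0  # avoid division by zero on constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled_cols)]

def bootstrap_sample(rows, rng):
    """Draw n rows with replacement, as in standard random-forest bagging."""
    n = len(rows)
    return [rows[rng.randrange(n)] for _ in range(n)]

def build_forest(rows, n_trees, train_tree):
    """Preprocess once, then train each tree on its own bootstrap sample."""
    rng = random.Random(0)
    prepared = min_max_scale(rows)
    return [train_tree(bootstrap_sample(prepared, rng)) for _ in range(n_trees)]

# Tiny demo with a stub "tree" learner (len) that just records its
# training-sample size; a real learner would fit a C4.5 or CART tree.
data = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
forest = build_forest(data, n_trees=3, train_tree=len)
print(forest)  # each "tree" saw a full-size bootstrap sample
```

Because the data is preprocessed once before bagging, the per-tree cost is only the sampling and the tree induction itself, which is where the construction-time saving claimed in the abstract would come from.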
Keywords: Random Forest, Decision Tree, C4.5, CART, Forest PA.
Scope of the Article: Decision making.