Defect Prediction and Dimension Reduction Methods for Achieving Better Software Quality
Dhavakumar.P1, Gopalan. N. P2 

1Dhavakumar.P, Department of Computer Science and Engineering, Periyar Maniammai University, Vallam, Thanjavur.
2Gopalan. N. P, Department of Computer Application, National Institute of Technology, Tiruchirappalli.

Manuscript received on 21 March 2019 | Revised Manuscript received on 26 March 2019 | Manuscript published on 30 July 2019 | PP: 2168-2179 | Volume-8 Issue-2, July 2019 | Retrieval Number: B2703078219/19©BEIESP | DOI: 10.35940/ijrteB2703.078219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license.

Abstract: To improve software quality, a defect prediction process is proposed that uses cluster-based classifiers. The classifiers are first trained on software history and then used to predict whether an incoming change introduces a defect. Earlier classifier-based defect prediction methods suffer from performance that is inadequate for practical use and from slow prediction times caused by the large number of learned features. Feature selection is the process of choosing a subset of relevant features so that the quality of the prediction model can be improved: the prediction performance of the classification techniques is enhanced or sustained while the learning time is considerably reduced. This work begins with an overview of the datasets used for defect prediction and then presents a novel procedure for feature selection using wrapper methods, namely a Fuzzy Neural Network (FNN) and a Kernel-based Support Vector Machine (KSVM). The features chosen by the FNN and KSVM are treated as the significant features. The work examines several feature selection wrapper methods that are generally applicable to classification-based defect prediction. The system discards the least significant features until optimal classification performance is attained; the total number of features used for training is considerably reduced, frequently to less than 15% of the original. Standard performance metrics (Accuracy, Recall, Precision, and F-Measure) are used to evaluate the classification systems. The results demonstrate that the proposed Hybrid Hierarchical K-Centers (HHKC) clustering achieves better software quality than conventional clustering methods.
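The wrapper-style selection loop described in the abstract (repeatedly discarding features that do not help, until classification performance on held-out data peaks, scored by Precision/Recall/F-Measure) can be sketched in Python. This is a minimal illustration, not the paper's implementation: a plain nearest-centroid classifier stands in for the FNN/KSVM and HHKC models, and the function and variable names are assumptions for the sketch.

```python
def f_measure(y_true, y_pred):
    """F-Measure (harmonic mean of Precision and Recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def nearest_centroid_predict(X_train, y_train, X_test, feats):
    """Toy stand-in classifier: assign each test row to the nearest class
    centroid, computed over the selected feature subset `feats` only."""
    cents = {}
    for label in (0, 1):
        rows = [x for x, y in zip(X_train, y_train) if y == label]
        cents[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    preds = []
    for x in X_test:
        dists = {label: sum((x[f] - c[i]) ** 2 for i, f in enumerate(feats))
                 for label, c in cents.items()}
        preds.append(min(dists, key=dists.get))
    return preds

def wrapper_select(X_train, y_train, X_val, y_val, n_features):
    """Wrapper feature selection: greedily drop any feature whose removal
    does not reduce validation F-Measure, until no further drop helps."""
    feats = list(range(n_features))
    best = f_measure(y_val, nearest_centroid_predict(X_train, y_train, X_val, feats))
    improved = True
    while improved and len(feats) > 1:
        improved = False
        for f in list(feats):
            trial = [g for g in feats if g != f]
            score = f_measure(y_val,
                              nearest_centroid_predict(X_train, y_train, X_val, trial))
            if score >= best:  # removal kept (or improved) performance: discard f
                best, feats, improved = score, trial, True
                break
    return feats, best

# Illustrative toy data: feature 0 separates the classes; features 1-2 are noise.
X_train = [[1.0, 0.3, 0.9], [0.9, 0.7, 0.1], [0.1, 0.5, 0.8], [0.0, 0.4, 0.2]]
y_train = [1, 1, 0, 0]
X_val = [[0.95, 0.6, 0.5], [0.05, 0.6, 0.5]]
y_val = [1, 0]
selected, score = wrapper_select(X_train, y_train, X_val, y_val, 3)
```

On this toy data the loop discards both noise features and keeps only the informative one, mirroring the abstract's claim that the retained subset can shrink to a small fraction of the original feature set.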
Index Terms: Defect Prediction and Prevention, Static Code Features, Feature Selection, Kernel Based Support Vector Machine (KSVM), Hybrid Hierarchical K-Centers (HHKC).

Scope of the Article: Machine Design