Machine Learning based Test Case Prioritization in Object Oriented Testing
Ajmer Singh1, Rajesh Kumar Bhatia2, Anita Singhrova3

1Ajmer Singh*, CSE Department, DCRUST Murthal, Sonipat, India.
2Rajesh Kumar Bhatia, CSE Department, PEC University, Chandigarh, India.
3Anita Singhrova CSE Department, DCRUST Murthal, Sonipat, India. 

Manuscript received on 15 August 2019. | Revised Manuscript received on 25 August 2019. | Manuscript published on 30 September 2019. | PP: 700-707 | Volume-8 Issue-3 September 2019 | Retrieval Number: C3968098319/19©BEIESP | DOI: 10.35940/ijrte.C3968.098319
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Software maintenance is one of the most expensive activities in the software life cycle, accounting for nearly 70% of the total cost of the software. Software undergoes maintenance either to accommodate new requirements or to correct its functionality, and as a consequence of these maintenance activities it undergoes many modifications. Newly added components may affect the working of existing components and may also introduce faults into them. Regression testing tries to reveal the faults that might have been introduced by these modifications. Running all previously existing test cases may not be feasible owing to constraints on time, cost and resources. Test case prioritization helps by ordering the execution of test cases: exercising a faulty or fault-prone component early in the testing process may reveal more faults per unit of time and hence reduce the testing time. Many different criteria have been proposed for assigning priority to test cases, but none of the approaches so far has considered object oriented design metrics for determining the priority of test cases. Object oriented design metrics have been studied empirically for their impact on software maintainability, reliability, testability and quality, but the use of these metrics in test case prioritization is still an open area of research. The research reported in this paper evaluates a subset of the CK metrics suite: Coupling Between Objects (CBO), Depth of Inheritance Tree (DIT), Weighted Methods per Class (WMC), Number of Children (NOC), and Response For a Class (RFC). The study also considers four other metrics, namely publicly inherited methods (PIM), weighted attributes per class (WAC), number of methods inherited (NMI) and number of methods overridden. A model is built on these metrics to predict software quality, and based on the quality measures software modules are classified using the Support Vector Machine (SVM) algorithm. The proposed approach is implemented in the WEKA tool and analysed on experimental data extracted from open source software. The proposed work would first help the tester identify low quality modules and then prioritize the test cases using this quality centric approach. The work also attempts to automate test case prioritization in object oriented testing. The results obtained are encouraging.
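The pipeline described in the abstract can be illustrated with a short sketch. The paper itself uses WEKA's SVM implementation; the code below instead uses scikit-learn to show the same idea: train an SVM on the nine class-level design metrics, predict which classes are low quality, and move the test cases that exercise those classes to the front of the execution order. The file names, column names and the binary quality label are illustrative assumptions, not artefacts of the study.

```python
# Sketch of quality-centric test case prioritization with an SVM.
# Assumes a CSV of class-level metrics ("class_metrics.csv") and a
# mapping of test cases to the classes they exercise ("test_to_class.csv");
# both files and their column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

METRICS = ["CBO", "DIT", "WMC", "NOC", "RFC", "PIM", "WAC", "NMI", "NMO"]

# One row per class: its design metrics plus a quality label
# (1 = low quality / fault prone, 0 = high quality).
data = pd.read_csv("class_metrics.csv")
X, y = data[METRICS], data["low_quality"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

clf = SVC(kernel="rbf")        # SVM classifier, analogous to WEKA's SMO
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Prioritization step: test cases covering classes predicted as
# low quality are scheduled before the remaining test cases.
data["predicted_low_quality"] = clf.predict(data[METRICS])
test_map = pd.read_csv("test_to_class.csv")   # columns: test_case, class_name
ranked = (
    test_map.merge(data[["class_name", "predicted_low_quality"]], on="class_name")
            .sort_values("predicted_low_quality", ascending=False)
)
print(ranked["test_case"].drop_duplicates().tolist())
```

This is only a minimal sketch of the quality-centric ordering; the study's actual model construction, labelling of modules and evaluation are carried out on metrics extracted from open source software within WEKA.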
Keywords: Object oriented software testing, Machine learning, Test case prioritization, Object oriented metrics, Regression testing.

Scope of the Article: Machine Learning