TY - GEN
T1 - The Significant Effects of Data Sampling Approaches on Software Defect Prioritization and Classification
AU - Bennin, Kwabena Ebo
AU - Keung, Jacky
AU - Monden, Akito
AU - Phannachitta, Passakorn
AU - Mensah, Solomon
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/7
Y1 - 2017/12/7
AB - Context: Recent studies have shown that the performance of defect prediction models can be affected when data sampling approaches are applied to imbalanced training data for building defect prediction models. However, the magnitude (degree and power) of the effect of these sampling methods on the classification and prioritization performances of defect prediction models is still unknown. Goal: To investigate the statistical and practical significance of using resampled data for constructing defect prediction models. Method: We examine the practical effects of six data sampling methods on the performances of five defect prediction models. The prediction performances of the models trained on default datasets (no sampling method) are compared with those of the models trained on resampled datasets (application of sampling methods). To decide whether the performance changes are significant, robust statistical tests are performed and effect sizes computed. Twenty releases of ten open source projects extracted from the PROMISE repository are considered and evaluated using the AUC, pd, pf and G-mean performance measures. Results: There are statistically significant differences and practical effects on the classification performance (pd, pf and G-mean) between models trained on resampled datasets and those trained on the default datasets. However, sampling methods have no statistically significant or practical effect on defect prioritization performance (AUC), with small or negligible effect sizes obtained from the models trained on the resampled datasets. Conclusions: Existing sampling methods can properly set the threshold between buggy and clean samples, but they cannot improve the prediction of defect-proneness itself. Sampling methods are highly recommended for defect classification purposes when all faulty modules are to be considered for testing.
KW - Defect prediction
KW - Empirical software engineering
KW - Imbalanced data
KW - Sampling methods
KW - Statistical significance
UR - http://www.scopus.com/inward/record.url?scp=85042378748&partnerID=8YFLogxK
U2 - 10.1109/ESEM.2017.50
DO - 10.1109/ESEM.2017.50
M3 - Conference contribution
AN - SCOPUS:85042378748
T3 - International Symposium on Empirical Software Engineering and Measurement
SP - 364
EP - 373
BT - Proceedings - 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2017
PB - IEEE Computer Society
T2 - 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2017
Y2 - 9 November 2017 through 10 November 2017
ER -