TY - JOUR
T1 - Improving software vulnerability classification performance using normalized difference measures
AU - Kudjo, Patrick Kwaku
AU - Brown, Selasie Aformaley
AU - Mensah, Solomon
N1 - Publisher Copyright:
© 2023, The Author(s) under exclusive licence to The Society for Reliability Engineering, Quality and Operations Management (SREQOM), India and The Division of Operation and Maintenance, Luleå University of Technology, Sweden.
PY - 2023/6
Y1 - 2023/6
N2 - Vulnerability Classification Models (VCMs) play a crucial role in software reliability engineering and have therefore attracted significant attention from researchers and practitioners. Recently, machine learning and data mining techniques have emerged as important paradigms for vulnerability classification. However, existing vulnerability classification models suffer from major drawbacks, including the difficulty of curating real vulnerability reports and their associated code fixes from large software repositories. Additionally, different types of features, such as traditional software metrics and text-mining features extracted from term vectors, are used to build vulnerability classification models, which often results in the curse of dimensionality. This significantly impacts both the time required for classification and the prediction accuracy of existing vulnerability classification models. To address these deficiencies, this study presents a vulnerability classification framework using term frequency-inverse document frequency (TF-IDF) and the normalized difference measure. In the proposed framework, the TF-IDF model is first used to compute the frequency and weight of each word in the textual description of vulnerability reports. The normalized difference measure is then employed to select an optimal subset of feature words or terms for the machine learning algorithms. The proposed approach was validated on three vulnerable software applications containing a total of 3949 real vulnerabilities, using five machine learning algorithms: Naïve Bayes, Naïve Bayes Multinomial, Support Vector Machines, K-Nearest Neighbor, and Decision Tree. Standard classification evaluation metrics such as precision, recall, F-measure, and accuracy were applied to assess the performance of the models, and the results were validated using the Welch t-test and Cliff's delta effect size. The outcome of this study demonstrates that the normalized difference measure combined with the K-Nearest Neighbor algorithm significantly improves the accuracy of vulnerability report classification.
AB - Vulnerability Classification Models (VCMs) play a crucial role in software reliability engineering and have therefore attracted significant attention from researchers and practitioners. Recently, machine learning and data mining techniques have emerged as important paradigms for vulnerability classification. However, existing vulnerability classification models suffer from major drawbacks, including the difficulty of curating real vulnerability reports and their associated code fixes from large software repositories. Additionally, different types of features, such as traditional software metrics and text-mining features extracted from term vectors, are used to build vulnerability classification models, which often results in the curse of dimensionality. This significantly impacts both the time required for classification and the prediction accuracy of existing vulnerability classification models. To address these deficiencies, this study presents a vulnerability classification framework using term frequency-inverse document frequency (TF-IDF) and the normalized difference measure. In the proposed framework, the TF-IDF model is first used to compute the frequency and weight of each word in the textual description of vulnerability reports. The normalized difference measure is then employed to select an optimal subset of feature words or terms for the machine learning algorithms. The proposed approach was validated on three vulnerable software applications containing a total of 3949 real vulnerabilities, using five machine learning algorithms: Naïve Bayes, Naïve Bayes Multinomial, Support Vector Machines, K-Nearest Neighbor, and Decision Tree. Standard classification evaluation metrics such as precision, recall, F-measure, and accuracy were applied to assess the performance of the models, and the results were validated using the Welch t-test and Cliff's delta effect size. The outcome of this study demonstrates that the normalized difference measure combined with the K-Nearest Neighbor algorithm significantly improves the accuracy of vulnerability report classification.
KW - Feature selection
KW - Normalized difference measure
KW - Severity
KW - Software vulnerability
UR - http://www.scopus.com/inward/record.url?scp=85152536647&partnerID=8YFLogxK
U2 - 10.1007/s13198-023-01911-6
DO - 10.1007/s13198-023-01911-6
M3 - Article
AN - SCOPUS:85152536647
SN - 0975-6809
VL - 14
SP - 1010
EP - 1027
JO - International Journal of System Assurance Engineering and Management
JF - International Journal of System Assurance Engineering and Management
IS - 3
ER -