Improving software vulnerability classification performance using normalized difference measures

Patrick Kwaku Kudjo, Selasie Aformaley Brown, Solomon Mensah

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Vulnerability Classification Models (VCMs) play a crucial role in software reliability engineering and have therefore attracted significant attention from researchers and practitioners. Recently, machine learning and data mining techniques have emerged as important paradigms for vulnerability classification. However, existing vulnerability classification models have major drawbacks, including the difficulty of curating real vulnerability reports and their associated code fixes from large software repositories. Additionally, different types of features, such as traditional software metrics and text-mining features extracted from term vectors, are used to build vulnerability classification models, which often results in the curse of dimensionality. This significantly impacts both the time required for classification and the prediction accuracy of existing vulnerability classification models. To address these deficiencies, this study presents a vulnerability classification framework using term frequency-inverse document frequency (TF-IDF) and the normalized difference measure. In the proposed framework, the TF-IDF model is first used to compute the frequency and weight of each word in the textual description of vulnerability reports. The normalized difference measure is then employed to select an optimal subset of feature words or terms for the machine learning algorithms. The proposed approach was validated on three vulnerable software applications containing a total of 3949 real vulnerabilities, using five machine learning algorithms: Naïve Bayes, Naïve Bayes Multinomial, Support Vector Machines, K-Nearest Neighbor, and Decision Tree. Standard classification evaluation metrics such as precision, recall, F-measure, and accuracy were applied to assess the performance of the models, and the results were validated using the Welch t-test and Cliff's delta effect size. The outcome of this study demonstrates that the normalized difference measure and k-nearest neighbor significantly improve the accuracy of vulnerability report classification.
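The paper's exact pipeline is not reproduced in this record, but the two preprocessing steps the abstract describes can be illustrated with a minimal stdlib-Python sketch: TF-IDF weighting of report terms, followed by term ranking with one commonly cited formulation of the normalized difference measure, NDM(t) = |tpr − fpr| / min(tpr, fpr), where tpr and fpr are the term's document rates in the positive and negative classes. The function names and toy documents below are illustrative assumptions, not the authors' code.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights per term per document.
    docs: list of token lists. Returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({t: (c / len(doc)) * math.log(n / df[t])
                         for t, c in tf.items()})
    return weighted

def ndm_scores(docs, labels, eps=1e-6):
    """Rank terms by the normalized difference measure.
    Assumes binary labels (1 = target class); eps guards against min(tpr, fpr) = 0."""
    pos = [set(d) for d, y in zip(docs, labels) if y == 1]
    neg = [set(d) for d, y in zip(docs, labels) if y == 0]
    scores = {}
    for t in set().union(*pos, *neg):
        tpr = sum(t in d for d in pos) / max(len(pos), 1)  # rate in positive docs
        fpr = sum(t in d for d in neg) / max(len(neg), 1)  # rate in negative docs
        scores[t] = abs(tpr - fpr) / max(min(tpr, fpr), eps)
    return sorted(scores, key=scores.get, reverse=True)
```

In a framework of this shape, the top-k terms returned by `ndm_scores` would define the reduced vocabulary, and the TF-IDF vectors restricted to those terms would be fed to the classifiers.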
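The statistical validation the abstract mentions can likewise be sketched with stdlib Python: Welch's t statistic for two samples with unequal variances, and Cliff's delta as a nonparametric effect size. The p-value step (which needs the t distribution's CDF) is omitted here, and any input samples are hypothetical accuracy scores, not the paper's data.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic: mean difference scaled by per-sample variances."""
    return (mean(a) - mean(b)) / math.sqrt(variance(a) / len(a) + variance(b) / len(b))

def cliffs_delta(a, b):
    """Cliff's delta effect size: P(a > b) - P(a < b) over all cross pairs.
    Ranges over [-1, 1]; values near +/-1 indicate strong separation."""
    gt = sum(x > y for x in a for y in b)
    lt = sum(x < y for x in a for y in b)
    return (gt - lt) / (len(a) * len(b))
```

Comparing, say, per-fold accuracies of two classifiers with these two functions mirrors the kind of significance-plus-effect-size check the study reports.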

Original language: English
Pages (from-to): 1010-1027
Number of pages: 18
Journal: International Journal of System Assurance Engineering and Management
Volume: 14
Issue number: 3
DOIs
Publication status: Published - Jun 2023

Keywords

  • Feature selection
  • Normalized difference measure
  • Severity
  • Software vulnerability

