Revisiting the conclusion instability issue in software effort estimation

Michael Franklin Bosu, Solomon Mensah, Kwabena Bennin, Diab Abuaiadah

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

Conclusion instability is the failure to observe the same effect under varying experimental conditions. Deep Neural Network (DNN) and ElasticNet software effort estimation (SEE) models were applied to two SEE datasets with a view to resolving the conclusion instability issue and assessing the suitability of ElasticNet as a viable SEE benchmark model. Results were mixed: both model types attained conclusion stability on the Kitchenham dataset, whereas conclusion instability persisted on the Desharnais dataset. ElasticNet was outperformed by DNN and is therefore not recommended as a SEE benchmark model.
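The abstract does not describe the authors' model configurations or preprocessing, so the following is only a minimal, hypothetical sketch of the kind of comparison reported: an ElasticNet regressor versus a small neural network (here scikit-learn's MLPRegressor as a stand-in for the paper's DNN) fitted to synthetic effort data standing in for Desharnais/Kitchenham-style features. All feature names, sizes, and hyperparameters below are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch: compare ElasticNet and a small neural network on
# synthetic software-effort data (NOT the authors' code or datasets).
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.neural_network import MLPRegressor  # stand-in for the paper's DNN
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 200
# Four synthetic predictors loosely mimicking size/team/duration-style features.
X = rng.uniform(1, 100, size=(n, 4))
# Synthetic "effort" target with noise (person-hours scale, purely illustrative).
y = 30 * X[:, 0] + 5 * X[:, 1] ** 1.2 + rng.normal(0, 50, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ElasticNet": ElasticNet(alpha=1.0, l1_ratio=0.5),
    "DNN (MLP stand-in)": MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.1f}")
```

Checking stability of such conclusions would then amount to repeating this comparison across datasets (e.g. Desharnais and Kitchenham) and seeing whether the same model wins under each condition.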

Original language: English
Title of host publication: Proceedings - SEKE 2018
Subtitle of host publication: 30th International Conference on Software Engineering and Knowledge Engineering
Publisher: Knowledge Systems Institute Graduate School
Pages: 368-371
Number of pages: 4
ISBN (Electronic): 1891706446
DOIs
Publication status: Published - 2018
Externally published: Yes
Event: 30th International Conference on Software Engineering and Knowledge Engineering, SEKE 2018 - Redwood City
Duration: 1 Jul 2018 - 3 Jul 2018

Publication series

Name: Proceedings of the International Conference on Software Engineering and Knowledge Engineering, SEKE
Volume: 2018-July
ISSN (Print): 2325-9000
ISSN (Electronic): 2325-9086

Conference

Conference: 30th International Conference on Software Engineering and Knowledge Engineering, SEKE 2018
Country/Territory: United States
City: Redwood City
Period: 1/07/18 - 3/07/18

Keywords

  • Conclusion Instability
  • Deep Neural Network
  • ElasticNet
  • Prediction model
  • Software Effort Estimation
