Object detection in adverse weather condition for autonomous vehicles

Emmanuel Owusu Appiah, Solomon Mensah

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

As self-driving or autonomous vehicles proliferate in our society, their computer vision systems need to identify objects accurately regardless of the weather. One major concern in computer vision is improving an autonomous car’s capacity to discern the components of its environment under challenging conditions. For instance, inclement weather such as fog and rain can corrupt images, which in turn affects how well autonomous vehicles navigate and localise themselves. This study aims to provide an efficient and effective approach for autonomous vehicles to detect objects accurately during adverse weather conditions. It combines two deep learning approaches, namely YOLOv7 and ESRGAN. ESRGAN first learns from a set of training data and compensates for the unfavourable weather conditions in the images before the YOLOv7 detector performs object detection. This adaptive enhancement of each image improves the detection performance of YOLOv7. The hybrid approach (YOLOv7 + ESRGAN) performs well in both good and bad weather, achieving about 80% accuracy in detecting objects under adverse weather conditions. We recommend further study of the methodology utilised in this paper to tackle the trolley-dilemma problem during inclement weather.
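
The two-stage pipeline described in the abstract can be illustrated with a minimal sketch, assuming the ESRGAN enhancement network and the YOLOv7 detector are already loaded as PyTorch modules. The names `esrgan`, `yolov7`, and `enhance_then_detect` below are hypothetical placeholders, not the authors' released code: the enhancement model first restores the weather-degraded frame, and the detector then runs on the enhanced image.

```python
# Illustrative two-stage pipeline: ESRGAN-style enhancement followed by
# YOLOv7 detection. `esrgan` and `yolov7` are assumed to be preloaded
# torch.nn.Module instances (hypothetical placeholders for the real models).
import cv2
import numpy as np
import torch


def enhance_then_detect(frame_bgr, esrgan, yolov7, device="cuda"):
    """Enhance one weather-degraded frame, then detect objects in it."""
    # Convert the BGR uint8 frame to a normalised RGB tensor of shape (1, 3, H, W).
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).to(device)

    with torch.no_grad():
        enhanced = esrgan(x).clamp(0.0, 1.0)   # stage 1: adaptive image enhancement
        detections = yolov7(enhanced)          # stage 2: detection on the enhanced image

    return enhanced, detections
```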

Original language: English
Pages (from-to): 28235-28261
Number of pages: 27
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 9
DOIs
Publication status: Published - Mar 2024

Keywords

  • Adverse weather condition
  • Autonomous vehicles
  • Deep Learning
  • Object Detection
