Toan-Khoa Nguyen, Lien T. Vu, Viet Q. Vu, Tien-Dat Hoang, Shu-Hao Liang, Minh-Quang Tran

Abstract

Object detection technology is an essential aspect of the development of autonomous vehicles. The crucial first step of any autonomous driving system is to understand the surrounding environment. In this study, we present an analysis of object detection models for the Duckietown robot based on You Only Look Once version 5 (YOLOv5) architectures. YOLO models are widely used to enhance the performance of object detection through neural network training. In the Duckietown case study, duckies and cones represent hazardous obstacles that vehicles must not drive into. This study uses the data architecture and classification dataset of Duckietown, a popular learning platform for autonomous vehicles, to analyze object detection models built on different YOLOv5 architectures. Moreover, the performance of different optimizers is evaluated and tuned for object detection. The experimental results show that the pre-trained large YOLOv5 model trained with Stochastic Gradient Descent (SGD) achieves the best accuracy, reaching a mean average precision (mAP) of 97.78%. The testing results can provide objective modeling references for related object detection studies.
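As an illustration of the detection workflow summarized above, the following is a minimal sketch of loading a large YOLOv5 model and running it on a Duckietown-style camera frame. It relies on the publicly available Ultralytics YOLOv5 hub interface rather than the exact pipeline used in this study; the image filename is hypothetical, and weights fine-tuned on the Duckietown classes (duckies, cones) would replace the generic pre-trained checkpoint shown here.

```python
import torch

# Load the large YOLOv5 variant (yolov5l) via the public Ultralytics hub interface.
# In the study's setting, this checkpoint would be replaced by weights fine-tuned
# on the Duckietown dataset (duckie and cone classes).
model = torch.hub.load('ultralytics/yolov5', 'yolov5l', pretrained=True)

# Hypothetical frame captured from the Duckietown robot's onboard camera.
results = model('duckietown_frame.jpg')

# Inspect detections: bounding boxes, confidence scores, and class labels.
results.print()
detections = results.pandas().xyxy[0]  # DataFrame with one row per detected object
print(detections[['xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'name']])
```

For training, recent versions of the Ultralytics YOLOv5 repository expose an optimizer selection option (e.g., SGD, Adam, AdamW) in their training script, which is the kind of mechanism an optimizer comparison such as the one reported in the abstract would typically rely on.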


Keywords

Object detection, Duckietown robot, YOLOv5 architectures, optimization functions
