Author: Zhang, Biwei
Date: 2024-07-26
Handle: http://hdl.handle.net/10393/46424
DOI: https://doi.org/10.20381/ruor-30457

Recognizing and categorizing objects in adverse weather conditions presents significant challenges for autonomous vehicles. To enhance the robustness of object detection systems, this thesis introduces an approach that leverages sensors and deep learning-based solutions for object detection at various levels within a traffic circle. The proposed approach improves the effectiveness of single-stage object detectors, aiming to advance performance in autonomous racing environments by reducing false detections and increasing recognition rates. The enhanced framework is based on a one-stage object detection model and incorporates multiple lightweight backbones; attention mechanisms are also integrated to further refine the detection process. Our proposed model outperforms state-of-the-art methods on the DAWN dataset, achieving a mean average precision (mAP) of 99.1%, surpassing the previous result of 84.7%.

Object detection is one of the most fundamental challenges in computer vision. Over the past decade, the rapid evolution of deep learning has led researchers to extensively experiment with and improve object detection, as well as related tasks such as object classification, localization, and segmentation, using deep models. Despite these advancements, identifying and classifying objects in challenging weather conditions remains a significant difficulty for autonomous vehicles. To improve the robustness of object detection systems, this thesis introduces an approach for detecting objects at different levels by leveraging sensors and deep learning-based solutions within a traffic circle.
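The mAP figure cited above is evaluated at an Intersection over Union (IoU) threshold of 0.5, meaning a detection counts as correct only when its predicted box overlaps a ground-truth box by at least 50% IoU. A minimal sketch of that matching criterion (the box format and function names are illustrative, not taken from the thesis):

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    # Under mAP@0.5, a prediction is a true positive only if IoU >= 0.5
    # with an unmatched ground-truth box of the same class.
    return iou(pred_box, gt_box) >= threshold
```

Per-class average precision is then computed over the precision-recall curve built from these matches, and mAP averages that over classes.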
We propose an enhanced framework for the one-stage object detection model, incorporating multiple lightweight backbones: ShuffleNetV2, GhostNet, MobileNetV3-Small, and VoVNet. We also integrate computer-vision attention mechanisms, including the SE, CBAM, and ECA blocks, and present a comparative analysis of the proposed single-stage object detectors across several YOLOv5 variants. Our results achieve 99.1% mAP at an Intersection over Union (IoU) threshold of 0.5, exceeding the state-of-the-art result of 94.7%. The proposed approach enhances the effectiveness of single-stage object detectors, reducing false detections and increasing recognition accuracy, and thereby significantly boosts their performance in autonomous racing scenarios under adverse weather conditions. This thesis also presents significant performance improvements for one-stage object detectors through pruning and quantization. These improvements are designed to ensure safe operation, support global traffic management, and enhance the mechanism and efficiency of practical 2D object detection under compromised visibility conditions.

Language: en
Keywords: Object Detection; Bounding Boxes; Adverse Weather; Self-driving Cars; Neural Network Architecture; Single Shot Detection (SSD); LiDAR Technology; Sensor Fusion; Multi-Object Detection; CNN (Convolutional Neural Network); Image Segmentation; Detection Accuracy; Real-Time Processing
Title: Enhanced Safety of Autonomous Driving in Real-World Adverse Weather Conditions via Deep Learning-Based Object Detection
Type: Thesis
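Of the attention mechanisms named above, the SE (squeeze-and-excitation) block is the simplest: it pools each channel to a scalar, passes the result through a small gated bottleneck, and rescales the channels. A minimal NumPy sketch assuming a (C, H, W) feature map with caller-supplied weights; the shapes and names are illustrative, not the thesis implementation:

```python
import numpy as np

def se_block(feature_map, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention (generic sketch).

    feature_map: (C, H, W); w1: (C//r, C), w2: (C, C//r) for reduction ratio r.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate -> per-channel weight in (0, 1)
    h = np.maximum(0.0, w1 @ z + b1)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))
    # Scale: reweight each channel of the input
    return feature_map * s[:, None, None]
```

CBAM extends this idea with a spatial attention map, and ECA replaces the bottleneck MLP with a cheap 1D convolution across channels; all three add negligible parameters relative to the backbone, which is why they pair well with the lightweight backbones listed above.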