Enhancing Perception Systems for Autonomous Vehicles Using Radar-Generated Occupancy Grid Maps
Publisher
Université d'Ottawa | University of Ottawa
Abstract
The continuous development of Intelligent Transportation Systems (ITS) and Autonomous Vehicles (AVs) promises to revolutionize transportation and society by enhancing road safety, traffic management, and environmental sustainability. Central to the realization of AVs are perception systems, which enable the vehicle to understand and interact with its surroundings. These systems rely on various sensors to measure and record data, which are then processed by complex algorithms to derive meaningful information. With the emergence of Deep Learning (DL) and comprehensive driving-scenario datasets, state-of-the-art methods have matured significantly, especially in object detection. These models mostly rely on cameras and LiDAR sensors for perception. Despite their superior performance under favorable conditions, they often degrade in real-world scenarios such as inadequate illumination or adverse weather. These challenges hinder the reliability of AVs and raise concerns about safe navigation.
To tackle these challenges, various solutions have been proposed by the computer vision community; however, camera and LiDAR sensors have inherent limitations, particularly against environmental challenges, that prevent them from performing reliably. Radar sensors, on the other hand, have been gaining popularity, and recent studies have shown their potential in tackling these challenges. Despite their advantages, Radar sensors remain under-explored compared to cameras and LiDARs, and the small number of available Radar-inclusive datasets, together with the sparsity of Radar recordings, makes it difficult to develop Radar-aided perception models.
In our research, we propose a unified framework that enhances the quality of Radar data frames by transforming sparse point clouds into high-quality, high-resolution Occupancy Grid Maps (OGMs). Our system utilizes Bayesian filtering and a Radar-specific inverse sensor model to adaptively estimate the occupancy state of different locations, represented by cells, across the mapped environment. The result is a rich representation of the environment that preserves structural details, in contrast to the indistinguishable structure of raw Radar point clouds. Our system reduces the extreme sparsity of Radar point clouds by more than 60% and the uncertainty in the whereabouts of obstacles by more than 25%. Furthermore, our experiments show that the OGMs improve the robustness of a baseline object detection model under foggy weather conditions.
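To make the mapping step concrete, the sketch below illustrates log-odds Bayesian occupancy updates over a 2D grid, accumulating several sparse frames into one dense map. It is a minimal illustration under stated assumptions, not the thesis's implementation: the grid dimensions, log-odds increments, and the simple ray-casting model (standing in here for the Radar-specific inverse sensor model) are all assumed values chosen for demonstration.

```python
# Minimal sketch of log-odds occupancy grid mapping from sparse point clouds.
# All parameters below are illustrative assumptions, not values from the thesis.
import numpy as np

GRID_SIZE = 200              # cells per side (assumed)
CELL_RES = 0.5               # metres per cell (assumed)
L_OCC, L_FREE = 0.85, -0.4   # log-odds increments for occupied/free evidence (assumed)
L_MIN, L_MAX = -4.0, 4.0     # clamping bounds so the filter stays responsive

def world_to_grid(xy):
    """Map metric (x, y) coordinates to integer grid indices (origin at grid centre)."""
    return np.floor(np.asarray(xy) / CELL_RES).astype(int) + GRID_SIZE // 2

def bresenham(p0, p1):
    """Integer line traversal from cell p0 to cell p1 (inclusive)."""
    cells = []
    x0, y0 = p0
    x1, y1 = p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def update_grid(log_odds, sensor_xy, points_xy):
    """Bayesian (log-odds) update: free space along each ray, occupancy at the hit."""
    origin = world_to_grid(sensor_xy)
    for hit in world_to_grid(points_xy):
        ray = bresenham(tuple(origin), tuple(hit))
        for cx, cy in ray[:-1]:          # traversed cells: evidence of free space
            log_odds[cy, cx] += L_FREE
        hx, hy = ray[-1]                 # end cell: evidence of occupancy
        log_odds[hy, hx] += L_OCC
    np.clip(log_odds, L_MIN, L_MAX, out=log_odds)

# Accumulate synthetic sparse frames (stand-ins for Radar detections) into one map.
log_odds = np.zeros((GRID_SIZE, GRID_SIZE))
rng = np.random.default_rng(0)
for _ in range(20):                                  # 20 synthetic frames
    detections = rng.uniform(-40, 40, size=(30, 2))  # ~30 points per frame, mimicking sparsity
    update_grid(log_odds, np.zeros(2), detections)
occupancy_prob = 1.0 / (1.0 + np.exp(-log_odds))     # recover P(occupied) per cell
```

Working in log-odds keeps each per-cell Bayesian update a simple addition, and clamping the values prevents the map from becoming over-confident when the same cell is observed many times.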
By addressing the quality issues in the Radar portion of currently available datasets, we facilitate the study of the impact and benefits of this sensing modality. Exploiting the unique characteristics of Radar sensors and combining them with cameras and LiDARs through more sophisticated fusion algorithms can lead to a more robust and reliable perception system, taking us one step closer to the realization of AVs.
Keywords
Adverse Weather Conditions, Autonomous Vehicles, Object Detection, Occupancy Grid Mapping, Perception System, Inverse Sensor Modelling, Radar, Sensors
