Safer Learning-enabled Autonomous Systems
Publisher
Université d'Ottawa | University of Ottawa
Abstract
The safety of learning-enabled autonomous systems is critical for their reliable operation in real-world applications. However, ensuring the safety of learned components in these systems is highly challenging due to the complexity of their behaviors and interactions with the system’s operational contexts. This thesis presents two contributions addressing key challenges in learning-enabled autonomous system safety.
First, we propose MLCSHE (ML Component Safety Hazard Envelope), a novel method to identify the hazard boundary of learned components in a learning-enabled autonomous system. MLCSHE uses a Cooperative Co-Evolutionary Algorithm (CCEA) to decompose the high-dimensional problem space into manageable subproblems, efficiently exploring safe and unsafe regions. Additionally, we introduce a probabilistic view of the hazard boundary, incorporating a custom fitness function that drives the search. Empirical evaluation on an Autonomous Vehicle case study shows that MLCSHE outperforms traditional methods like genetic algorithms in both effectiveness and efficiency.
Second, we address the challenge of developing real-time safety monitors for learned components. We propose a method based on Deep Learning (DL)-based probabilistic time series forecasting to predict safety violations given the operational context. Through empirical evaluation on autonomous aviation and autonomous driving case studies, we assess the accuracy, latency, and resource usage of several forecasting models. Among them, the Temporal Fusion Transformer (TFT) demonstrates superior performance in predicting imminent safety violations while maintaining acceptable computational overhead.
Together, these contributions advance the state of the art in safety assurance of learning-enabled autonomous systems, facilitating their safer deployment in complex environments.
Finally, the thesis discusses the broader practical and policy implications of the proposed methods and outlines directions for future research.
Keywords
AI System Safety, Cooperative Co-Evolution, Safety Monitoring, Learning-enabled Autonomous System
