In this paper, we address the challenge of employing Machine Learning (ML) algorithms in safety-critical driving functions. Although ML algorithms demonstrate good performance in various driving tasks, e.g., detecting when other vehicles are about to change lanes, the challenge of validating these methods has been neglected. To this end, we introduce an interpretable Lane Change Detector (LCD) algorithm that leverages the performance of modern ML-based anomaly detection methods. We independently train three Deep Autoencoders (DAEs) on different driving maneuvers: lane keeping, right lane changes, and left lane changes. Lane changes are subsequently detected by observing the reconstruction errors at the output of each DAE. Since the detection is based purely on the reconstruction errors of independently trained DAEs, we show that the classification outputs are completely interpretable. We compare the proposed algorithm with black-box Recurrent Neural Network (RNN)-based classifiers, training all methods on realistic highway driving data. We discuss both the costs and the benefits of interpretable classification and demonstrate the inherent interpretability of the algorithm.
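The core detection step can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it assumes three small fully connected DAEs (one per maneuver), an illustrative feature dimension, and classification by the lowest mean-squared reconstruction error; the per-maneuver errors are what make the decision inspectable.

```python
# Sketch: maneuver classification via per-maneuver DAE reconstruction errors.
# Network sizes, feature dimension, and the MSE criterion are assumptions for
# illustration only.
import torch
import torch.nn as nn

class DAE(nn.Module):
    """Small fully connected autoencoder over a fixed-length feature window."""
    def __init__(self, n_features: int, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

MANEUVERS = ("lane_keeping", "right_lane_change", "left_lane_change")

def classify(models: dict, x: torch.Tensor):
    """Return the maneuver whose DAE reconstructs x best, plus all errors.

    Reporting every reconstruction error alongside the predicted label is
    what allows each individual decision to be inspected.
    """
    errors = {}
    with torch.no_grad():
        for name, model in models.items():
            recon = model(x)
            errors[name] = torch.mean((recon - x) ** 2).item()
    prediction = min(errors, key=errors.get)
    return prediction, errors

if __name__ == "__main__":
    n_features = 20  # e.g., a short window of lateral position/velocity features
    models = {name: DAE(n_features).eval() for name in MANEUVERS}  # assume trained
    sample = torch.randn(n_features)  # stand-in for a real feature window
    label, errors = classify(models, sample)
    print(label, errors)
```

In practice, each DAE would first be trained only on windows of its own maneuver class, so that a low reconstruction error signals "this window looks like data the model was trained on".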