We investigate the distributional shifts between datasets that pose a challenge to the validation of safety-critical driving functions incorporating Machine Learning (ML)-based algorithms. First, we describe the distributional shifts that can occur in highway driving datasets. Following this, we analyze, both qualitatively and quantitatively, the distributional shifts between two publicly available and widely used highway driving datasets. We demonstrate that a safety-critical driving function, e.g., lane change maneuver prediction, trained on one dataset does not generalize as expected to the other dataset in the presence of these distributional shifts. This highlights the impact that distributional shifts can have on safety-critical driving functions. We suggest that an analysis of the datasets used to train the ML-based algorithms incorporated in safety-critical driving functions plays an important role in building a safety argument for validation.
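To make the quantitative analysis concrete, the following is a minimal sketch, not the method from the paper, of how a covariate shift between two highway driving datasets might be measured: compare the empirical distributions of a single kinematic feature (vehicle speed, chosen here as an assumed example) using the Jensen-Shannon distance from SciPy. The speeds_a and speeds_b arrays are hypothetical stand-ins for feature columns extracted from the two datasets.

```python
# Minimal sketch of quantifying a covariate shift between two datasets
# by comparing the marginal distribution of one feature (vehicle speed).
# The synthetic samples below stand in for real dataset columns.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
speeds_a = rng.normal(33.0, 4.0, 10_000)  # stand-in for dataset A speeds [m/s]
speeds_b = rng.normal(27.0, 6.0, 10_000)  # stand-in for dataset B speeds [m/s]

# Histogram both samples over a shared support so the bins are comparable.
bins = np.linspace(0.0, 60.0, 61)
p, _ = np.histogram(speeds_a, bins=bins, density=True)
q, _ = np.histogram(speeds_b, bins=bins, density=True)

# jensenshannon normalizes its inputs; base=2 bounds the distance in [0, 1].
d = jensenshannon(p, q, base=2)
print(f"Jensen-Shannon distance between speed distributions: {d:.3f}")
```

A distance near 0 indicates closely matching marginals for that feature, while values approaching 1 signal a pronounced shift, the kind of discrepancy that would motivate the generalization check described above.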