Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications, including human-robot interaction and automated driving. Real-world implementations will need to guarantee their robustness against hardware soft errors corrupting the underlying platform memory. Based on the previously observed efficacy of activation clipping techniques, we build a prototypical safety case for classifier CNNs by demonstrating that range supervision represents a highly reliable fault detector and mitigator with respect to relevant bit flips, adopting an eight-exponent floating point data representation. We further explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors. As a safety-relevant end-to-end use case, we showcase the benefit of our approach in a vehicle classification scenario, using ResNet-50 and the traffic camera data set MIOVision. The quantitative evidence provided in this work can be leveraged to inspire further and possibly more complex CNN safety arguments.
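To make the core idea of range supervision concrete, the following is a minimal sketch, assuming PyTorch; the module name `RangeSupervision`, the bound values, and the placement after an activation function are illustrative assumptions, not the paper's exact implementation. Activations outside per-layer bounds recorded during fault-free profiling are treated as symptoms of a bit flip: they are flagged (detection) and clipped back into the safe interval (mitigation).

```python
# Hypothetical sketch of activation range supervision as a soft-error
# detector and mitigator; names and bounds are illustrative assumptions.
import torch
import torch.nn as nn


class RangeSupervision(nn.Module):
    """Clamps activations to bounds recorded during fault-free profiling."""

    def __init__(self, low: float, high: float):
        super().__init__()
        self.low, self.high = low, high
        self.fault_detected = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Detection: any out-of-range activation signals a potential bit flip.
        self.fault_detected = bool(((x < self.low) | (x > self.high)).any())
        # Mitigation: uniform clipping. The paper's non-uniform variants
        # would instead map outliers to values other than the interval edges.
        return torch.clamp(x, self.low, self.high)


# Usage: insert after an activation layer, e.g.
# nn.Sequential(conv, nn.ReLU(), RangeSupervision(low=0.0, high=6.0))
```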