The formal verification of neural networks is essential for their
application in safety-critical environments. However, the set-based
verification of neural networks using linear approximations often
obtains overly conservative results, while nonlinear approximations
quickly become computationally infeasible in deep neural networks.
We address this issue for the first time by automatically balancing precision against computation time without splitting the
propagated set. Our work introduces a novel automatic abstraction
refinement approach using sensitivity analysis to iteratively reduce
the abstraction error at the neuron level until either the specifications are met or a maximum number of iterations is reached. Our
evaluation shows that we can tightly over-approximate the output
sets of deep neural networks and that our approach is up to a thousand times faster than a naive approach. We further demonstrate
the applicability of our approach in closed-loop settings.
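The refinement loop described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: abstraction errors and sensitivities are given as plain numbers, and all names (`refine`, `spec_satisfied`, the halving step) are hypothetical. It shows only the control flow: refine the neuron whose abstraction error contributes most to the output over-approximation, then re-check the specification, stopping at an iteration budget.

```python
def spec_satisfied(errors, sensitivities, tolerance):
    """Toy specification check: the sensitivity-weighted sum of
    neuron-level abstraction errors must stay below a tolerance."""
    return sum(e * s for e, s in zip(errors, sensitivities)) <= tolerance


def refine(errors, sensitivities, tolerance, max_iters=100):
    """Sensitivity-guided abstraction refinement loop (illustrative).

    errors        -- per-neuron abstraction error (hypothetical scalars)
    sensitivities -- per-neuron output sensitivity (hypothetical scalars)
    Returns (verified, iterations_used).
    """
    errors = list(errors)
    for it in range(max_iters):
        if spec_satisfied(errors, sensitivities, tolerance):
            return True, it  # specification met: verified
        # Sensitivity analysis: pick the neuron contributing most
        # to the over-approximation error at the output.
        worst = max(range(len(errors)),
                    key=lambda i: errors[i] * sensitivities[i])
        # Stand-in for a tighter (e.g. nonlinear) local approximation:
        # assume it halves that neuron's abstraction error.
        errors[worst] *= 0.5
    return False, max_iters  # budget exhausted: result inconclusive


verified, iters = refine([1.0, 2.0, 0.5], [0.9, 0.3, 1.0], tolerance=0.2)
print(verified, iters)
```

Because only the most influential neurons are refined, the loop avoids the cost of applying expensive approximations everywhere, which is the precision/time trade-off the abstract refers to.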