Spiking neural networks offer the potential to drastically reduce energy consumption in edge devices. Unfortunately, they are overshadowed by today's common analog neural networks, whose backpropagation-based learning algorithms frequently achieve superhuman performance on a variety of tasks. The best accuracies in spiking networks are currently obtained by training analog networks and converting them into spiking ones. Still, at runtime many simulation time steps are needed before the converted networks converge. To reduce this simulation time, we evaluate two inference optimization algorithms and propose an additional method for error minimization. We assess them on Residual Networks of different sizes, up to ResNet101. The combination of all three is evaluated at large scale with a RetinaNet on the COCO dataset. Our experiments show that the combined optimizations speed up inference by a factor of ten. Moreover, the accuracy loss between the original and the converted network is less than half a percent, the lowest reported to date on a complex dataset.