Spiking neural networks offer the potential for ultra-low-power computation on specialized hardware, as they consume energy only when information is transmitted. Energy efficiency is highest when sparse temporal codes are used, in which every neuron emits at most one action potential. We present a mathematically lossless approach for converting a conventionally trained neural network into a temporal-coded spiking network. By analyzing the characteristics of spiking neurons, we eliminate the properties that are unfavorable for temporal coding and show the necessity of a global time reference. Using neural oscillations as this reference, the resulting network shows no loss in accuracy compared to the original network. Our approach makes it possible to run conventionally trained deep neural networks at the same accuracy while spending the smallest necessary amount of energy.
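To make the notion of a sparse temporal code concrete, the following is a minimal, hypothetical sketch (not the authors' conversion method) of time-to-first-spike coding, where a larger activation is represented by an earlier spike and each neuron fires at most once. The coding-window length `t_max` and the function names are illustrative assumptions.

```python
import numpy as np

T_MAX = 1.0  # assumed length of the coding window (arbitrary units)

def encode_ttfs(activations, t_max=T_MAX):
    """Map non-negative activations to spike times in [0, t_max].

    The largest activation spikes first (at t = 0); zero activations
    never spike, which we represent with np.inf. Each neuron thus
    emits at most one spike.
    """
    a = np.asarray(activations, dtype=float)
    a_max = a.max()
    if a_max <= 0:
        return np.full(a.shape, np.inf)  # no neuron spikes at all
    return np.where(a > 0, t_max * (1.0 - a / a_max), np.inf)

def decode_ttfs(times, a_max, t_max=T_MAX):
    """Invert the encoding, given the known maximum activation."""
    return np.where(np.isfinite(times), a_max * (1.0 - times / t_max), 0.0)

a = np.array([0.0, 0.5, 1.0, 0.25])
t = encode_ttfs(a)                 # largest value spikes earliest
a_rec = decode_ttfs(t, a.max())    # round-trip recovers the activations
```

This round trip is lossless by construction because the map between activation and spike time is affine and invertible; the harder part, which the paper addresses, is preserving this property through the nonlinear dynamics of actual spiking neurons.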