Neural network (NN) execution on resource-constrained edge devices is increasing. Commonly, hardware accelerators are introduced in small devices to support the execution of NNs. However, an attacker can often gain physical access to edge devices, making side-channel attacks a potential threat for extracting valuable information about the NN. To keep the network secret and protect it from extraction, countermeasures are required. In this article, we propose a masked hardware accelerator for feed-forward NNs that uses fixed-point arithmetic and is protected against side-channel analysis (SCA). We use an existing arithmetic masking scheme and improve it to prevent incorrect results.
Moreover, we transfer the scheme to the hardware level and demonstrate the
security of the individual modules under the glitch-extended probing model.
To show the effectiveness of the masked design, we implement it on an FPGA
and measure its power consumption. The results show that, with two million
measurements, a t-test reveals no leakage of secret information.
In addition, we compare our accelerator with a
masked software implementation and other hardware designs.
The comparison indicates that our accelerator is up to 38 times
faster than the software implementation and improves throughput by a factor
of about 4.1 compared to other masked hardware accelerators.
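As a rough illustration of the underlying idea of arithmetic masking (not the exact scheme of this article), the following C sketch shows first-order additive masking of a 16-bit fixed-point value: the secret is split into two shares whose sum modulo 2^16 equals the original, and linear operations such as addition can be carried out share-wise so that no intermediate value depends on the unmasked secret. The bit width, the Q8.8 format, and the rand16() placeholder are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder RNG for illustration only; a real masked design needs a
 * suitable hardware randomness source. */
static uint16_t rand16(void) { return (uint16_t)(rand() & 0xFFFF); }

/* A secret fixed-point value x is represented by two shares
 * (s0, s1) with s0 + s1 == x (mod 2^16). */
typedef struct { uint16_t s0, s1; } masked16_t;

/* Split a secret into two random-looking shares. */
static masked16_t mask(uint16_t x) {
    uint16_t r = rand16();
    masked16_t m = { (uint16_t)(x - r), r };
    return m;
}

/* Recombine the shares (only done at the very end, if at all). */
static uint16_t unmask(masked16_t m) { return (uint16_t)(m.s0 + m.s1); }

/* Share-wise addition: each share is processed independently, so no
 * intermediate value reveals the unmasked operands or result. */
static masked16_t masked_add(masked16_t a, masked16_t b) {
    masked16_t c = { (uint16_t)(a.s0 + b.s0), (uint16_t)(a.s1 + b.s1) };
    return c;
}

int main(void) {
    /* Two fixed-point numbers in Q8.8 format: 1.5 and 2.25. */
    uint16_t a = 0x0180, b = 0x0240;
    masked16_t ma = mask(a), mb = mask(b);
    masked16_t mc = masked_add(ma, mb);
    printf("unmasked sum: 0x%04x (expected 0x03c0)\n", unmask(mc));
    return 0;
}
```

Note that this sketch covers only the additive, share-wise part; the masked multiplications needed for NN computations, the fix preventing incorrect results, and the glitch-extended hardware considerations addressed in the article are not shown here.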