Improving the Robustness of Neural Networks to Noisy Multi-Level Non-Volatile Memory-based Synapses
Abstract
The implementation of Artificial Neural Networks (ANNs) using analog Non-Volatile Memories (NVMs) for synaptic weight storage promises improved energy efficiency and higher density compared to fully digital implementations. However, NVMs are prone to variability, which degrades the accuracy of ANNs. In this paper, a general methodology to evaluate and enhance the accuracy of neural networks implemented with non-ideal multi-level NVMs is presented. A hardware fault model capturing the variability of NVMs is proposed, distinguishing two types of errors, namely static and dynamic. Considering various neural networks, it is shown that error-aware training greatly increases robustness to errors compared to standard, error-agnostic training. Moreover, Recurrent NNs (RNNs) and Spiking NNs (SNNs) are found to be inherently more robust to dynamic errors than Convolutional NNs (CNNs). In addition, new insights into the adaptability of neural networks to noisy multi-level NVMs are presented, which could further improve their robustness in this context. The methodology aims to provide tools for hardware-software co-design, paving the way for a broader use of multi-level NVM-based synapses.
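To make the two-error fault model concrete, the following is a minimal sketch, not taken from the paper, of how noisy multi-level NVM synapses might be simulated: weights are quantized to a small number of conductance levels, then perturbed by a static error (drawn once, e.g. at programming time) and a dynamic error (redrawn at every read, e.g. read noise). The function names and parameters (quantize, apply_fault_model, n_levels, sigma_static, sigma_dynamic) and the Gaussian noise assumption are illustrative choices, not the authors' actual model.

```python
# Sketch of a static/dynamic NVM fault model on quantized synaptic weights.
# All level counts and noise scales below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, n_levels=8):
    """Map weights onto n_levels evenly spaced conductance levels."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((w - lo) / step) * step

def apply_fault_model(w_levels, sigma_static=0.02, sigma_dynamic=0.01):
    """Return a read() function producing noisy views of the stored weights.

    Static error: sampled once (per programmed device) and reused.
    Dynamic error: resampled on every call (per read/inference).
    """
    static = rng.normal(0.0, sigma_static, w_levels.shape)  # fixed offset
    def read():
        dynamic = rng.normal(0.0, sigma_dynamic, w_levels.shape)
        return w_levels + static + dynamic
    return read

# Error-aware training would inject such noise on the forward pass so the
# network learns weights that tolerate it; here we only show one noisy read.
w = rng.normal(0.0, 0.5, (4, 4))
read = apply_fault_model(quantize(w))
print(read())
```

Under this kind of model, an error-agnostic network sees the static and dynamic perturbations only at deployment, whereas error-aware training exposes the network to them during learning, which is the mechanism the abstract credits for the improved robustness.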