
Mitigating the Effects of RRAM Process Variation on the Accuracy of Artificial Neural Networks

Weight storage is a key challenge in the efficient implementation of artificial neural networks. Novel memory technologies such as RRAM are able to greatly improve density and introduce non-volatility and multibit capabilities to this component of ANN accelerators. The usage of RRAM in this domain comes with downsides, mainly caused by cycle-to-cycle and device-to-device variability leading to erroneous readouts, which greatly affect digital systems. ANNs can compensate for this through their inherent redundancy and usually exhibit a gradual deterioration in the accuracy of the task at hand. This means that low error rates can be acceptable for weight storage in an ANN accelerator. In this work we link device-to-device variability to the accuracy of an ANN for such an accelerator. From this study, we can estimate how strongly a given network is affected by variability in a given device parameter. This methodology is then used to present three mitigation strategies and to evaluate how they affect the reaction of the network to variability: a) dropout layers, b) fault-aware training, and c) redundancy. These mitigations are then evaluated by their ability to improve accuracy and to lower hardware overhead, providing data for a real-world example. We improved this network's resilience in such a way that it could tolerate double the variation in one of the device parameters (the standard deviation of the oxide thickness can be 0.4 nm instead of 0.2 nm) while maintaining sufficient accuracy.
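The following is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of two ideas summarized in the abstract: injecting Gaussian device-to-device variation into stored weights to estimate its impact on accuracy, and fault-aware training, where the same perturbation is applied during the forward pass so the network learns to tolerate it. The relative-standard-deviation noise model, the network, and all parameter values are assumptions for illustration; the paper derives the weight-error statistics from RRAM device parameters such as oxide thickness.

    import torch
    import torch.nn as nn

    def perturb_weights(model: nn.Module, rel_sigma: float) -> None:
        """Add Gaussian device-to-device variation to every parameter in place
        (assumed noise model: sigma proportional to the stored weight magnitude)."""
        with torch.no_grad():
            for p in model.parameters():
                p.add_(torch.randn_like(p) * rel_sigma * p.abs())

    def fault_aware_step(model, loss_fn, optimizer, x, y, rel_sigma):
        """One training step with weight noise injected before the forward pass."""
        saved = [p.detach().clone() for p in model.parameters()]
        perturb_weights(model, rel_sigma)          # simulate erroneous readouts
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        # restore the clean weights before the optimizer update
        with torch.no_grad():
            for p, s in zip(model.parameters(), saved):
                p.copy_(s)
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                              nn.Dropout(0.2),     # mitigation a): dropout layer
                              nn.Linear(32, 4))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x, y = torch.randn(256, 16), torch.randint(0, 4, (256,))

        # mitigation b): fault-aware training with injected weight variation
        for _ in range(100):
            fault_aware_step(model, nn.CrossEntropyLoss(), opt, x, y, rel_sigma=0.05)

        # evaluate the trained network under a sweep of variation strengths
        model.eval()
        for sigma in (0.0, 0.05, 0.1, 0.2):
            state = {k: v.clone() for k, v in model.state_dict().items()}
            perturb_weights(model, sigma)
            acc = (model(x).argmax(1) == y).float().mean().item()
            model.load_state_dict(state)
            print(f"rel_sigma={sigma:.2f}  accuracy={acc:.3f}")

Redundancy (mitigation c) could be sketched analogously, e.g. by storing each weight on several simulated devices and averaging the perturbed readouts, at the cost of extra hardware.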

Metadata
Author: Markus Fritscher, Johannes Knödtel, Maen Mallah, Stefan Pechmann, Emilio Perez-Bosch Quesada, Tommaso Rizzi, Christian Wenger, Marc Reichenbach
DOI:https://doi.org/10.1007/978-3-031-04580-6_27
ISBN:978-3-031-04579-0
ISBN:978-3-031-04580-6
ISSN:0302-9743
ISSN:1611-3349
Title of the source (English):Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2021. Lecture Notes in Computer Science
Publisher:Springer
Document Type:Conference publication peer-reviewed
Language:English
Year of publication:2022
Tag:RRAM; memristive device; neural network
First Page:401
Last Page:417
Series ; volume number:Lecture Notes in Computer Science ; volume 13227
Faculty/Chair:Fakultät 1 MINT - Mathematik, Informatik, Physik, Elektro- und Informationstechnik / FG Halbleitermaterialien