Modeling resistive RAMs (RRAMs) is a herculean task due to their non-linearity. While the exigent need for a model has motivated research groups to formulate realistic models, the diversity in RRAM characteristics has created a gap between model developers and model users. This paper bridges the gap by proposing an algorithm by which the parameters of a model are tuned to specific RRAMs. To this end, a physics-based compact model was chosen for its flexibility, and the proposed algorithm was used to fit the model exactly to different RRAMs that differ greatly in their material composition and switching behavior. Furthermore, the model was extended to simulate multiple low resistance states (LRS), a vital focus of research to increase memory density in RRAMs. The ability of the model to simulate switching from a high resistance state to multiple LRS was verified by measurements on 1T-1R cells.
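Since the abstract does not spell out the tuning algorithm itself, the following minimal sketch only illustrates the general idea of fitting compact-model parameters to measured device data with a least-squares fit. The simplified exponential-sinh read-current form and all measurement values are placeholder assumptions, not the paper's model or data.

```python
# Illustrative sketch only: fit lumped parameters of a simplified RRAM read-current
# equation to hypothetical measured I-V points. The functional form and the data are
# stand-ins, not the compact model or measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

def read_current(v, i_pref, v0):
    """Simplified read current; i_pref lumps the gap-dependent prefactor into one parameter."""
    return i_pref * np.sinh(v / v0)

# Hypothetical measured I-V points of a 1T-1R cell in its low resistance state.
v_meas = np.linspace(0.05, 0.5, 10)
i_meas = 1.4e-5 * np.sinh(v_meas / 0.25)        # stand-in for lab data

popt, _ = curve_fit(read_current, v_meas, i_meas, p0=[1e-5, 0.3])
print("fitted parameters (i_pref, v0):", popt)
```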
The quest to increase memory density in Resistive Random Access Memory (RRAM) has motivated researchers to store more bits/cell by implementing Multi-Level Cell (MLC) or multi-bit RRAM. Implementing multiple states narrows the distance between states, making sensing of MLC RRAM a challenging task. In this paper, we present a circuit which senses the state of an MLC by converting the current drawn from the cell into voltage pulses, where the number of pulses is proportional to the current's magnitude. The circuit distinguishes between the states by the relative magnitude of the currents and hence does not require an absolute reference. Simulations in IHP's 130 nm CMOS technology confirmed fast (single-step) sensing while tolerating appropriate variations in the sensed resistance. The proposed circuit is also area-efficient compared to a conventional parallel sensing approach.
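As a rough behavioral illustration of the sensing idea (not the transistor-level circuit), the sketch below converts a cell current into a pulse count proportional to its magnitude; the conversion factor and the per-state currents are made-up values.

```python
# Behavioral sketch: states are told apart by relative pulse counts rather than by
# comparison against an absolute current or voltage reference. All numbers are
# illustration values, not from the paper.
def current_to_pulses(i_cell_uA, pulses_per_uA=4):
    """Map a read current (in microamps) to an integer pulse count."""
    return round(i_cell_uA * pulses_per_uA)

# Hypothetical MLC read currents for four resistance states.
state_currents_uA = {"00": 1.0, "01": 2.1, "10": 4.2, "11": 8.3}

counts = {state: current_to_pulses(i) for state, i in state_currents_uA.items()}
print(counts)   # distinguishing states only needs the counts to be ordered, not calibrated
```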
Incorporating Variability of Resistive RAM in Circuit Simulations using the Stanford–PKU Model
(2020)
Intrinsic variability observed in resistive-switching devices (cycle-to-cycle and device-to-device) is widely recognized as a major hurdle for the widespread adoption of Resistive RAM technology. While physics-based models have been developed to accurately reproduce the resistive-switching behavior, reproducing the observed variability of a specific RRAM device has not been studied. Without a properly fitted variability in the model, the simulation error introduced at the device level propagates through circuit-level to system-level simulations in an unpredictable manner. In this work, we propose an algorithm to fit a certain amount of variability to an existing physics-based analytical model (the Stanford-PKU model). The extent of variability exhibited by the device is fitted to the model in a manner agnostic to the cause of the variability. Furthermore, the model is modified to better reproduce the variations observed in a device. The model fitted with variability can reproduce cycle-to-cycle as well as device-to-device variations well. The significance of integrating variability into RRAM models is underscored using a sensing example.
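The fitting algorithm itself is not given in the abstract; as a rough sketch of what fitting variability can mean, the example below perturbs a single state variable with Gaussian noise and tunes the noise width until the simulated cycle-to-cycle spread matches a measured value. The gap-to-resistance mapping and the measured spread are placeholder assumptions.

```python
# Illustrative sketch only: tune the sigma of a Gaussian perturbation on a filament-gap
# value until the simulated relative spread of the low resistance state matches a
# (hypothetical) measured cycle-to-cycle spread.
import numpy as np

rng = np.random.default_rng(0)

def simulated_lrs_resistance(sigma_gap, n_cycles=2000, gap_nominal=1.0):
    gaps = gap_nominal + rng.normal(0.0, sigma_gap, n_cycles)
    return 1e3 * np.exp(gaps)              # toy gap-to-resistance mapping (ohms)

measured_rel_spread = 0.15                  # hypothetical measured std/mean of LRS resistance

def spread_error(sigma):
    r = simulated_lrs_resistance(sigma)
    return abs(np.std(r) / np.mean(r) - measured_rel_spread)

best_sigma = min(np.linspace(0.01, 0.5, 50), key=spread_error)
print("fitted gap sigma:", best_sigma)
```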
Over the last decades, parallel computing has gained more and more attention, not only in scientific research but also in industry. The reason is that common industrial applications are becoming increasingly performance-demanding. Conventional single-core designs can no longer meet the performance requirements because raising the clock frequency is not an option due to technological limitations. As a consequence, the efficient use of (embedded) multi-core CPUs and many-core platforms has become inevitable. 3D surface analysis of objects using white light interferometry is one such challenging application. The goal of this article is to assess which absolute run times and which speed-ups are possible for an established, parallelized white light interferometry preprocessing algorithm, called the Contrast Method, on an embedded system that works without any operating system. Currently, multi- and many-core systems are still not pervasive architectures in the embedded domain, even though state-of-the-art technologies allow such systems. In order to gain more insights into possible benefits, we decided to use a virtual environment that is able to simulate embedded multi-core as well as many-core systems and that enables running real application code on the designed system. The results show that a significant reduction of the execution times, and thus a significant speed-up, is possible when using a many-core platform instead of a design that implements only a single core. The algorithm was parallelized to obtain maximum performance from the many-core design.
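The abstract does not reproduce the Contrast Method itself; the sketch below assumes a common formulation of white light interferometry contrast (per-pixel maximum minus minimum intensity over the recorded frame stack) to show why this preprocessing step parallelizes so well across cores.

```python
# Illustrative sketch under an assumed contrast definition (max - min intensity per pixel
# over the depth scan). The operation is independent per pixel, so rows or tiles can be
# distributed across the cores of a multi-/many-core platform.
import numpy as np

def contrast_map(frames):
    """frames: array of shape (n_frames, height, width) with intensity values."""
    return frames.max(axis=0) - frames.min(axis=0)

# Hypothetical stack of 64 camera frames of size 480 x 640.
frames = np.random.default_rng(1).random((64, 480, 640))
c = contrast_map(frames)
print(c.shape)   # (480, 640)
```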
Many applications in scientific computing require solving one or more partial differential equations (PDEs). For this task, solvers from the class of multigrid methods are known to be amongst the most efficient. An optimal implementation, however, is highly dependent on the specific problem as well as on the target hardware. As energy efficiency is a major concern in today's computing centers, energy-efficient platforms such as ARM-based clusters are actively being researched. In this work, we present a domain-specific approach, starting with the problem formulation in a domain-specific language (DSL) and going down to code generation targeting a variety of systems, including embedded architectures. Furthermore, we present an approach to simulate embedded architectures to achieve an optimal hardware/software co-design, i.e., an optimal composition of software and hardware modifications. In this context, we use a virtual environment (OVP) that enables the adaptation of multicore models and their simulation in an efficient way. Our approach shows that execution time prediction for ARM-based platforms is feasible but has to be enhanced with more detailed cache and memory models. We substantiate our claims by providing results for the performance prediction of geometric multigrid solvers generated by the ExaStencils framework.
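As a minimal, hand-written illustration of what a geometric multigrid solver does (not code generated by ExaStencils), the sketch below runs a recursive V-cycle with weighted-Jacobi smoothing on the 1-D Poisson problem; the grid size is assumed to be 2^k + 1 points so coarsening ends at a single unknown.

```python
# Minimal geometric multigrid sketch for -u'' = f on [0,1] with zero Dirichlet boundaries:
# weighted-Jacobi smoothing, full-weighting restriction, linear interpolation.
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, w=2/3):
    for _ in range(sweeps):
        u = u + w * (h**2 / 2) * residual(u, f, h)   # D^-1 = h^2/2 for the 1-D Laplacian
    return u

def restrict(r):
    rc = np.zeros((len(r) - 1)//2 + 1)
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]   # full weighting
    return rc

def prolong(ec):
    ef = np.zeros(2*(len(ec) - 1) + 1)
    ef[::2] = ec
    ef[1::2] = 0.5*(ec[:-1] + ec[1:])                # linear interpolation
    return ef

def v_cycle(u, f, h, sweeps=3):
    if len(u) == 3:                                  # coarsest grid: one unknown, solve exactly
        u[1] = f[1] * h**2 / 2
        return u
    u = jacobi(u, f, h, sweeps)                      # pre-smoothing
    rc = restrict(residual(u, f, h))                 # coarse-grid correction
    u = u + prolong(v_cycle(np.zeros_like(rc), rc, 2*h, sweeps))
    return jacobi(u, f, h, sweeps)                   # post-smoothing

n = 129                                              # 2^7 + 1 grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                     # exact solution: sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```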
The movement of data between processing and memory units, often referred to as the 'von Neumann bottleneck', is the main reason for the degraded performance of contemporary computing systems. In an effort to overcome this bottleneck, methods to 'compute' at the location of data are being pursued in many emerging memories, including Resistive RAM (ReRAM). Although many prior works have pursued addition in memory, the latency of n-bit addition has not been judiciously optimized, resulting in a latency of O(n) or, at best, O(log(n)). Computing with three states can enable carry-free addition and result in a latency that is independent of the operand width (O(1)). In this work, we propose a method to perform carry-free addition completely in memory (a storage array, a processing array, and their peripheral circuitry). The proposed technique incurs a latency of 22 memory cycles, which outperforms other in-memory binary adders for n > 32. This speed is achieved at the cost of increased peripheral hardware.
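As an illustration of why three states per digit remove carry propagation (a generic redundant-digit scheme, not necessarily the paper's exact encoding), the sketch below adds a binary operand to a number whose radix-2 digits lie in {0, 1, 2}; each position produces at most one transfer to its neighbour, and that transfer can never ripple further.

```python
# Illustrative sketch of carry-free addition with a redundant radix-2 digit set {0,1,2}.
# Every digit position is resolved independently, so the latency is independent of n.
def carry_free_add(redundant, binary):
    """Digits are lists, least significant first; redundant digits in {0,1,2}, binary bits in {0,1}."""
    n = max(len(redundant), len(binary)) + 1
    r = redundant + [0] * (n - len(redundant))
    b = binary + [0] * (n - len(binary))
    transfer = [0] * (n + 1)
    interim = [0] * n
    for i in range(n):                     # every position is independent of all others
        transfer[i + 1], interim[i] = divmod(r[i] + b[i], 2)
    return [interim[i] + transfer[i] for i in range(n)]   # result digits stay in {0,1,2}

def to_int(digits):
    return sum(d << i for i, d in enumerate(digits))

x = [0, 1, 2, 1]                           # redundant encoding of 18
y = [1, 1, 0, 1]                           # ordinary binary encoding of 11
z = carry_free_add(x, y)
print(z, to_int(z))                        # [1, 0, 1, 1, 1] -> 29
```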
Personalization of gait neuroprosthetics is paramount to ensure their efficacy for users, who experience severe limitations in mobility without an assistive device. Our goal is to develop assistive devices that collaborate with and are tailored to their users, while allowing them to use as much of their existing capabilities as possible. Currently, personalization of devices is challenging, and technological advances are required to achieve this goal. Therefore, this paper presents an overview of challenges and research directions regarding an interface with the peripheral nervous system, an interface with the central nervous system, and the requirements of interface computing architectures. The interface should be modular and adaptable, such that it can provide assistance where it is needed. Novel data processing technology should be developed to allow for real-time processing while accounting for signal variations in the human. Personalized biomechanical models and simulation techniques should be developed to predict assisted walking motions and interactions between the user and the device. Furthermore, the advantages of interfacing with both the brain and the spinal cord or the periphery should be further explored. Technological advances of interface computing architecture should focus on learning on the chip to achieve further personalization. Furthermore, energy consumption should be low to allow for longer use of the neuroprosthesis. In-memory processing combined with resistive random access memory is a promising technology for both. This paper discusses the aforementioned aspects to highlight new directions for future research in gait neuroprosthetics.
The orthogonalization of Boolean functions in disjunctive form, that is, Boolean functions formed as a sum of products, is a classical problem in Boolean algebra. In this work, the novel orthogonalization methodology ORTH[ⴱ], a universally valid formula based on the combination technique »orthogonalizing difference-building ⴱ«, is presented. The technique ⴱ is used to transform a sum of products into a disjoint sum of products. The orthogonalization problem is thus solved by a novel formula in a mathematically simpler way. By a further procedure step of sorting product terms, a minimized disjoint sum of products can be reached. Compared to other methods or heuristics, ORTH[ⴱ] provides a faster computation time.
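The ORTH[ⴱ] formula itself is not reproduced here; the sketch below shows the classical cube-level "difference-building" idea behind orthogonalization, repeatedly applying the identity A + B = A + (not A)·B so that no two product terms of the result overlap.

```python
# Illustrative sketch of making a sum of products disjoint. Cubes are dicts mapping
# variable names to 0/1 literals; missing variables are don't-cares.
def disjoint(cube_a, cube_b):
    """True if the two cubes share no minterm."""
    return any(v in cube_b and cube_b[v] != bit for v, bit in cube_a.items())

def sharp(cube_b, cube_a):
    """cube_b AND NOT cube_a, returned as a list of mutually disjoint cubes."""
    if disjoint(cube_a, cube_b):
        return [cube_b]
    pieces, fixed = [], {}
    for v, bit in cube_a.items():
        if v not in cube_b:                       # only free variables of B can split it
            pieces.append({**cube_b, **fixed, v: 1 - bit})
        fixed[v] = bit
    return pieces

def orthogonalize(sop):
    """Turn a list of cubes (sum of products) into an equivalent disjoint sum of products."""
    result = []
    for term in sop:
        pieces = [term]
        for done in result:
            pieces = [q for p in pieces for q in sharp(p, done)]
        result.extend(pieces)
    return result

# Example: f = a*b + b*c becomes the disjoint form a*b + (not a)*b*c.
print(orthogonalize([{"a": 1, "b": 1}, {"b": 1, "c": 1}]))
```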
TReMo+: Modeling Ternary and Binary ReRAM-Based Memories With Flexible Write-Verification Mechanisms
(2022)
Non-volatile memory (NVM) technologies offer a number of advantages over conventional memory technologies such as SRAM and DRAM. These include a smaller area requirement, a lower energy requirement for reading and partly also for writing, and, of course, the non-volatility and especially the qualitative advantage of multi-bit capability. It is expected that memristors based on resistive random access memories (ReRAMs), phase-change memories, or spin-transfer torque random access memories will replace conventional memory technologies in certain areas or complement them in hybrid solutions. To support the design of systems that use NVMs, there is still research to be done on the modeling side. In this paper, we focus on multi-bit ternary memories in particular. Ternary NVMs allow the implementation of extremely memory-efficient ternary weights in neural networks, which achieve sufficiently high accuracy in inference, or they are part of fast carry-free ternary adders. Furthermore, we place a focus on the technology side of memristive ReRAMs. In this paper, a novel memory model at the circuit level is presented to support the design of systems that profit from ternary data representations. This model considers two read methods of ternary ReRAMs, namely serial read and parallel read. They are extensively studied and compared in this work, as is the write-verification method that is often used in NVMs to reduce device stress and increase endurance. In addition, a comprehensive tool for the ternary model was developed, which is capable of performing energy, performance, and area estimation for a given setup. In this work, three case studies were conducted, namely area cost per trit, excessive parameter selection for the write-verification method, and the assessment of pulse width variations and their energy-latency trade-off for the write-verification method in ReRAM.
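As a behavioral illustration of the write-verification mechanism modeled in TReMo+, the sketch below runs a generic program-and-verify loop; the pulse parameters, the cell response, and the target window are placeholder assumptions, not values from the paper.

```python
# Behavioral sketch of write-verification: apply a pulse, read back, stop once the cell
# is inside the target conductance window or the pulse budget is exhausted.
import random

def apply_pulse(conductance_uS, pulse_voltage, step_uS=5.0):
    """Toy cell model: each SET pulse raises the conductance by a noisy increment."""
    return conductance_uS + step_uS * pulse_voltage * random.uniform(0.7, 1.3)

def write_verify(target_uS, tolerance_uS=4.0, max_pulses=20, pulse_voltage=1.0):
    g = 10.0                                     # assumed initial conductance (microsiemens)
    for pulse in range(1, max_pulses + 1):
        g = apply_pulse(g, pulse_voltage)
        if abs(g - target_uS) <= tolerance_uS:   # verify step: read back and compare
            return pulse, g
    return max_pulses, g                         # target level not reached within the budget

pulses_used, final_g = write_verify(target_uS=60.0)
print(f"reached {final_g:.1f} uS after {pulses_used} pulse(s)")
```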
Redundant number systems (RNS) are a well-known technique to speed up arithmetic circuits. However, in a complete CPU, arithmetic circuits using RNS have so far only been included at the subcircuit level, e.g., inside the Arithmetic Logic Unit (ALU) for realizing division. Still, extending this approach to create a CPU with a complete data path based on RNS can be beneficial for speeding up data processing, since conversions between RNS and binary number representations inside the ALU are avoided. Therefore, with this paper we present a new CPU architecture called RISC-V3, which is compatible with the RISC-V instruction set but internally uses an RNS number representation to speed up instruction execution times and thus increase system performance. RISC-V is very suitable for RNS because it does not have a flags register, which would be expensive to calculate when using an RNS. To present reliable performance numbers, arithmetic circuits using RNS were realized in different semiconductor technologies. Moreover, an instruction set simulator was used to estimate system performance for a benchmark suite (Embench). Our results show that we are up to 81% faster with the RISC-V3 architecture compared to a binary one, depending on the executed benchmark and CMOS technology.
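As a generic illustration of a redundant-number datapath (not the RISC-V3 implementation), the sketch below accumulates values in carry-save form, where each addition has constant logic depth, and performs the carry-propagating conversion to ordinary binary only once, when the value leaves the redundant domain.

```python
# Illustrative sketch: keep a value as a pair of bit-vectors (s, c) with value s + c.
# Adding a new operand touches every bit independently; only the final conversion to
# plain binary requires carry propagation.
def csa_add(s, c, x):
    """One carry-save addition step: (s, c, x) -> (s', c') with s + c + x == s' + c'."""
    s_new = s ^ c ^ x                              # per-bit sum, no carry propagation
    c_new = ((s & c) | (s & x) | (c & x)) << 1
    return s_new, c_new

def to_binary(s, c):
    """Leaving the redundant domain needs one full carry-propagate addition."""
    return s + c

s, c = 0, 0
for operand in [13, 7, 42, 99]:                    # accumulate several values in redundant form
    s, c = csa_add(s, c, operand)
print(to_binary(s, c))                             # 161
```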
In embedded applications that use neural networks (NNs) for classification tasks, it is important to minimize not only the power consumption of the NN calculation but that of the whole system. Optimization approaches for individual parts exist, such as quantization of the NN or analog calculation of arithmetic operations. However, there is no holistic approach for a complete embedded system design that is generic enough in the design process to be used for different applications, yet specific enough in the hardware implementation to waste no energy for a given application. Therefore, we present a novel framework that allows an end-to-end ASIC implementation of low-power hardware for time series classification using NNs. This includes a neural architecture search (NAS), which optimizes the NN configuration for accuracy and energy efficiency at the same time. This optimization targets a custom-designed hardware architecture that is derived from the key properties of time series classification tasks. Additionally, a hardware generation tool is used that creates a complete system from the definition of the NN. This system uses local multi-level RRAM memory as weight and bias storage to avoid external memory accesses. Exploiting the non-volatility of these devices, such a system can use a power-down mode to save significant energy during the data acquisition process. Detection of atrial fibrillation (AFib) in electrocardiogram (ECG) data is used as an example for evaluating the framework. It is shown that a reduction of more than 95% in energy consumption compared to state-of-the-art solutions is achieved.
Computing systems are becoming more and more power-constrained due to unconventional computing requirements like computing on the edge, in-sensor computing, or simply an insufficient battery. Emerging non-volatile memories are being explored to build low-power computing circuits, and adders are one such circuit. In this work, we propose a low-power adder using a Ferroelectric Tunnel Junction (FTJ). FTJs are two-terminal devices in which the data is stored in the polarization state of the device. An FTJ-based majority gate is proposed, which uses a current-mode sensing technique to evaluate the majority of the inputs. By conditionally selecting between the majority and its complement, an XOR operation is implemented, thereby achieving full-adder functionality. Since the FTJ-based majority operation is slow, a ternary adder architecture is used to compensate for the speed loss. The proposed ternary adder has two full-adder stages and requires O(1) time for n-bit addition. The proposed adder is verified by simulation in 130 nm CMOS technology. A 32-bit addition can be achieved in 100 μs and consumes 0.78 pJ, which is very power-efficient (7.8 nW). The proposed adder can be used in applications where power consumption is crucial and speed is not a strict requirement.
A Reference-less Sense Amplifier to Sense pA Currents in Ferroelectric Tunnel Junction Memories
(2023)
Ferroelectric Tunnel Junction (FTJ) is an emerging non-volatile memory technology with increasing applications in memory and computing. Although the low currents of FTJ memories are an advantage from an energy point of view, sensing these memories is a challenge due to the pA read-out currents. In this work, we propose a Sense Amplifier (SA) specifically designed for FTJ memory technology. The proposed SA accumulates the charge at the gate of a read transistor and thus converts the pA current to hundreds of nA. A Schmitt trigger circuit is then used to differentiate between these two currents by using the threshold voltage of the Schmitt trigger as the reference. The proposed SA consumes an energy of 82 fJ/bit and does not require an absolute voltage/current reference for sensing, thus conserving on-chip area.
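A back-of-envelope calculation illustrates the accumulation principle described above; the capacitance, integration time, and state currents are illustrative assumptions rather than values from the paper.

```python
# Sketch of the accumulation principle: a pA cell current integrated onto a small gate
# capacitance builds up a gate voltage V = I*t/C, which can drive a read transistor and
# trip a Schmitt trigger whose own threshold serves as the reference. All numbers are
# illustrative assumptions.
I_HIGH = 50e-12     # assumed FTJ read current, high state (A)
I_LOW = 5e-12       # assumed FTJ read current, low state (A)
C_GATE = 1e-15      # assumed gate capacitance of the read transistor (F)
T_INT = 10e-6       # assumed integration time (s)

for label, i_cell in (("high state", I_HIGH), ("low state", I_LOW)):
    v_gate = i_cell * T_INT / C_GATE
    print(f"{label}: gate voltage after integration = {v_gate*1e3:.0f} mV")
# The two states settle at clearly separated gate voltages, so no external
# voltage/current reference is needed for the decision.
```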