Bringing SOT-MRAM Tech Closer to Cache Memory
By Farrukh Yasin, Van Dai Nguyen, Siddharth Rao and Gouri Sankar Kar (imec)
EETimes (December 12, 2024)
For many decades, ultrafast but volatile SRAM has served as the embedded cache memory in high-performance compute architectures, where it sits very close to the processor in a multi-level (L1, L2, L3) hierarchy. Its role is to store frequently used data and instructions for quick retrieval, with L1 being the fastest level. SRAM bit-density scaling, however, has been slowing for some time, and its bit cells increasingly suffer from standby power issues.
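To make the hierarchy concrete, below is a minimal, hypothetical Python sketch (not from the article or imec) of how a read request falls through L1, L2, and L3 before reaching main memory. The class names, capacities, and the naive eviction policy are illustrative assumptions only, chosen to show why frequently used data ends up served from the fastest level.

```python
# Illustrative multi-level cache lookup (hypothetical example).
# Each level is checked in order; on a miss everywhere, the data is
# fetched from main memory and filled into every level on the way back.

class CacheLevel:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # number of cache lines (hypothetical size)
        self.lines = {}               # address -> data

    def lookup(self, addr):
        return self.lines.get(addr)   # None on a miss

    def fill(self, addr, data):
        if len(self.lines) >= self.capacity:
            # Naive eviction (drop the oldest entry) purely for illustration
            self.lines.pop(next(iter(self.lines)))
        self.lines[addr] = data


def read(addr, hierarchy, main_memory):
    """Walk L1 -> L2 -> L3; fall back to main memory on a full miss."""
    for level in hierarchy:
        data = level.lookup(addr)
        if data is not None:
            return data, level.name   # hit at this level
    data = main_memory[addr]          # miss in all cache levels
    for level in hierarchy:
        level.fill(addr, data)        # simplified inclusive fill
    return data, "DRAM"


if __name__ == "__main__":
    l1, l2, l3 = CacheLevel("L1", 4), CacheLevel("L2", 16), CacheLevel("L3", 64)
    memory = {addr: addr * 10 for addr in range(256)}
    print(read(7, [l1, l2, l3], memory))   # first access: served from DRAM
    print(read(7, [l1, l2, l3], memory))   # second access: L1 hit
```

Running the sketch shows the first access being served from main memory and the repeat access hitting in L1, which is the behavior SRAM (and any candidate replacement such as SOT-MRAM) must deliver at cache speeds.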
Spin-orbit torque MRAM (SOT-MRAM) offers several advantages: low standby power consumption, GHz-level switching (write) speeds, negligible leakage, practically unlimited endurance, high reliability, and scalability. For these reasons, the industry is increasingly evaluating SOT-MRAM as a promising alternative to SRAM in embedded last-level cache applications.
To read the full article, click here