Bringing SOT-MRAM Tech Closer to Cache Memory
By Farrukh Yasin, Van Dai Nguyen, Siddharth Rao and Gouri Sankar Kar (imec)
EETimes (December 12, 2024)
For many decades, ultrafast but volatile SRAM has served as the embedded cache memory in high-performance computing architectures, where it sits very close to the processor in a multi-level (L1, L2, L3) hierarchy. Its role is to hold frequently used data and instructions for quick retrieval, with L1 being the fastest of the cache levels. SRAM bit-density scaling, however, has been slowing for some time, and its bit cells increasingly suffer from standby leakage power.
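As a rough illustration of why each level of the hierarchy matters, the sketch below models average memory access time (AMAT) for a serial L1/L2/L3 lookup. All hit rates and latencies are hypothetical placeholder values chosen for illustration, not figures from the article.

# Illustrative AMAT model for a three-level cache hierarchy.
# Every latency (in CPU cycles) and hit rate below is an assumed placeholder.

def amat(levels, dram_latency_cycles):
    """levels: list of (hit_rate, latency_cycles) ordered from L1 to the last-level cache."""
    total = 0.0
    reach = 1.0                                   # fraction of accesses that reach this level
    for hit_rate, latency in levels:
        total += reach * latency                  # accesses reaching this level pay its lookup latency
        reach *= (1.0 - hit_rate)                 # misses continue to the next level
    return total + reach * dram_latency_cycles    # remaining misses go to main memory

# Hypothetical numbers: L1 ~4 cycles, L2 ~12, L3 ~40, DRAM ~200 cycles.
print(amat([(0.95, 4), (0.80, 12), (0.60, 40)], 200))   # ~5.8 cycles on average

The point of the toy model is simply that the last-level cache absorbs most accesses that would otherwise pay the full DRAM penalty, which is why its density, standby power, and speed matter so much.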
Spin-orbit torque MRAM (SOT-MRAM) offers several advantages: low standby power consumption, GHz-range switching (write) speeds, negligible leakage, practically unlimited endurance, high reliability, and good scalability. For these reasons, the industry is increasingly evaluating SOT-MRAM as a promising alternative to SRAM for embedded last-level cache applications.
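To make the standby-power argument concrete, here is a minimal sketch comparing the idle-interval energy of a volatile cache that must stay powered to retain data with that of a non-volatile array that can be power-gated. Every parameter value is an assumed placeholder for illustration, not a measurement from the article or from imec's work.

# Rough standby-energy comparison; all numbers are assumed placeholders.

def standby_energy_volatile(leakage_power_w, idle_time_s):
    # Volatile SRAM must remain powered to retain data, so it leaks for the whole interval.
    return leakage_power_w * idle_time_s

def standby_energy_nonvolatile(wakeup_energy_j):
    # A non-volatile array (e.g. SOT-MRAM) can be power-gated while idle and
    # only pays a wake-up cost when it is accessed again.
    return wakeup_energy_j

idle = 1e-3                                          # 1 ms idle interval (assumed)
print(standby_energy_volatile(5e-3, idle))           # assumed 5 mW leakage  -> 5 uJ
print(standby_energy_nonvolatile(50e-9))             # assumed 50 nJ wake-up -> 0.05 uJ

The longer the idle interval, the more the non-volatile option benefits, since its standby cost is fixed rather than proportional to time.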
To read the full article, click here