Rethinking embedded memory
Adam Kablanian, CEO, Memoir Systems
EETimes (1/8/2012 9:07 PM EST)
It’s no secret that SoC architects have always wanted more on-chip memory. In fact, it’s not uncommon for SoCs to include hundreds of integrated memory cores. To satisfy this historical demand, embedded memory vendors made design choices that favored memory capacity at the expense of memory performance. Over the years, their circuit designers have made memories denser by shrinking transistors and packing them ever closer together. In short, they pushed layout design rules to their limits in order to reduce bit cell area, and now we must deal with the performance implications.
Today, with faster processor speeds, parallel architectures, and especially multi-core processing, on-chip memory performance requirements are skyrocketing. SoC architects now need even faster memories. However, embedded memories can no longer be clocked as fast as the processors or other logic on the same chip, and the resulting performance bottlenecks now pose one of the biggest challenges in new SoC product designs.