Rethinking embedded memory
Adam Kablanian, CEO, Memoir Systems
EETimes (1/8/2012 9:07 PM EST)
It’s no secret that SoC architects have always wanted more on-chip memory. In fact, it’s not uncommon for SoCs to include hundreds of integrated memory cores. To satisfy this long-standing demand, embedded memory vendors made design choices that favored memory capacity at the expense of memory performance. Over the years, their circuit designers have made memories denser by shrinking transistors and packing them ever more tightly together. In short, they pushed layout design rules to reduce bit cell area, and now we must deal with the performance implications.
Today, due to faster processor speeds, parallel architectures, and especially multi-core processing, on-chip memory performance requirements are skyrocketing. SoC architects now need even faster memories. However, embedded memories can no longer be clocked as fast as the processors or other logic on the same chip, and the resulting performance bottlenecks now pose one of the biggest challenges in new SoC product designs.
Related Semiconductor IP
- xSPI Multiple Bus Memory Controller
- MIPI CSI-2 IP
- PCIe Gen 7 Verification IP
- WIFI 2.4G/5G Low Power Wakeup Radio IP
- Radar IP
Related White Papers
- NAND Flash memory in embedded systems
- Advanced Power Management in Embedded Memory Subsystems
- Memory solution addressing power and security problems in embedded designs
- RRAM: A New Approach to Embedded Memory