Rethinking embedded memory
Adam Kablanian, CEO, Memoir Systems
EETimes (1/8/2012 9:07 PM EST)
It’s no secret that SoC architects have always wanted more on-chip memory. In fact, it’s not uncommon for SoCs to include hundreds of integrated memory cores. To satisfy this historical demand, embedded memory vendors made design choices that favored memory capacity at the expense of memory performance. Over the years, their circuit designers have made memories denser by shrinking transistors and packing them closer and closer together. In short, they defied layout design rules in order to reduce bit cell area, and now we must deal with the performance implications.
Today, due to faster processor speeds, parallel architectures, and especially multi-core processing, on-chip memory performance requirements are skyrocketing. SoC architects now need even faster memories. However, embedded memories can no longer be clocked as fast as the processors or other logic on the same chip, and the resulting performance bottlenecks now pose one of the biggest challenges in new SoC product designs.
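As a rough, illustrative sketch of that bottleneck (the clock figures below are assumed for the example, not taken from the article), if an embedded memory macro tops out below the logic clock, the memory's cycle time, not the processor's, caps how often accesses can be issued:

```python
# Illustrative only: assumed numbers, not from the article.
# If the core logic runs faster than the embedded memory macro it reads from,
# the memory's cycle time caps how often the pipeline can issue accesses.

logic_clock_ghz = 2.0    # assumed SoC logic/processor clock
memory_clock_ghz = 1.0   # assumed maximum clock of the embedded memory macro

# Accesses per second are limited by the slower of the two clocks.
peak_access_rate_ghz = min(logic_clock_ghz, memory_clock_ghz)

# Fraction of logic cycles that can actually be fed when every cycle
# needs a memory access.
utilization = peak_access_rate_ghz / logic_clock_ghz

print(f"Memory-limited access rate: {peak_access_rate_ghz} G accesses/s")
print(f"Effective logic utilization: {utilization:.0%}")
```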