Exploring & Characterizing DDR Memory System Margins
Memories are central to system operation and performance. Designers need a better way to look inside the memory subsystem to ensure the system is optimized for production.
Memories are hot. At least that's what one would surmise from the recent MemCon 2014 in Santa Clara, Calif., where the exhibit floor was crowded and a full day of presentations covered all things memory.
One recurring theme at the conference was whether there is a more efficient way to explore and characterize the margins of a DDR memory subsystem. That's not so easy when the DDR subsystem (DDR controller, PHY and I/O) is embedded within a chip and is responsible for managing the data traffic flowing between the processor and external DDR memory. The DDR memory interface is normally the highest-bandwidth bus in the system, operating at multi-GHz data rates with read/write timing margins measured in picoseconds.
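As background, one common way to explore read/write timing margins from software is to sweep a delay offset around the trained point and record the pass/fail window (a "shmoo"). The sketch below illustrates that idea only; the platform hooks (`set_read_dqs_delay`, `run_memory_test`) are hypothetical stand-ins, and this is not the instrumentation approach the full article describes.

```python
# Illustrative margin-sweep (shmoo) sketch: step a read-strobe (DQS) delay
# offset across the trained center and record which offsets still pass a
# pattern test. Both hooks are hypothetical; on real hardware they would
# map to PHY delay-register writes and a memory stress test.

DELAY_STEPS = range(-32, 33)      # delay-line taps swept around the trained center

_current_offset = 0               # simulated PHY state for this stand-alone sketch

def set_read_dqs_delay(taps: int) -> None:
    """Hypothetical hook: program a DQS delay offset (in PHY delay-line taps)."""
    global _current_offset
    _current_offset = taps

def run_memory_test() -> bool:
    """Hypothetical hook: run a read/write pattern test; True if error-free.

    Simulated here with a fixed +/-18-tap eye so the script runs stand-alone.
    """
    return abs(_current_offset) <= 18

def sweep_read_margin() -> list[int]:
    """Return the list of delay offsets at which the interface still passes."""
    passing = []
    for taps in DELAY_STEPS:
        set_read_dqs_delay(taps)
        if run_memory_test():
            passing.append(taps)
    set_read_dqs_delay(0)         # restore the trained setting
    return passing

if __name__ == "__main__":
    window = sweep_read_margin()
    print(f"read eye spans taps {min(window)}..{max(window)} "
          f"({len(window)} passing steps at this voltage/frequency point)")
```

Repeating such a sweep across voltage and temperature corners is what turns a single pass/fail result into a characterized margin, which is exactly what is hard to do when the timing knobs sit inside an embedded DDR PHY.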