MoSys combines design, process and test to break the 2 billion accesses per second barrier
MoSys has created a new serial memory, the Bandwidth Engine IC, that leverages a highly efficient 10G serial interface and an innovative architecture to perform over 2 billion memory accesses per second. This access rate is needed to support the data rates of 100GE (100 Gigabit Ethernet) and 100 Gb/s aggregate line cards. The Bandwidth Engine IC embeds intelligence, in the form of ALUs and an optimized memory architecture, that accelerates networking operations such as statistics gathering, and it was designed for applications where high data rates, 10-year expected lifetimes, and government-mandated power reductions impose restrictive specifications. The Bandwidth Engine distinguishes itself from traditional networking memory devices by emphasizing fast, intelligent access, which suits packet classification applications well. Achieving this required MoSys to take a highly collaborative design approach: exacting product definition, tightly written RTL code, a high-speed, low-latency SerDes, MoSys's core 1T-SRAM technology, and innovative layout and package design were combined to reach the target access rate. The result is a device that eases SoC packaging and system design challenges through its high-speed serial interface. Overall system performance increases while power and cost are reduced, because banks of traditional memory devices are consolidated into a single Bandwidth Engine.
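As a rough illustration of why 100GE pushes a memory toward the 2 billion accesses per second mark, the sketch below works through the line-rate arithmetic. The per-packet access count of 14 (lookups, classification, statistics updates) is an assumed figure chosen for illustration, not a MoSys specification.

```c
/*
 * Back-of-envelope check (illustrative assumptions, not MoSys figures):
 * memory access rate needed by a 100GE line card at worst-case packet rate.
 */
#include <stdio.h>

int main(void)
{
    const double line_rate_bps   = 100e9;  /* 100 Gb/s Ethernet */
    const double min_frame_bytes = 64.0;   /* minimum Ethernet frame */
    const double overhead_bytes  = 20.0;   /* preamble + inter-frame gap */

    /* Worst-case packet rate: roughly 148.8 Mpps for 100GE */
    double pps = line_rate_bps / ((min_frame_bytes + overhead_bytes) * 8.0);

    /* Assumed memory accesses per packet (lookups, classification,
       statistics updates) -- purely an illustrative figure. */
    const double accesses_per_packet = 14.0;

    double accesses_per_sec = pps * accesses_per_packet;

    printf("packet rate       : %.1f Mpps\n", pps / 1e6);
    printf("memory accesses/s : %.2f billion\n", accesses_per_sec / 1e9);
    return 0;
}
```

Under these assumptions the result comes out just above 2 billion accesses per second, which is consistent with the access rate the article cites for 100GE and 100 Gb/s aggregate line cards.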