Powering the Next Wave of AI Inference with the Rambus GDDR6 PHY at 24 Gb/s
Rambus is once again leading the way in memory performance with today's announcement that the Rambus GDDR6 PHY now delivers data rates of up to 24 Gigabits per second (Gb/s), the industry's highest for GDDR6 memory interfaces!
AI/ML inference models are growing rapidly in both size and sophistication, driving the deployment of increasingly powerful hardware at the network edge and in endpoint devices. For inference, high memory throughput and low latency are critical. GDDR6 memory offers an impressive combination of bandwidth, capacity, latency and power that makes it ideal for these applications.
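To give a rough sense of what a 24 Gb/s per-pin data rate means at the system level, here is a minimal back-of-the-envelope sketch. It assumes a standard 32-bit-wide GDDR6 device and, purely for illustration, a hypothetical accelerator with a 256-bit memory interface; neither configuration is specified in the announcement.

```python
# Back-of-the-envelope GDDR6 bandwidth math (illustrative assumptions only).

def gddr6_bandwidth_gbytes_per_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate (Gb/s) x bus width, divided by 8 bits/byte."""
    return data_rate_gbps * bus_width_bits / 8

DATA_RATE_GBPS = 24       # per-pin data rate from the announcement
DEVICE_WIDTH_BITS = 32    # a standard GDDR6 device exposes a 32-bit interface (two 16-bit channels)
BOARD_WIDTH_BITS = 256    # hypothetical accelerator using eight 32-bit devices

per_device = gddr6_bandwidth_gbytes_per_s(DATA_RATE_GBPS, DEVICE_WIDTH_BITS)
per_board = gddr6_bandwidth_gbytes_per_s(DATA_RATE_GBPS, BOARD_WIDTH_BITS)

print(f"Per 32-bit GDDR6 device: {per_device:.0f} GB/s")  # -> 96 GB/s
print(f"Per 256-bit interface:   {per_board:.0f} GB/s")   # -> 768 GB/s
```

In other words, at 24 Gb/s a single 32-bit GDDR6 device provides 96 GB/s of peak bandwidth, the kind of throughput that edge and endpoint inference accelerators are increasingly built around.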