Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing views on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAM and memory systems. As the performance requirements for AI/ML continue to grow rapidly, so does the importance of memory.
In fact, when it comes to memory for AI, we’re seeing demands for “all of the above.”