Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing views on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAM and memory systems. As the performance requirements of AI/ML continue to grow rapidly, so does the importance of memory.
In fact, we’re seeing demands for “all of the above” when it comes to memory for AI.
To read the full article, click here