Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing perspectives on the importance of memory for AI/ML. Our discussion focused on the challenges and opportunities for DRAM and memory systems. As the performance requirements of AI/ML continue to grow rapidly, so does the importance of memory.
In fact, we’re seeing demands for “all of the above” when it comes to memory for AI, specifically:
To read the full article, click here