Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing views on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAM and memory systems. As the performance requirements for AI/ML continue to grow rapidly, so does the importance of memory.
In fact, we’re seeing demands for “all of the above” when it comes to memory for AI, specifically:
To read the full article, click here.
Related Semiconductor IP
- General use, integer-N 4GHz Hybrid Phase Locked Loop on TSMC 28HPC
- JPEG XL Encoder
- LPDDR6/5X/5 PHY V2 - Intel 18A-P
- ML-KEM Key Encapsulation & ML-DSA Digital Signature Engine
- MIPI SoundWire I3S Peripheral IP
Related Blogs
- Cadence Powers AI Infra Summit '25: Memory, Interconnect, and Interface Focus
- A Focus on Mission-Critical Defense Solutions at GOMACTech
- High Bandwidth Memory (HBM) at the AI Crossroads: Customization or Standardization?
- Apple iPhone 6S: LPDDR4 arrives at Apple
Latest Blogs
- How Alternate Geometry Processing Enables Better Multi-Core GPU Scaling
- Three Ethernet Design Challenges in Industrial Automation
- Neuromorphic Computing: A Practical Path to Ultra-Efficient Edge Artificial Intelligence
- Silicon IP for the Final Frontier
- Maximizing SoC Longevity with PCIe 3.0: A Designer’s Guide