Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing perspectives on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAMs and memory systems. As the performance requirements of AI/ML continue to grow rapidly, so does the importance of memory.
In fact, we’re seeing demands for “all of the above” when it comes to memory for AI, specifically:
To read the full article, click here