Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing perspectives on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAMs and memory systems. As the performance requirements of AI/ML workloads continue to grow rapidly, so does the importance of memory.
In fact, we’re seeing demands for “all of the above” when it comes to memory for AI, specifically:
To read the full article, click here