Focus on Memory at AI Hardware Summit
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing perspectives on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAM and memory systems. As the performance requirements of AI/ML workloads continue to grow rapidly, so does the importance of memory.
In fact, we’re seeing demands for “all of the above” when it comes to memory for AI, specifically:
To read the full article, click here