Memory Systems for AI: Part 1
There has been quite a lot of recent news about domain-specific processors being designed for the artificial intelligence (AI) market. Interestingly, many of the techniques used in modern AI chips and applications have actually been around for several decades. Yet neural networks never really took off during the previous wave of interest in AI, which spanned the 1980s and 1990s. The question is why.
The chart above provides some insight into why AI technology remained relatively static for so many years. Back in the 1980s and 1990s, processors (CPUs) simply weren't fast enough to adequately handle AI workloads. In addition, memory performance wasn't yet good enough for neural networks and other modern techniques to displace conventional approaches, which consequently remained dominant throughout that period.