Memory Systems for AI: Part 1
There has been quite a lot of recent news about domain-specific processors designed for the artificial intelligence (AI) market. Interestingly, many of the techniques used in modern AI chips and applications have actually been around for several decades. However, neural networks didn't really take off during the last wave of interest in AI that spanned the 1980s and 1990s. The question is why.
The chart above provides some insight into why AI technology remained relatively static for so many years. Back in the 1980s and 1990s, processors (CPUs) simply weren't fast enough to adequately handle AI applications. In addition, memory performance wasn't yet good enough to enable neural networks and modern techniques to displace conventional approaches. Consequently, conventional approaches remained popular throughout that period.