The role of cache in AI processor design
By Frank Schirrmeister, Arteris
EDN (March 22, 2024)
Artificial intelligence (AI) is making its presence felt everywhere these days, from the data centers at the Internet's core to sensors and handheld devices like smartphones at the Internet's edge, and at every point in between, such as autonomous robots and vehicles. For the purposes of this article, we take the term AI to encompass machine learning and deep learning.
There are two main aspects to AI: training, which is predominantly performed in data centers, and inferencing, which may be performed anywhere from the cloud down to the humblest AI-equipped sensor.
AI is a greedy consumer of two things: computational processing power and data. On the processing side, OpenAI, the creator of ChatGPT, published the report AI and Compute, showing that since 2012 the amount of compute used in the largest AI training runs has doubled every 3.4 months, with no indication of slowing down.
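To put that doubling rate in perspective, here is a short back-of-envelope calculation (our own sketch, not from the report) showing how quickly such growth compounds:

```python
# Illustrative sketch: compound growth implied by a 3.4-month doubling time.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Multiplicative increase in training compute after `months`."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"After 1 year:  ~{growth_factor(12):.1f}x")   # ~11.6x
print(f"After 5 years: ~{growth_factor(60):,.0f}x")  # ~205,000x
```

At that pace, compute demand grows by roughly an order of magnitude every year, far faster than the cadence of process-node improvements.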
With respect to memory, a large generative AI (GenAI) model like GPT-4, the model behind ChatGPT, may have more than a trillion parameters, all of which must be readily accessible in a way that allows the system to handle numerous requests simultaneously. On top of this sit the vast amounts of data that must be streamed and processed.
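A quick sizing exercise (our own illustrative numbers, not from the article) makes the memory pressure concrete: merely storing the weights of a trillion-parameter model runs to terabytes, orders of magnitude beyond what fits in on-chip SRAM.

```python
# Illustrative sketch: weight-storage footprint of a trillion-parameter
# model at common numeric precisions (assumed values, not from the article).
PARAMS = 1e12  # one trillion parameters

BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "INT8": 1}

for fmt, nbytes in BYTES_PER_PARAM.items():
    terabytes = PARAMS * nbytes / 1e12
    print(f"{fmt}: ~{terabytes:.0f} TB of weights")
# FP32: ~4 TB; FP16/BF16: ~2 TB; INT8: ~1 TB
```

Serving many simultaneous requests against a working set of this size is exactly the kind of bandwidth and latency problem that a well-designed cache hierarchy exists to mitigate.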