40G UCIe IP Advantages for AI Applications
By Aparna Tarde, Sr. Technical Product Manager and Manuel Mota, Sr. Product Manager - Synopsys
The deployment of generative AI in the devices we use every day is growing, driving demand for larger language models and higher compute performance. According to a presentation by Yole Group at the 2024 OCP Regional Summit, "For training on GPT-3 with 175 billion parameters, we estimate that between 6,000 and 8,000 A100 GPUs would have required up to a month to complete." Growing HPC and AI compute performance requirements are driving the deployment of multi-die designs, which integrate multiple heterogeneous or homogeneous dies in a single standard or advanced package. For AI workloads to be processed reliably and at a fast rate, the die-to-die interface in multi-die designs must be robust, low latency, and, most importantly, high bandwidth. This article outlines the need for 40G UCIe IP in AI data center chips leveraging multi-die designs.
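To put the bandwidth argument in concrete terms, the raw throughput of a die-to-die link scales with the per-lane data rate and the lane count. The sketch below is a back-of-envelope calculation only; the 40 Gbps per-lane figure follows from the "40G" rate in the article title, and the 64-lane module width is an illustrative assumption, not a statement of any particular UCIe configuration.

```python
# Back-of-envelope raw bandwidth for a die-to-die link.
# Assumptions (illustrative, not spec-verified values):
#   - 40 Gbps per lane, matching the 40G UCIe rate discussed here
#   - a 64-lane module width, chosen only for this example
GBPS_PER_LANE = 40
LANES_PER_MODULE = 64

raw_gbps = GBPS_PER_LANE * LANES_PER_MODULE  # aggregate line rate in Gbps
raw_gbytes_per_s = raw_gbps / 8              # convert bits/s to bytes/s

print(f"{raw_gbps} Gbps ~= {raw_gbytes_per_s:.0f} GB/s per module (raw)")
```

Note that this is raw line rate; usable bandwidth is lower once link-layer framing, CRC, and any retry overhead are accounted for.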