Scaling Performance In AI Systems
Improving performance in AI designs involves the usual tradeoffs between power and performance, but achieving a good balance is becoming much more challenging. There is more data to process, new heterogeneous architectures to contend with, and much higher utilization rates. Andy Nightingale, vice president of product management and marketing at Arteris, talks with Semiconductor Engineering about where the bottlenecks are, how to minimize them in data-intensive workloads across a variety of vertical markets, and why networks-on-chip (NoCs) are essential for moving and managing this data and getting chips to market on time.