Workload-Specific Hardware Accelerators
Workload-specific hardware accelerators are becoming essential in large data centers for two reasons. First, general-purpose processing elements cannot keep up with workload demands or latency requirements. Second, accelerators must be extremely power-efficient, because data centers face limited electricity supply from the grid and high cooling costs. Sharad Chole, chief scientist and co-founder of Expedera, talks with Semiconductor Engineering about the role of neural processing units inside AI data centers, tradeoffs between performance and accuracy, and new challenges with chiplet-based multi-die assemblies.