SiFive Intelligence X280 as AI Compute Host: Google Datacenter Case Study
At the AI Hardware Summit in Santa Clara on September 14th, Krste Asanovic, SiFive Co-Founder and Chief Architect, took the stage with Cliff Young, Google TPU Architect and MLPerf Co-Founder, to reveal how the latest SiFive Intelligence™ X280 processor, with the new SiFive Vector Coprocessor Interface Extension (VCIX), is being used as the AI compute host alongside the Google MXU (systolic matrix multiplier) accelerator in the datacenter, providing flexible programmability. They also discussed why this type of architecture is becoming popular for AI/ML workloads and how, in this configuration, SiFive supplies the essential compute capabilities so Google can focus on deep-learning SoC development. A video of the talk is expected to be available in early October.
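The value of this pairing is easiest to see as a division of labor: the general-purpose, vector-capable host runs the flexible parts of a model, while dense matrix multiplication is handed to the tightly coupled MXU. The sketch below is a conceptual illustration only, not SiFive or Google code; mxu_matmul and host_relu are hypothetical names, and the plain function call stands in for operands that a real VCIX-attached unit would receive over the coprocessor interface.

```c
/*
 * Conceptual sketch (not SiFive/Google code): a flexible host handles
 * general vector work (here, a ReLU activation) while dense matrix
 * multiplication is dispatched to a tightly coupled accelerator.
 * mxu_matmul() is a hypothetical stand-in for the MXU.
 */
#include <stdio.h>

#define M 2
#define K 3
#define N 2

/* Stand-in for the systolic matrix unit: C = A (MxK) * B (KxN). */
static void mxu_matmul(const float A[M][K], const float B[K][N], float C[M][N]) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc;
        }
}

/* Host-side "flexible" work: elementwise ReLU, the kind of general
 * vector code that stays on the CPU's vector unit. */
static void host_relu(float *x, int n) {
    for (int i = 0; i < n; i++)
        if (x[i] < 0.0f) x[i] = 0.0f;
}

int main(void) {
    float A[M][K] = {{1, -2, 3}, {4, 5, -6}};
    float B[K][N] = {{1, 0}, {0, 1}, {1, 1}};
    float C[M][N];

    mxu_matmul(A, B, C);         /* step offloaded to the accelerator */
    host_relu(&C[0][0], M * N);  /* flexible step kept on the host */

    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++)
            printf("%6.1f ", C[i][j]);
        printf("\n");
    }
    return 0;
}
```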
Related Semiconductor IP
- Second-generation 64-bit RISC-V core complex with an out-of-order pipeline
- Compact Embedded RISC-V Processor
- Multi-core capable 64-bit RISC-V CPU with vector extensions
- 64-bit RISC-V Multicore Processor with 2048-bit VLEN and AMM
- RISC-V AI Acceleration Platform - Scalable, standards-aligned soft chiplet IP
Related Blogs
- Introducing the Latest SiFive® Intelligence™ X280 Processor Innovation - the Vector Coprocessor Interface Extension (VCIX)
- Case Study: Getting More Functionality from Existing Chips
- Case Study: Choosing the Right Benchmarks for the Job