Partitioning to optimize AI inference for multi-core platforms
By Rami Drucker, Ceva
EDN (January 8, 2024)
Not so long ago, artificial intelligence (AI) inference at the edge was a novelty easily supported by a single neural processing unit (NPU) IP accelerator embedded in the edge device. Expectations have risen rapidly since then. Now we want embedded AI inference to handle multiple cameras, complex scene segmentation, voice recognition with intelligent noise suppression, fusion between multiple sensors, and even very large and complex generative AI models.
Such applications can deliver acceptable throughput for edge products only when run on multi-core AI processors. NPU IP accelerators are already available to meet this need, scaling to eight or more parallel cores and capable of handling multiple inference tasks in parallel. But how should you partition the AI inference workloads expected for your product to take maximum advantage of all that horsepower?
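As a rough illustration of one common partitioning strategy, task-level partitioning, where independent inference jobs such as per-camera streams are spread across separate cores, here is a minimal Python sketch. The core count, the run_on_core dispatcher, and the example workload are hypothetical placeholders for this article, not Ceva APIs; a real NPU SDK would bind each compiled network to a specific core.

```python
from concurrent.futures import ThreadPoolExecutor

NUM_NPU_CORES = 8  # assumption: an 8-core NPU, matching the scale mentioned above


def run_on_core(core_id: int, task_name: str, frame: bytes) -> str:
    """Hypothetical stand-in for submitting one inference job to a given NPU core.

    A real SDK call would hand `frame` to a network compiled for and bound to
    `core_id`; here we only report what would run where.
    """
    return f"core {core_id}: {task_name} processed {len(frame)} bytes"


def partition_round_robin(tasks):
    """Task-level partitioning: independent jobs are assigned to cores
    round-robin, which keeps all cores busy when jobs have similar cost."""
    with ThreadPoolExecutor(max_workers=NUM_NPU_CORES) as pool:
        futures = [
            pool.submit(run_on_core, i % NUM_NPU_CORES, name, frame)
            for i, (name, frame) in enumerate(tasks)
        ]
        return [f.result() for f in futures]


if __name__ == "__main__":
    # Example workload: four camera streams plus a voice pipeline,
    # each an independent inference task.
    jobs = [(f"camera_{n}_segmentation", b"\x00" * 1024) for n in range(4)]
    jobs.append(("voice_noise_suppression", b"\x00" * 256))
    for result in partition_round_robin(jobs):
        print(result)
```

Task-level partitioning suits workloads made of many independent jobs; splitting a single large model across cores (layer- or tensor-level partitioning) is a different trade-off the full article explores.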
To read the full article, click here