Solve SoC Bottlenecks with Smart Local Memory in AI/ML Subsystems
In today’s disaggregated electronics supply chain, (1) the application software developer, (2) the ML model developer, (3) the device maker, (4) the SoC design team, and (5) the NPU IP vendor often work for as many as five different companies. It can be difficult or impossible for the SoC team to know or predict actual AI/ML workloads and full system behaviors two or three years before actual deployment. How, then, can that SoC team make good choices when provisioning compute engines and adequate memory resources for an unknown future, without defaulting to “Max TOPS / Min Area”?
There has to be a smarter way to eliminate bottlenecks and determine the optimum local memory for AI/ML subsystems.
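To make the trade-off concrete, here is a minimal sketch (not from the article) of the kind of first-order analysis an SoC team might run: it sweeps candidate local-SRAM sizes against a few hypothetical future workloads and estimates the DRAM traffic each combination would generate under a crude fits-or-spills model. The workload names, byte counts, and the traffic model itself are all invented for illustration.

```python
# Illustrative sketch: sweep candidate on-chip SRAM sizes against
# hypothetical AI/ML workloads and estimate resulting DRAM traffic.
# All workloads and figures below are made-up examples.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    weight_bytes: int       # total parameter footprint
    activation_bytes: int   # peak intermediate-tensor footprint

# Hypothetical workloads the SoC might have to run years after tape-out.
WORKLOADS = [
    Workload("keyword_spotting",  weight_bytes=250_000,    activation_bytes=64_000),
    Workload("mobile_classifier", weight_bytes=4_200_000,  activation_bytes=1_500_000),
    Workload("small_transformer", weight_bytes=25_000_000, activation_bytes=6_000_000),
]

def dram_traffic_per_inference(wl: Workload, sram_bytes: int) -> int:
    """Crude model: data resident in local SRAM is fetched from DRAM once
    and reused; anything that spills is assumed to be re-fetched once more
    per inference pass."""
    footprint = wl.weight_bytes + wl.activation_bytes
    spilled = max(footprint - sram_bytes, 0)
    return footprint + spilled

for sram_kb in (256, 512, 1024, 2048, 4096, 8192):
    sram = sram_kb * 1024
    total = sum(dram_traffic_per_inference(w, sram) for w in WORKLOADS)
    print(f"{sram_kb:>5} KB SRAM -> {total / 1e6:6.1f} MB DRAM traffic per inference sweep")
```

Even a toy model like this makes the bottleneck visible: below a certain SRAM size, spill traffic to DRAM grows quickly, which is exactly the kind of system-level behavior that a “Max TOPS / Min Area” default ignores.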