Exploring AI / Machine Learning Implementations with Stratus HLS
A lot of AI design is done in software and, while much of it will remain there, a growing number of designs are finding their way into hardware. There are several reasons for this, chief among them the goals of lower power and higher performance for critical parts of the AI pipeline. Imagine, for example, that you need a dramatically higher rate of object recognition in an automated-driving application.
Implementing an AI application in hardware presents some key challenges for the designer:
- Need to explore multiple algorithms and architectures, typically using a framework such as TensorFlow or Caffe
- Need to quantify power, performance, area, and accuracy trade-offs of various architectures
- Need a rapid path from the models to production silicon
In this article, I'll describe a flow that starts in the TensorFlow environment, moves to abstract C++ targeted at Stratus HLS, and then proceeds to a concrete hardware implementation.
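As a taste of what the middle step can look like, here is a minimal, hypothetical sketch of a single dense (fully connected) layer re-expressed as fixed-size, loop-based C++ of the kind an HLS tool can synthesize. The function name, dimensions, and integer data types are illustrative assumptions, not taken from the article or from the Stratus documentation; the weights and bias would come from the trained TensorFlow model.

```cpp
#include <cstdint>

constexpr int IN_DIM  = 64;   // assumed input width
constexpr int OUT_DIM = 32;   // assumed output width

// One dense layer with ReLU, written with static array sizes and integer
// arithmetic so an HLS tool can unroll or pipeline the loops.
void dense_relu(const int16_t in[IN_DIM],
                const int16_t weights[OUT_DIM][IN_DIM],
                const int32_t bias[OUT_DIM],
                int16_t out[OUT_DIM])
{
    for (int o = 0; o < OUT_DIM; ++o) {
        int32_t acc = bias[o];
        for (int i = 0; i < IN_DIM; ++i) {
            acc += static_cast<int32_t>(in[i]) * weights[o][i];  // MAC
        }
        // ReLU, then rescale back to the 16-bit activation type
        out[o] = acc > 0 ? static_cast<int16_t>(acc >> 8) : 0;
    }
}
```

Writing the layer this way, with fixed dimensions and quantized integer types rather than dynamic tensors and floats, is what lets an HLS tool explore the power, performance, area, and accuracy trade-offs listed above by varying loop unrolling, pipelining, and bit widths.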