Exploring AI / Machine Learning Implementations with Stratus HLS
A lot of AI design is done in software and, while much of it will remain there, a growing number of designs are finding their way into hardware. There are multiple reasons for this, including the important goals of lower power or higher performance for critical parts of the AI pipeline. Imagine, for example, that you need a dramatically higher rate of object recognition in an automated-driving application.
Implementing an AI application in hardware presents some key challenges for the designer:
- Need to explore multiple algorithms and architectures, typically using a framework such as TensorFlow or Caffe
- Need to quantify the power, performance, area, and accuracy trade-offs of candidate architectures (the accuracy side is illustrated in the sketch after this list)
- Need a rapid path from the models to production silicon
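To make the accuracy side of that trade-off concrete, here is a minimal C++ sketch that compares a float32 dot product (the reference a framework such as TensorFlow would compute) against an int8 fixed-point version of the same layer. The helper names (quantize, dense_f32, dense_i8), the scale factor, and the sample data are all illustrative assumptions, not taken from the article.

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical symmetric int8 quantization with a fixed scale factor.
static const float SCALE = 127.0f;

int8_t quantize(float x) {
    float q = std::round(x * SCALE);
    if (q > 127.0f)  q = 127.0f;    // saturate to the int8 range
    if (q < -128.0f) q = -128.0f;
    return static_cast<int8_t>(q);
}

// Reference float32 dot product, as the trained model would compute it.
float dense_f32(const std::vector<float>& w, const std::vector<float>& x) {
    float acc = 0.0f;
    for (size_t i = 0; i < w.size(); ++i) acc += w[i] * x[i];
    return acc;
}

// Candidate int8 datapath: integer multiply-accumulate, rescaled for comparison.
float dense_i8(const std::vector<float>& w, const std::vector<float>& x) {
    int32_t acc = 0;
    for (size_t i = 0; i < w.size(); ++i)
        acc += static_cast<int32_t>(quantize(w[i])) * quantize(x[i]);
    return acc / (SCALE * SCALE);
}

int main() {
    std::vector<float> w = {0.12f, -0.57f, 0.33f, 0.91f};
    std::vector<float> x = {0.50f, 0.25f, -0.75f, 0.10f};
    float ref = dense_f32(w, x);
    float q   = dense_i8(w, x);
    std::printf("float32 = %f, int8 = %f, abs error = %f\n",
                ref, q, std::fabs(ref - q));
    return 0;
}
```

In a real exploration loop, a comparison like this would be run over a validation set for each candidate bit-width, so the accuracy cost of a smaller datapath can be weighed against its power and area savings.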
In this article, I'll describe a flow that starts in the TensorFlow environment, moves to abstract C++ targeted at Stratus HLS, and then proceeds to a concrete hardware implementation.
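As a rough illustration of the middle step, here is a minimal SystemC sketch of the kind of abstract, fixed-point multiply-accumulate block one might hand to Stratus HLS. Stratus consumes SystemC, but the module name, port widths, and valid-flag handshake here are my own assumptions; a production design would typically use the vendor's interface libraries and synthesis directives, and a testbench with sc_main is needed to simulate it.

```cpp
#include <systemc.h>

// Hypothetical fixed-point MAC block written in HLS-friendly SystemC.
// Names and bit-widths are illustrative, not taken from the article.
SC_MODULE(mac_unit) {
    sc_in<bool>         clk;
    sc_in<bool>         rst;
    sc_in<sc_int<8> >   a, b;        // quantized operands
    sc_in<bool>         in_valid;
    sc_out<sc_int<32> > acc;         // running accumulator
    sc_out<bool>        out_valid;

    void run() {
        // Reset state
        acc.write(0);
        out_valid.write(false);
        wait();

        sc_int<32> sum = 0;
        while (true) {
            if (in_valid.read()) {
                sum += a.read() * b.read();  // one multiply-accumulate
                acc.write(sum);
                out_valid.write(true);
            } else {
                out_valid.write(false);
            }
            wait();                          // advance one clock cycle
        }
    }

    SC_CTOR(mac_unit) {
        SC_CTHREAD(run, clk.pos());
        reset_signal_is(rst, true);
    }
};
```

The point of writing the kernel this way is that the algorithm stays readable and close to its software form, while the explicit bit-widths and clocked loop give the HLS tool a concrete structure to schedule and map into RTL.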