Exploring AI / Machine Learning Implementations with Stratus HLS
A lot of AI design is done in software and, while much of it will remain there, a growing number of designs are finding their way into hardware. There are several reasons for this, including the important goals of lower power and higher performance for critical parts of the AI process. Imagine, for example, needing a dramatically higher rate of object recognition in an automated-driving application.
Implementing an AI application in hardware presents some key challenges for the designer:
- Need to explore multiple algorithms and architectures, typically using a framework such as TensorFlow or Caffe
- Need to evaluate the power, performance, area, and accuracy trade-offs of various architectures
- Need a rapid path from the models to production silicon
In this article, I'll describe a flow that starts in the TensorFlow environment, moves to abstract C++ targeted at Stratus HLS, and then continues into a concrete hardware implementation.
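To make the middle step of that flow more concrete, here is a minimal sketch of what the abstract C++ handed to an HLS tool might look like for a single fully connected layer with ReLU activation. The function name, layer dimensions, and use of plain float types are illustrative assumptions rather than code from the full article; in a real Stratus HLS project the floating-point math would typically be replaced with fixed-point types, and the loops would be refined with tool directives such as unrolling and pipelining.

```cpp
#include <cstddef>

// Hypothetical layer dimensions, chosen only for illustration.
constexpr std::size_t IN_DIM  = 64;
constexpr std::size_t OUT_DIM = 32;

// A plain C++ dense (fully connected) layer with ReLU activation.
// In an HLS flow, float would usually give way to fixed-point types,
// and the loops would be unrolled or pipelined via tool directives.
void dense_relu(const float in[IN_DIM],
                const float weights[OUT_DIM][IN_DIM],
                const float bias[OUT_DIM],
                float out[OUT_DIM])
{
    for (std::size_t o = 0; o < OUT_DIM; ++o) {
        float acc = bias[o];
        for (std::size_t i = 0; i < IN_DIM; ++i) {
            acc += weights[o][i] * in[i];   // multiply-accumulate
        }
        out[o] = (acc > 0.0f) ? acc : 0.0f; // ReLU
    }
}
```

Writing the layer as a simple loop nest like this keeps it easy to cross-check against the TensorFlow model while leaving the architectural exploration (parallelism, precision, memory organization) to the HLS step.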