Exploring AI / Machine Learning Implementations with Stratus HLS
A lot of AI design is done in software and, while much of it will remain there, an increasing number of designs are finding their way into hardware. There are multiple reasons for this, including the important goals of lower power and higher performance for critical parts of the AI process. Imagine, for example, that you need a dramatically improved rate of object recognition in an automated-driving application.
Implementing an AI application in hardware presents some key challenges for the designer:
- Need to explore multiple algorithms and architectures, typically using a framework such as TensorFlow or Caffe
- Need to quantify the power, performance, area, and accuracy trade-offs of various architectures
- Need a rapid path from the models to production silicon
In this article, I'll describe a flow that starts in the TensorFlow environment, moves to abstract C++ targeted at Stratus HLS, and then proceeds to a concrete hardware implementation.
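As a rough illustration of the "abstract C++" stage, the sketch below shows the kind of fixed-point, statically sized kernel that a high-level synthesis tool can turn into hardware. The layer dimensions, 8-bit quantization, and function name are assumptions made for this example, not details from the article; in the real flow, the weights would come from the trained TensorFlow model and tool-specific directives (pipelining, unrolling, memory mapping) would drive the microarchitecture exploration.

```cpp
#include <cstdint>

// Minimal sketch of a fully connected layer with ReLU in a synthesizable style.
// The sizes and the 8-bit weight / 32-bit accumulator quantization below are
// illustrative assumptions, not values taken from the article.
constexpr int IN_DIM  = 64;
constexpr int OUT_DIM = 32;

// weights and bias would be exported from the trained TensorFlow model
void dense_relu(const int8_t  in[IN_DIM],
                const int8_t  weights[OUT_DIM][IN_DIM],
                const int32_t bias[OUT_DIM],
                int8_t        out[OUT_DIM])
{
    for (int o = 0; o < OUT_DIM; ++o) {        // one output neuron per iteration
        int32_t acc = bias[o];
        for (int i = 0; i < IN_DIM; ++i) {     // multiply-accumulate over inputs
            acc += static_cast<int32_t>(in[i]) * weights[o][i];
        }
        if (acc < 0) acc = 0;                  // ReLU
        acc >>= 8;                             // assumed requantization scale
        out[o] = static_cast<int8_t>(acc > 127 ? 127 : acc);  // saturate to int8
    }
}
```

The static loop bounds and integer arithmetic are what make a kernel like this amenable to synthesis, and parameters such as the bit widths are exactly the knobs one would sweep when exploring the power, performance, area, and accuracy trade-offs mentioned above.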
To read the full article, click here