Neural Network Efficiency with Embedded FPGAs
The traditional metrics for evaluating IP are performance, power, and area, commonly abbreviated as PPA. Viewed in isolation, each PPA measure can be misleading; for example, design constraints based purely on performance, with no regard for the associated power dissipation or circuit area, are increasingly rare. A related set of characteristics has grown in importance, especially with the increasing integration of deep neural network (DNN) circuitry into SoCs: energy efficiency and area efficiency, usually expressed as performance per watt and performance per area.
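As a concrete illustration of these two efficiency measures, the short Python sketch below computes performance per watt and performance per area for a set of implementation options. All of the throughput, power, and area figures here are invented placeholders for illustration only, not measured data from any of the sources discussed.

```python
# Illustrative sketch of the two efficiency metrics described above.
# All figures are hypothetical placeholders: throughput in GOPS,
# power in watts, die area in mm^2.

options = {
    "CPU core":   {"gops": 10.0,  "watts": 5.0,  "mm2": 10.0},
    "GPU":        {"gops": 500.0, "watts": 50.0, "mm2": 100.0},
    "FPGA":       {"gops": 100.0, "watts": 10.0, "mm2": 50.0},
    "Hard-wired": {"gops": 200.0, "watts": 2.0,  "mm2": 5.0},
}

for name, m in options.items():
    gops_per_watt = m["gops"] / m["watts"]  # energy efficiency
    gops_per_mm2 = m["gops"] / m["mm2"]     # area efficiency
    print(f"{name:10s}  {gops_per_watt:7.1f} GOPS/W  "
          f"{gops_per_mm2:7.1f} GOPS/mm^2")
```

With numbers like these, a hard-wired block would score highest on both measures, which is the general trend the efficiency comparison cited below describes.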
The DNN implementation options commonly considered are: a software-programmed (general-purpose) microprocessor core, a programmed graphics processing unit (GPU), a field-programmable gate array (FPGA), and a hard-wired logic block. In 2002, Brodersen and Zhang of UC Berkeley published a technical report comparing the efficiency of these options for digital signal processing algorithms. [1]