CEVA Software Framework Brings Deep Learning to Embedded Vision Systems
As Jeff Bier has mentioned in several of his recent columns, deep learning algorithms have gained prominence in computer vision and other fields where there's a need to extract insights from ambiguous data. Convolutional neural networks (CNNs) – massively parallel algorithms made up of layers of computation nodes – have shown particularly impressive results on challenging problems that thwart traditional feature-based techniques, such as identifying non-uniform objects or operating in sub-optimal viewing conditions. However, as with many emerging technologies, much of the R&D work on CNNs is being undertaken on resource-rich PC platforms. The just-introduced CEVA Deep Neural Network (CDNN) software framework aims to optimize CNN code and data for more modestly equipped embedded systems, specifically those based on the company's latest CEVA-XM4 vision processor core.
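As a rough illustration of the kind of data optimization such a conversion framework performs when moving a pre-trained floating-point network onto an embedded target, the sketch below shows symmetric 8-bit weight quantization. This is a generic technique, not the CDNN toolchain's actual API; all function names here are hypothetical.

```python
# Illustrative sketch only: symmetric 8-bit weight quantization, the general
# technique embedded CNN conversion tools use to shrink floating-point models.
# These names are hypothetical and do not come from the CDNN toolchain.
import numpy as np

def quantize_weights_int8(weights: np.ndarray):
    """Map floating-point weights to int8 values plus a per-tensor scale."""
    max_abs = np.max(np.abs(weights))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate floating-point tensor for accuracy checks."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    # A toy 3x3 convolution kernel standing in for a trained layer's weights.
    kernel = np.random.randn(3, 3).astype(np.float32)
    q_kernel, scale = quantize_weights_int8(kernel)
    error = np.max(np.abs(kernel - dequantize(q_kernel, scale)))
    print(f"scale={scale:.6f}, max reconstruction error={error:.6f}")
```

Storing weights as 8-bit integers with a shared scale factor cuts memory footprint and bandwidth by roughly 4x versus 32-bit floats, which is the kind of saving that matters on a modestly equipped embedded vision processor.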
To read the full article, click here