CEVA Software Framework Brings Deep Learning to Embedded Vision Systems
As Jeff Bier has mentioned in several of his recent columns, deep learning algorithms have gained prominence in computer vision and other fields where there's a need to extract insights from ambiguous data. Convolutional neural networks (CNNs) – massively parallel algorithms made up of layers of computation nodes – have shown particularly impressive results on challenging problems that thwart traditional feature-based techniques, such as identifying non-uniform objects or coping with sub-optimal viewing conditions. However, as with many emerging technologies, much of the R&D work on CNNs is being undertaken on resource-rich PC platforms. CEVA's just-introduced CEVA Deep Neural Network (CDNN) software framework aspires to optimize CNN code and data for more modestly equipped embedded systems, specifically those based on the company's latest XM4 vision processor core.
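The article doesn't spell out how that optimization is performed, but a common step when retargeting an offline-trained network to a resource-constrained embedded processor is converting each layer's floating-point weights to narrow fixed-point values. The C sketch below illustrates that general idea with a hypothetical per-layer 8-bit quantizer; it is not CEVA's implementation or the CDNN API, just an illustration of the kind of transformation such a framework might apply.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: quantize one layer's floating-point weights to signed
 * 8-bit values using a single per-layer scale factor. The scale is returned
 * so a runtime could map integer accumulator results back to real values. */
static float quantize_layer_int8(const float *weights, int8_t *q_weights, size_t count)
{
    /* Find the largest weight magnitude in the layer. */
    float max_abs = 0.0f;
    for (size_t i = 0; i < count; ++i) {
        float a = fabsf(weights[i]);
        if (a > max_abs)
            max_abs = a;
    }

    /* Map [-max_abs, +max_abs] onto the int8 range [-127, 127]. */
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (size_t i = 0; i < count; ++i) {
        float q = roundf(weights[i] / scale);
        if (q > 127.0f)  q = 127.0f;
        if (q < -127.0f) q = -127.0f;
        q_weights[i] = (int8_t)q;
    }
    return scale;
}

int main(void)
{
    /* Hypothetical layer weights, as they might come from an offline-trained model. */
    const float weights[] = { 0.12f, -0.87f, 0.50f, -0.03f, 0.99f };
    int8_t q_weights[5];

    float scale = quantize_layer_int8(weights, q_weights, 5);
    printf("scale = %f\n", scale);
    for (int i = 0; i < 5; ++i)
        printf("w[%d] = %+.2f -> %d\n", i, weights[i], q_weights[i]);
    return 0;
}
```

In practice, a production tool would also quantize activations, fold batch-normalization parameters, and validate accuracy against the original floating-point model, but the per-layer scaling shown here is the basic trade-off between dynamic range and precision that embedded CNN deployment has to manage.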
To read the full article, click here