CEVA Software Framework Brings Deep Learning to Embedded Vision Systems
As Jeff Bier has mentioned in several of his recent columns, deep learning algorithms have gained prominence in computer vision and other fields where insights must be extracted from ambiguous data. Convolutional neural networks (CNNs) – massively parallel algorithms made up of layers of computation nodes – have shown particularly impressive results on challenging problems that thwart traditional feature-based techniques, such as identifying non-uniform objects or coping with sub-optimal viewing conditions. However, as with many emerging technologies, much of the R&D work on CNNs is being undertaken on resource-rich PC platforms. CEVA's just-introduced CEVA Deep Neural Network (CDNN) software framework aims to optimize CNN code and data for more modestly equipped embedded systems, specifically those based on the company's latest CEVA-XM4 vision processor core.
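To make the embedded-optimization idea concrete, the sketch below shows a single 2D convolution layer written in 8-bit fixed-point C, the kind of quantized, memory-lean kernel an embedded CNN toolchain typically produces when retargeting a floating-point PC model to a vision DSP. The function name, parameters, and quantization scheme are illustrative assumptions for this article, not CEVA CDNN APIs.

```c
/* Illustrative sketch only: one 2D convolution layer in 8-bit fixed point.
 * Hypothetical names and parameters; this is not the CEVA CDNN API. */
#include <stdint.h>

void conv2d_q8(const int8_t *in, int in_h, int in_w,   /* input feature map   */
               const int8_t *kernel, int k,            /* k x k weights       */
               int32_t bias, int shift,                /* requantization      */
               int8_t *out)                            /* valid-mode output   */
{
    int out_h = in_h - k + 1;
    int out_w = in_w - k + 1;

    for (int y = 0; y < out_h; y++) {
        for (int x = 0; x < out_w; x++) {
            int32_t acc = bias;                        /* accumulate in 32-bit */
            for (int ky = 0; ky < k; ky++)
                for (int kx = 0; kx < k; kx++)
                    acc += (int32_t)in[(y + ky) * in_w + (x + kx)] *
                           (int32_t)kernel[ky * k + kx];
            acc >>= shift;                             /* scale back toward int8 range */
            if (acc > 127)  acc = 127;                 /* saturate                     */
            if (acc < -128) acc = -128;
            out[y * out_w + x] = (int8_t)acc;
        }
    }
}
```

Replacing floating-point multiply-accumulates with saturating integer arithmetic like this is one common way such frameworks shrink memory footprint and map well onto embedded vector DSPs, though the exact techniques CDNN uses are described in the full article.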