CEVA Software Framework Brings Deep Learning to Embedded Vision Systems
As Jeff Bier has noted in several of his recent columns, deep learning algorithms have gained prominence in computer vision and other fields that need to extract insights from ambiguous data. Convolutional neural networks (CNNs), massively parallel algorithms built from layers of computation nodes, have shown particularly impressive results on challenging problems that thwart traditional feature-based techniques, such as identifying non-uniform objects or operating in sub-optimal viewing conditions. However, as with many emerging technologies, much of the R&D work on CNNs is being undertaken on resource-rich PC platforms. The just-introduced CEVA Deep Neural Network (CDNN) software framework aims to optimize CNN code and data for more modestly equipped embedded systems, specifically those based on the company's latest XM4 vision processor core.
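For readers unfamiliar with the structure described above, the sketch below shows the core operation behind one CNN "layer of computation nodes" in plain C: a small shared kernel slides over the input, and each output node accumulates a weighted neighborhood followed by a ReLU activation. It is purely illustrative; the layer sizes, kernel values, and the function name conv2d_relu are assumptions for this example and have no connection to CEVA's CDNN API or the XM4 core.

```c
/* Minimal sketch of a single CNN convolutional layer ("valid" padding,
 * one input channel, ReLU activation). Illustrative only -- not CDNN. */
#include <stdio.h>

#define IN_H  6
#define IN_W  6
#define K     3                       /* kernel size */
#define OUT_H (IN_H - K + 1)
#define OUT_W (IN_W - K + 1)

static void conv2d_relu(const float in[IN_H][IN_W],
                        const float kernel[K][K],
                        float bias,
                        float out[OUT_H][OUT_W])
{
    for (int r = 0; r < OUT_H; ++r) {
        for (int c = 0; c < OUT_W; ++c) {
            float acc = bias;
            /* Each output node sums a K x K neighborhood of the input,
             * weighted by the shared kernel. */
            for (int i = 0; i < K; ++i)
                for (int j = 0; j < K; ++j)
                    acc += in[r + i][c + j] * kernel[i][j];
            out[r][c] = acc > 0.0f ? acc : 0.0f;   /* ReLU */
        }
    }
}

int main(void)
{
    float in[IN_H][IN_W];
    for (int r = 0; r < IN_H; ++r)
        for (int c = 0; c < IN_W; ++c)
            in[r][c] = (float)(r + c);             /* toy input "image" */

    /* Toy 3x3 averaging kernel; a trained network would learn these weights. */
    float kernel[K][K];
    for (int i = 0; i < K; ++i)
        for (int j = 0; j < K; ++j)
            kernel[i][j] = 1.0f / (K * K);

    float out[OUT_H][OUT_W];
    conv2d_relu(in, kernel, 0.0f, out);

    for (int r = 0; r < OUT_H; ++r) {
        for (int c = 0; c < OUT_W; ++c)
            printf("%6.2f ", out[r][c]);
        printf("\n");
    }
    return 0;
}
```

Stacking many such layers, each with many kernels and channels, is what makes CNN inference so compute- and bandwidth-hungry, and it is exactly this workload that a framework like CDNN must map efficiently onto an embedded vision processor.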
To read the full article, click here